id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
yzhuang/autotree_automl_10000_Higgs_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T06:25:57.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 89 | 2023-09-07T06:25:49 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 288073393
dataset_size: 472880000
---
# Dataset Card for "autotree_automl_10000_Higgs_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 843 | [
[
-0.03125,
-0.01425933837890625,
0.022216796875,
0.0126495361328125,
-0.006671905517578125,
0.01143646240234375,
0.04083251953125,
-0.00565338134765625,
0.052886962890625,
0.024932861328125,
-0.0501708984375,
-0.046783447265625,
-0.049774169921875,
0.00233840... |
yzhuang/autotree_pmlb_10000_twonorm_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T06:47:08.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 89 | 2023-09-07T06:47:00 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 144253019
dataset_size: 472880000
---
# Dataset Card for "autotree_pmlb_10000_twonorm_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 843 | [
[
-0.0294647216796875,
-0.01473236083984375,
0.01097869873046875,
0.0310821533203125,
-0.0194854736328125,
0.01473236083984375,
0.049774169921875,
-0.002178192138671875,
0.0548095703125,
0.032135009765625,
-0.060272216796875,
-0.03399658203125,
-0.049468994140625,... |
yzhuang/autotree_automl_10000_heloc_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T07:33:56.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 89 | 2023-09-07T07:33:51 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 80234603
dataset_size: 472880000
---
# Dataset Card for "autotree_automl_10000_heloc_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 842 | [
[
-0.03216552734375,
-0.00975799560546875,
0.0167083740234375,
0.021942138671875,
-0.0137786865234375,
0.022735595703125,
0.0447998046875,
-0.0035953521728515625,
0.055023193359375,
0.0262451171875,
-0.058197021484375,
-0.053741455078125,
-0.054107666015625,
0... |
ShengbinYue/DISC-Law-SFT | 2023-09-25T14:47:18.000Z | [
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"legal",
"arxiv:2309.11325",
"region:us"
] | ShengbinYue | null | null | 28 | 89 | 2023-09-23T07:56:07 | ---
language:
- zh
tags:
- legal
size_categories:
- 100M<n<1B
license: apache-2.0
---
# DISC-Law-SFT Dataset
Intelligent legal systems in Chinese require a combination of various abilities, including legal text understanding and generation. To achieve this, we have constructed a high-quality supervised fine-tuning dataset called DISC-Law-SFT, which covers different legal scenarios such as legal information extraction, legal judgment prediction, legal document summarization, and legal question answering. DISC-Law-SFT comprises two subsets, DISC-Law-SFT-Pair and DISC-Law-SFT-Triplet. The former aims to introduce legal reasoning abilities to the LLM, while the latter helps enhance the model's capability to utilize external legal knowledge. For more detailed information, please refer to our [technical report](https://arxiv.org/abs/2309.11325). The distribution of the dataset is:
<table>
<tr>
<th>Dataset</th>
<th>Task/Source</th>
<th>Size</th>
<th>Scenario</th>
</tr>
<tr>
<td rowspan="10">DISC-Law-SFT-Pair</td>
<td>Legal information extraction</td>
<td>32K</td>
<td rowspan="7">Legal professional assistant</td>
</tr>
<tr>
<td>Legal event detection</td>
<td>27K</td>
</tr>
<tr>
<td>Legal case classification</td>
<td>20K</td>
</tr>
<tr>
<td>Legal judgement prediction</td>
<td>11K</td>
</tr>
<tr>
<td>Legal case matching</td>
<td>8K</td>
</tr>
<tr>
<td>Legal text summarization</td>
<td>9K</td>
</tr>
<tr>
<td>Judicial public opinion summarization</td>
<td>6K</td>
</tr>
<tr>
<td>Legal question answering</td>
<td>93K</td>
<td>Legal consultation services</td>
</tr>
<tr>
<td>Legal reading comprehension</td>
<td>38K</td>
<td rowspan="2">Judicial examination assistant</td>
</tr>
<tr>
<td>Judicial examination</td>
<td>12K</td>
</tr>
<tr>
<td rowspan="2">DISC-Law-SFT-Triple</td>
<td>Legal judgement prediction</td>
<td>16K</td>
<td>Legal professional assistant</td>
</tr>
<tr>
<td>Legal question answering</td>
<td>23K</td>
<td>Legal consultation services</td>
</tr>
<tr>
<td rowspan="2">General</td>
<td>Alpaca-GPT4</td>
<td>48K</td>
<td rowspan="2">General scenarios</td>
</tr>
<tr>
<td>Firefly</td>
<td>60K</td>
</tr>
<tr>
<td>Total</td>
<td colspan="3">403K</td>
</tr>
</table>
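The per-task sizes in the table above can be tallied to confirm the reported 403K total. A quick sketch, with all sizes (in thousands of samples) taken directly from the table:

```python
# Per-task sizes in thousands of samples, as listed in the table above.
disc_law_sft_pair = {
    "Legal information extraction": 32,
    "Legal event detection": 27,
    "Legal case classification": 20,
    "Legal judgement prediction": 11,
    "Legal case matching": 8,
    "Legal text summarization": 9,
    "Judicial public opinion summarization": 6,
    "Legal question answering": 93,
    "Legal reading comprehension": 38,
    "Judicial examination": 12,
}
disc_law_sft_triplet = {
    "Legal judgement prediction": 16,
    "Legal question answering": 23,
}
general = {"Alpaca-GPT4": 48, "Firefly": 60}

total_k = (sum(disc_law_sft_pair.values())
           + sum(disc_law_sft_triplet.values())
           + sum(general.values()))
print(total_k)  # 403, matching the "Total" row
```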
We currently open-source most of the DISC-Law-SFT Dataset.
For more details and news, check our [homepage](https://github.com/FudanDISC/DISC-LawLLM)! | 2,610 | [
[
-0.0280914306640625,
-0.03375244140625,
0.0234222412109375,
0.0231475830078125,
-0.035888671875,
-0.0169525146484375,
-0.0037136077880859375,
-0.0195770263671875,
0.00014317035675048828,
0.07281494140625,
-0.041717529296875,
-0.063720703125,
-0.0234527587890625,... |
lberglund/reversal_curse | 2023-09-25T15:33:57.000Z | [
"language:en",
"license:mit",
"arxiv:2309.12288",
"region:us"
] | lberglund | null | null | 0 | 89 | 2023-09-25T15:06:42 | ---
license: mit
language:
- en
---
# Dataset Card for Dataset Name
## Dataset Description
- **Repository: https://github.com/lukasberglund/reversal_curse**
- **Paper: https://arxiv.org/abs/2309.12288**
### Dataset Summary
Datasets used for experiments 1, 2, and 3 from the reversal curse paper.
1. Experiment 1 uses `name_description_dataset`
2. Experiment 2 uses `celebrity_relations`
3. Experiment 3 uses `instruction_dataset`
| 440 | [
[
-0.0181732177734375,
-0.038421630859375,
0.00026154518127441406,
0.01416015625,
-0.040771484375,
-0.006717681884765625,
0.00952911376953125,
-0.0187225341796875,
0.060516357421875,
0.068603515625,
-0.046722412109375,
-0.0340576171875,
-0.044158935546875,
0.0... |
MLNTeam-Unical/NFT-70M_transactions | 2023-10-03T07:15:49.000Z | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | MLNTeam-Unical | null | null | 3 | 89 | 2023-09-26T15:48:21 | ---
dataset_info:
features:
- name: num_sales
dtype: int64
- name: fees_seller
dtype: float64
- name: fees_opensea
dtype: float64
- name: fees_seller_usd
dtype: float64
- name: fees_opensea_usd
dtype: float64
- name: tx_timestamp
dtype: string
- name: price
dtype: float64
- name: gain
dtype: float64
- name: usd_price
dtype: float64
- name: usd_gain
dtype: float64
- name: token
dtype: string
- name: to_eth
dtype: float64
- name: to_usd
dtype: float64
- name: created_date
dtype: string
- name: chain
dtype: string
- name: token_type
dtype: string
- name: asset_contract_type
dtype: string
- name: asset_type
dtype: string
- name: payout_collection_address
dtype: int64
- name: from_account
dtype: int64
- name: to_account
dtype: int64
- name: seller_account
dtype: int64
- name: winner_account
dtype: int64
- name: contract_address
dtype: int64
- name: nft_image
dtype: int64
- name: collection_image
dtype: int64
- name: token_id
dtype: int64
- name: nft_name
dtype: int64
- name: nft_description
dtype: int64
- name: collection_name
dtype: int64
- name: collection_description
dtype: int64
splits:
- name: train
num_bytes: 21291348001
num_examples: 70972143
download_size: 6633664673
dataset_size: 21291348001
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_transactions
---
# Dataset Card for "NFT-70M_transactions"
## Dataset summary
The *NFT-70M_transactions* dataset is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io), the leading trading platform in the Web3 ecosystem.
With more than 70M transactions enriched with metadata, this dataset is conceived to support a wide range of tasks, ranging from sequential and transactional data processing/analysis to graph-based modeling of the complex relationships between traders.
Besides, the availability of textual and image contents further amplifies the modeling capabilities and usage opportunities of this dataset, making it a unique and comprehensive multimodal source of information for delving into the NFT landscape.
This dataset can serve as a benchmark for various innovative and impactful tasks within the crypto landscape, such as forecasting NFT prices or detecting fraudulent and wash trading activities.
Furthermore, the multimodal nature of the dataset fosters the development of classification models, as well as textual and visual generative models.
## Data anonymization
We point out that the collected NFT transactions and metadata from OpenSea are publicly distributed on blockchain.
For our purposes of re-distribution, we are also committed to ensure non-disclosure of information that might lead to identifying the NFT creators, in order to be compliant with privacy-preserving requirements and to avoid violation of data protection regulations and of property rights.
In this respect, we carried out three actions:
- Values of all variables describing non-sensitive information were kept in their original form;
- Values of all variables describing sensitive information were anonymized, in a one-way, non-revertible mode;
- URLs of image data and textual contents (i.e., NFT images and their descriptions) were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) of the image/text contents obtained via neural network models. Such embeddings are eventually provided in place of their original image and text data,
and can be found in the [**NFT-70M_image**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_image) and [**NFT-70M_text**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_text) supplementary datasets, respectively.
## Data Fields
| Variable | Type | Description | Processing | Notes |
|--------------------------|-------------|-----------------------------------------------------------------------------------------------------------|------------------|-----------------------------------|
| token_id | String | The id of the NFT — this value is unique within the same collection | Anonymized | Original values were replaced by hash-codes |
| num_sales | Integer | A progressive integer indicating the number of successful transactions involving the NFT up to the current timestamp (cf. *tx_timestamp*) | Original | Not sensitive variable |
| nft_name | Vector ID | The name of the NFT | Anonymized | Original values were encrypted via neural textual embedding |
| nft_description | Vector ID | The description of the NFT as provided by the creator | Anonymized | Original values were encrypted via neural textual embedding |
| nft_image | Vector ID | The ID for accessing the NFT image vector | Anonymized | Original values were encrypted via neural visual embedding |
| collection_name | Vector ID | The ID for accessing the Collection name vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_description | Vector ID | The ID for accessing the Collection description vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_image | Vector ID | The ID for accessing the Collection image vector | Anonymized | Original values were encrypted via neural visual embedding |
| fees_seller | Float | The absolute amount of fees the seller has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_opensea | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_seller_usd | Float | The absolute amount of fees the seller has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| fees_opensea_usd | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| payout_collection_address| String | The wallet address where seller fees are deposited | Anonymized | Original values were replaced by hash-codes |
| tx_timestamp | String | Timestamp of the transaction expressed in yyyy-mm-ddTHH:MM:SS | Original | Not sensitive variable |
| price | Float | The price of the transaction expressed in token | Original | Not sensitive variable |
| gain | Float | The gain after fees (i.e., gain = price - fees_opensea * price - fees_seller * price) | Original | Not sensitive variable |
| usd_price | Float | The price of the transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| usd_gain | Float | The difference between the price and the fees expressed in US dollars (USD) | Original | Not sensitive variable |
| token | Categorical | The token type used to pay the transaction | Original | Not sensitive variable |
| to_eth | Float | The conversion rate to convert tokens into Ethereum at the current timestamp, such that eth = price * to_eth | Original | Not sensitive variable |
| to_usd | Float | The conversion rate to convert tokens into US dollars (USD) at the current timestamp, such that usd = price * to_usd | Original | Not sensitive variable |
| from_account | String | The address that sends the payment (i.e., winner/buyer) | Anonymized | Original values were replaced by hash-codes |
| to_account | String | The address that receives the payment (it often corresponds to the contract linked to the asset) | Anonymized | Original values were replaced by hash-codes |
| seller_account | String | The address of the NFT seller | Anonymized | Original values were replaced by hash-codes |
| winner_account | String | The address of the NFT buyer | Anonymized | Original values were replaced by hash-codes |
| contract_address | String | The contract address on the blockchain | Anonymized | Original values were replaced by hash-codes |
| created_date | Timestamp | The date of creation of the contract | Original | Not sensitive variable |
| chain | Categorical | The blockchain where the transaction occurs | Original | Not sensitive variable |
| token_type | Categorical | The schema of the token, i.e., ERC721 or ERC1155 | Original | Not sensitive variable |
| asset_contract_type | Categorical | The asset typology, i.e., non-fungible or semi-fungible | Original | Not sensitive variable |
| asset_type | Categorical | Whether the asset was involved in a simple or bundle transaction | Original | Not sensitive variable |
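The fee and conversion relationships described in the table (gain = price − fees_opensea·price − fees_seller·price; eth = price·to_eth; usd = price·to_usd) can be sketched as a small helper. This is an illustrative reconstruction from the field descriptions, not code shipped with the dataset; following the `gain` formula, it treats the fee arguments as rates rather than absolute amounts.

```python
def transaction_gains(price, fees_seller_rate, fees_opensea_rate, to_eth, to_usd):
    """Recompute derived fields of one NFT transaction from the card's formulas.

    Illustrative only: the rate interpretation follows the `gain` formula in
    the field table (gain = price - fees_opensea*price - fees_seller*price).
    """
    gain = price - fees_opensea_rate * price - fees_seller_rate * price
    return {
        "gain": gain,                  # gain in the payment token
        "eth": price * to_eth,         # eth = price * to_eth
        "usd_price": price * to_usd,   # usd = price * to_usd
        "usd_gain": gain * to_usd,     # gain converted to USD
    }

# Hypothetical example: 10 tokens, 2.5% OpenSea fee, 5% seller fee
derived = transaction_gains(price=10.0, fees_seller_rate=0.05,
                            fees_opensea_rate=0.025, to_eth=1.0, to_usd=1600.0)
print(derived["gain"])  # 9.25
```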
## How to use
Data provided within this repository can be straightforwardly loaded via the *datasets* library as follows:
```python
from datasets import load_dataset
dataset = load_dataset("MLNTeam-Unical/NFT-70M_transactions")
```
Complementary data involving textual and visual embeddings can be integrated as follows:
```python
from datasets import load_dataset
import numpy as np

transactions_dataset = load_dataset("MLNTeam-Unical/NFT-70M_transactions")
image_dataset = load_dataset("MLNTeam-Unical/NFT-70M_image")
text_dataset = load_dataset("MLNTeam-Unical/NFT-70M_text")

# Mapping from image_id to the row index within the image dataset
image_id2row_index = {int(id): k for k, id in enumerate(image_dataset["train"]["id"])}

# Mapping from text_id to the row index within the text dataset
text_id2row_index = {int(id): k for k, id in enumerate(text_dataset["train"]["id"])}

def get_image_embedding(image_id, image_id2row_index, image_dataset):
    # If the mapping contains the image, the embedding exists
    # (compare against None explicitly: row index 0 is a valid match)
    idx_emb = image_id2row_index.get(int(image_id), None)
    if idx_emb is not None:
        # If the embedding exists, return it
        return np.array(image_dataset["train"].select([idx_emb])["emb"][0])
    return None

def get_text_embedding(text_id, text_id2row_index, text_dataset):
    # If the mapping contains the text, the embedding exists
    idx_emb = text_id2row_index.get(int(text_id), None)
    if idx_emb is not None:
        # If the embedding exists, return it
        return np.array(text_dataset["train"].select([idx_emb])["emb"][0])
    return None

### USAGE EXAMPLE ###
# Select a transaction_id
transaction_id = 120

# Get the image_id (e.g., collection_image or nft_image)
id_image = transactions_dataset["train"].select([transaction_id])["collection_image"][0]
# Get the image embedding
image_embedding = get_image_embedding(id_image, image_id2row_index, image_dataset)

# Get the text_id
id_text = transactions_dataset["train"].select([transaction_id])["collection_description"][0]
# Get the text embedding
text_embedding = get_text_embedding(id_text, text_id2row_index, text_dataset)
```
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository is intended to support decision-making regarding actions on NFTs, not to replace human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* | 16,326 | [
[
-0.0372314453125,
-0.061187744140625,
0.00994873046875,
0.0031337738037109375,
-0.033203125,
0.0038585662841796875,
0.006359100341796875,
-0.06060791015625,
0.0428466796875,
0.0545654296875,
-0.0406494140625,
-0.05194091796875,
-0.047119140625,
0.00375747680... |
pbaoo2705/cpgqa_processed | 2023-10-16T06:02:20.000Z | [
"region:us"
] | pbaoo2705 | null | null | 0 | 89 | 2023-10-10T06:53:18 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 9148601
num_examples: 884
download_size: 190231
dataset_size: 9148601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cpgqa_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 611 | [
[
-0.035064697265625,
-0.02264404296875,
0.0257415771484375,
0.0164642333984375,
-0.0207061767578125,
0.00876617431640625,
0.0132598876953125,
-0.00962066650390625,
0.04193115234375,
0.04779052734375,
-0.0570068359375,
-0.05572509765625,
-0.048583984375,
-0.01... |
kejian/odmeeting_oracle_govrep_format | 2023-10-10T23:07:56.000Z | [
"region:us"
] | kejian | null | null | 0 | 89 | 2023-10-10T23:07:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 25751717
num_examples: 261
- name: test
num_bytes: 13445910
num_examples: 131
- name: validation
num_bytes: 4116222
num_examples: 44
download_size: 21987705
dataset_size: 43313849
---
# Dataset Card for "odmeeting_oracle_govrep_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 740 | [
[
-0.0275726318359375,
-0.0272064208984375,
0.0163726806640625,
0.00962066650390625,
-0.01165008544921875,
-0.03436279296875,
0.0041961669921875,
-0.006195068359375,
0.05126953125,
0.059478759765625,
-0.04669189453125,
-0.055389404296875,
-0.024993896484375,
-... |
danjacobellis/musdb | 2023-10-11T16:01:47.000Z | [
"region:us"
] | danjacobellis | null | null | 0 | 89 | 2023-10-11T15:03:04 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: name
dtype: string
- name: mixture
dtype:
audio:
sampling_rate: 44100
mono: false
- name: drums
dtype:
audio:
sampling_rate: 44100
mono: false
- name: bass
dtype:
audio:
sampling_rate: 44100
mono: false
- name: other
dtype:
audio:
sampling_rate: 44100
mono: false
- name: vocals
dtype:
audio:
sampling_rate: 44100
mono: false
splits:
- name: test
num_bytes: 6534850857.0
num_examples: 15
download_size: 4928706537
dataset_size: 6534850857.0
---
# Dataset Card for "musdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 888 | [
[
-0.053009033203125,
-0.01255035400390625,
0.025421142578125,
0.021697998046875,
-0.01183319091796875,
0.01174163818359375,
0.0341796875,
-0.003734588623046875,
0.07635498046875,
0.0355224609375,
-0.068359375,
-0.04986572265625,
-0.04107666015625,
-0.02551269... |
coastalcph/fm_queries_classifier | 2023-10-18T13:36:57.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 89 | 2023-10-11T20:37:31 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1437936
num_examples: 8974
- name: all_fm
num_bytes: 33337568
num_examples: 192165
- name: validation
num_bytes: 960721
num_examples: 5793
- name: test
num_bytes: 1026699
num_examples: 5698
download_size: 1260361
dataset_size: 36762924
---
# Dataset Card for "fm_queries_classifier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 837 | [
[
-0.03765869140625,
-0.0164337158203125,
0.016571044921875,
0.00585174560546875,
-0.0029773712158203125,
-0.01245880126953125,
0.01406097412109375,
-0.02069091796875,
0.044281005859375,
0.03271484375,
-0.05340576171875,
-0.0498046875,
-0.033599853515625,
-0.0... |
code_x_glue_cc_clone_detection_poj104 | 2023-03-13T11:02:07.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:code",
"license:c-uda",
"region:us"
] | null | Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.
We use POJ-104 dataset on this task. | @inproceedings{mou2016convolutional,
title={Convolutional neural networks over tree structures for programming language processing},
author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
pages={1287--1293},
year={2016}
} | 3 | 88 | 2022-03-02T23:29:22 | ---
pretty_name: CodeXGlueCcCloneDetectionPoj104
annotations_creators:
- found
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
dataset_info:
features:
- name: id
dtype: int32
- name: code
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 20179075
num_examples: 32500
- name: validation
num_bytes: 6382433
num_examples: 8500
- name: test
num_bytes: 7227506
num_examples: 12000
download_size: 8658581
dataset_size: 33789014
---
# Dataset Card for "code_x_glue_cc_clone_detection_poj_104"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
### Dataset Summary
CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104
Given a piece of code and a collection of candidates as input, the task is to return the top-K codes with the same semantics. Models are evaluated by MAP score.
We use the POJ-104 dataset for this task.
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for retrieving top-k codes with the same semantics.
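The MAP metric mentioned above can be sketched as follows. This is a generic mean-average-precision implementation for illustration, not the official CodeXGLUE evaluation script:

```python
def average_precision(retrieved, relevant):
    """AP for one query: `retrieved` is a ranked list of code ids,
    `relevant` the set of ids sharing the query's semantics (same label)."""
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            hits += 1
            score += hits / rank  # precision at each relevant hit
    return score / max(len(relevant), 1)

def mean_average_precision(runs):
    """`runs` is a list of (retrieved_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example with two queries
runs = [
    (["a", "b", "c"], {"a", "c"}),  # AP = (1/1 + 2/3) / 2 = 5/6
    (["x", "y"], {"y"}),            # AP = (1/2) / 1 = 1/2
]
print(round(mean_average_precision(runs), 3))  # 0.667
```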
### Languages
- C++ programming language
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"code": "\nint f(int shu,int min)\n{ \n int k=1;\n if(shu < min)\n { \n k= 0; \n return k;\n } \n else\n {\n for(int i = min;i<shu;i++)\n { \n if(shu%i == 0)\n { \n k=k+ f(shu/i,i); \n } \n \n \n } \n return k; \n}\n} \n\nmain()\n{\n int n,i,a;\n scanf(\"%d\",&n);\n \n for(i=0;i<n;i++)\n {\n scanf(\"%d\",&a);\n \n if(i!=n-1) \n printf(\"%d\\n\",f(a,2));\n else\n printf(\"%d\",f(a,2)); \n \n \n \n } \n \n \n }",
"id": 0,
"label": "home"
}
```
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|----------------------------------------------|
|id |int32 | Index of the sample |
|code |string| The full text of the function |
|label |string| The id of problem that the source code solves|
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|32000| 8000|12000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@inproceedings{mou2016convolutional,
title={Convolutional neural networks over tree structures for programming language processing},
author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
pages={1287--1293},
year={2016}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. | 5,244 | [
[
-0.0249481201171875,
-0.043365478515625,
0.01433563232421875,
0.0014543533325195312,
-0.010406494140625,
0.0089111328125,
-0.0236663818359375,
-0.01207733154296875,
0.027099609375,
0.03118896484375,
-0.037384033203125,
-0.06878662109375,
-0.046966552734375,
... |
eitb_parcc | 2022-11-03T16:15:31.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:es",
"language:eu",
"license:unknown",
"region:us"
] | null | EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista. | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | 1 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- eu
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: eitb-parcc
pretty_name: EiTB-ParCC
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- es
- eu
config_name: es-eu
splits:
- name: train
num_bytes: 139039398
num_examples: 637183
download_size: 57244346
dataset_size: 139039398
---
# Dataset Card for EiTB-ParCC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [EiTB-ParCC: Parallel Corpus of Comparable News](http://opus.nlpl.eu/EiTB-ParCC.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista.
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
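The YAML header above declares a single `translation` feature over the `es`/`eu` pair for the `es-eu` config, so a record is expected to have the following shape (the sentence pair below is an invented placeholder, not actual corpus text):

```python
# Record shape implied by the features block in the YAML header; the
# sentence pair is a placeholder, not taken from the corpus.
example = {
    "translation": {
        "es": "Texto de ejemplo en español.",
        "eu": "Adibidezko testua euskaraz.",
    }
}

# Each side of the aligned pair is accessed by its language code.
source = example["translation"]["es"]
target = example["translation"]["eu"]
assert set(example["translation"]) == {"es", "eu"}
```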
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. | 3,921 | [
giga_fren | 2022-11-03T16:15:21.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"language:fr",
"license:unknown",
"region:us"
] | null | Giga-word corpus for French-English from WMT2010 collected by Chris Callison-Burch
2 languages, total number of files: 452
total number of tokens: 1.43G
total number of sentence fragments: 47.55M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- fr
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: GigaFren
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
config_name: en-fr
splits:
- name: train
num_bytes: 8690296821
num_examples: 22519904
download_size: 2701536198
dataset_size: 8690296821
---
# Dataset Card for GigaFren
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/giga-fren.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance pairs an English sentence with its French translation, keyed by an `id` string (see the features block in the YAML header).
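Based on the declared features (`id` string plus an `en`/`fr` translation dict), a record is expected to look like this (illustrative values, not taken from the corpus):

```python
# Record shape implied by the features in the YAML header; the sentence
# pair is an invented placeholder, not an actual corpus entry.
example = {
    "id": "0",
    "translation": {
        "en": "Hello, world.",
        "fr": "Bonjour, le monde.",
    },
}

# Each side of the aligned pair is accessed by its language code.
assert set(example["translation"]) == {"en", "fr"}
```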
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. | 3,258 | [
hausa_voa_ner | 2023-01-25T14:31:51.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ha",
"license:cc-by-4.0",
"region:us"
] | null | The Hausa VOA NER dataset is a labeled dataset for named entity recognition in Hausa. The texts were obtained from
Hausa Voice of America News articles https://www.voahausa.com/ . We concentrate on
four types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE].
The Hausa VOA NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and
there is an empty line after each sentences i.e the CoNLL format. The first item on each line is a word, the second
is the named entity tag. The named entity tags have the format I-TYPE which means that the word is inside a phrase
of type TYPE. For every multi-word expression like 'New York', the first word gets a tag B-TYPE and the subsequent words
have tags I-TYPE, a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme.
For more details, see https://www.aclweb.org/anthology/2020.emnlp-main.204/ | @inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
pages = "2580--2591",
} | 2 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ha
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Hausa VOA NER Corpus
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
config_name: hausa_voa_ner
splits:
- name: train
num_bytes: 483634
num_examples: 1015
- name: validation
num_bytes: 69673
num_examples: 146
- name: test
num_bytes: 139227
num_examples: 292
download_size: 324962
dataset_size: 692534
---
# Dataset Card for Hausa VOA NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Repository:** [Hausa VOA NER](https://github.com/uds-lsv/transfer-distant-transformer-african/tree/master/data/hausa_ner)
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Leaderboard:**
- **Point of Contact:** [David Adelani](mailto:didelani@lsv.uni-saarland.de)
### Dataset Summary
The Hausa VOA NER is a named entity recognition (NER) dataset for the Hausa language based on the [VOA Hausa news](https://www.voahausa.com/) corpus.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Hausa.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by an empty line, with tab-separated tokens and tags within each sentence.
{'id': '0',
 'ner_tags': [1, 0, 0, 5, 0],
 'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma']}

Here the `ner_tags` values are class ids corresponding to `[B-PER, O, O, B-LOC, O]`.
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). (O) is used for tokens not considered part of any named entity.
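The integer `ner_tags` in each record index into this tag list (the mapping below is taken from the `class_label` names in the YAML header); decoding the instance shown above is a one-liner:

```python
# Tag list as declared in the YAML header (index = class id).
NER_TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
            "B-LOC", "I-LOC", "B-DATE", "I-DATE"]

# Tokens and tag ids from the data instance shown above.
tokens = ["Trump", "ya", "ce", "Rasha", "ma"]
tag_ids = [1, 0, 0, 5, 0]

# Map each class id back to its human-readable BIO tag.
labels = [NER_TAGS[i] for i in tag_ids]
print(list(zip(tokens, labels)))
# [('Trump', 'B-PER'), ('ya', 'O'), ('ce', 'O'), ('Rasha', 'B-LOC'), ('ma', 'O')]
```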
### Data Splits
Training (1,015 sentences), validation (146 sentences) and test (292 sentences) splits, as declared in the YAML header above.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources to new language - Hausa.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is based on the news domain and was crawled from [VOA Hausa news](https://www.voahausa.com/).
[More Information Needed]
#### Who are the source language producers?
The dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, Niger Republic, United States, and other parts of the world.
[More Information Needed]
### Annotations
Named entity recognition annotation
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated by Jesujoba Alabi and David Adelani for the paper:
[Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages](https://www.aclweb.org/anthology/2020.emnlp-main.204/).
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by students of Saarland University, Saarbrücken, Germany.
### Licensing Information
The data is under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
pages = "2580--2591",
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. | 6,010 | [
hda_nli_hindi | 2023-01-25T14:31:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|hindi_discourse",
"language:hi",
"license:mit",
"region:us"
] | null | This dataset is a recasted version of the Hindi Discourse Analysis Dataset used to train models for Natural Language Inference Tasks in Low-Resource Languages like Hindi. | @inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
} | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- hi
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|hindi_discourse
task_categories:
- text-classification
task_ids:
- natural-language-inference
pretty_name: Hindi Discourse Analysis Dataset
dataset_info:
- config_name: HDA hindi nli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-entailment
'1': entailment
- name: topic
dtype:
class_label:
names:
'0': Argumentative
'1': Descriptive
'2': Dialogic
'3': Informative
'4': Narrative
splits:
- name: train
num_bytes: 8721972
num_examples: 31892
- name: validation
num_bytes: 2556118
num_examples: 9460
- name: test
num_bytes: 2646453
num_examples: 9970
download_size: 13519261
dataset_size: 13924543
- config_name: hda nli hindi
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-entailment
'1': entailment
- name: topic
dtype:
class_label:
names:
'0': Argumentative
'1': Descriptive
'2': Dialogic
'3': Informative
'4': Narrative
splits:
- name: train
num_bytes: 8721972
num_examples: 31892
- name: validation
num_bytes: 2556118
num_examples: 9460
- name: test
num_bytes: 2646453
num_examples: 9970
download_size: 13519261
dataset_size: 13924543
---
# Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **HomePage:** [GitHub](https://github.com/midas-research/hindi-nli-data)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.aacl-main.71)
- **Point of Contact:** [GitHub](https://github.com/midas-research/hindi-nli-data)
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns: Premise, Hypothesis, Label and Topic.
- Premise and Hypothesis are written in Hindi, while the entailment label is in English.
- The entailment label is of 2 types: entailment and not-entailment.
- Entailment means the hypothesis can be inferred from the premise; not-entailment means it cannot.
- The dataset can be used to train models for Natural Language Inference tasks in the Hindi language.
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
- Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- The train, test and dev splits are in separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1}
```
### Data Fields
Each row contains 4 columns:
- premise: string
- hypothesis: string
- label: class label with values that correspond to "not-entailment" (0) or "entailment" (1)
- topic: class label with values that correspond to "Argumentative" (0), "Descriptive" (1), "Dialogic" (2), "Informative" (3) or "Narrative" (4).
### Data Splits
- Train : 31892
- Valid : 9460
- Test : 9970
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems
- In this recasting process, we build template hypotheses for each class in the label taxonomy
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.
- For more information on the recasting process, refer to paper https://www.aclweb.org/anthology/2020.aacl-main.71
### Source Data
The source dataset for the recasting process is the Hindi Discourse Analysis dataset from MIDAS Lab (https://www.aclweb.org/anthology/2020.lrec-1.149/).
#### Initial Data Collection and Normalization
- Initial data was collected by members of MIDAS Lab from Hindi websites. They crowdsourced the data annotation process: two random stories were selected from the corpus, and three annotators worked on them independently, classifying each sentence by its discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
- The Discourse is further classified into "Argumentative" , "Descriptive" , "Dialogic" , "Informative" and "Narrative" - 5 Clases.
#### Who are the source language producers?
Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
### Annotations
#### Annotation process
Annotation process has been described in Dataset Creation Section.
#### Who are the annotators?
Annotation is done automatically by machine and corresponding recasting process.
### Personal and Sensitive Information
No Personal and Sensitive Information is mentioned in the Datasets.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
No known biases exist in the dataset.
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations, though the size of the data may not be enough to train large models.
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
It is written in the repo https://github.com/midas-research/hindi-nli-data that:
- This corpus can be used freely for research purposes.
- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
### Contributions
Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset. | 10,244 | [
hrwac | 2022-11-03T16:15:15.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:hr",
... | null | The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian).
Version 2.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 2.1 contains newer and better linguistic annotations. | @misc{11356/1064,
title = {Croatian web corpus {hrWaC} 2.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1064},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} } | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- hr
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: HrWac
dataset_info:
features:
- name: sentence
dtype: string
config_name: hrwac
splits:
- name: train
num_bytes: 43994569015
num_examples: 1736944727
download_size: 9217221471
dataset_size: 43994569015
---
# Dataset Card for HrWac
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrwac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1064
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic11-hrwac.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian).
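The near-deduplication step mentioned above can be illustrated with a minimal sketch; exact hashing of normalised paragraphs stands in here for the actual near-dedup procedure, whose details are not given in this card:

```python
import hashlib
import re

def normalise(paragraph: str) -> str:
    # Lowercase and collapse whitespace so trivially different copies collide.
    return re.sub(r"\s+", " ", paragraph.lower()).strip()

def dedup(paragraphs):
    # Keep only the first occurrence of each normalised paragraph.
    seen, kept = set(), []
    for p in paragraphs:
        h = hashlib.md5(normalise(p).encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(p)
    return kept

print(dedup(["Dobar dan!", "Dobar   dan!", "Hvala lijepa."]))
# ['Dobar dan!', 'Hvala lijepa.']
```

Real near-deduplication is fuzzier than exact hashing (it also catches paragraphs that differ by a few characters), but the keep-first-occurrence structure is the same.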
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Dataset is monolingual in Croatian language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: sentences as strings
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1064,
title = {Croatian web corpus {hrWaC} 2.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1064},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. | 3,986 | [
[
-0.027587890625,
-0.0467529296875,
0.0027332305908203125,
0.0297088623046875,
-0.01983642578125,
-0.002323150634765625,
-0.033233642578125,
-0.033538818359375,
0.022674560546875,
0.033111572265625,
-0.0716552734375,
-0.08349609375,
-0.041656494140625,
0.0320... |
opus_tedtalks | 2022-11-03T16:15:24.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:hr",
"license:unknown",
"region:us"
] | null | This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic provided under the CC-BY-NC-SA license.
2 languages, total number of files: 2
total number of tokens: 2.81M
total number of sentence fragments: 0.17M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- hr
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusTedtalks
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hr
config_name: en-hr
splits:
- name: train
num_bytes: 15249417
num_examples: 86348
download_size: 5639306
dataset_size: 15249417
---
# Dataset Card for OpusTedtalks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/TedTalks.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic provided under the CC-BY-NC-SA license. The corpus is sentence-aligned for the English-Croatian language pair; the documents were collected and aligned using the Hunalign algorithm.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
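Based on the feature schema declared in the YAML header (`id` plus an `en`/`hr` translation dict), a record presumably looks like the following; the sentence text is invented for illustration:

```python
# Hypothetical record shape inferred from the `en-hr` feature schema above;
# the sentences are invented for illustration.
example = {
    "id": "0",
    "translation": {
        "en": "Thank you so much, Chris.",
        "hr": "Hvala ti puno, Chris.",
    },
}

assert set(example["translation"]) == {"en", "hr"}
print(example["translation"]["hr"])
```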
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-NC-SA 3.0](http://creativecommons.org/licenses/by-nc-sa/3.0/)
### Citation Information
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. | 4,292 | [
[
-0.0242767333984375,
-0.0396728515625,
0.0184478759765625,
0.02880859375,
-0.0229034423828125,
0.01325225830078125,
-0.049224853515625,
-0.0290679931640625,
0.0347900390625,
0.0237579345703125,
-0.048553466796875,
-0.06939697265625,
-0.04608154296875,
0.0163... |
psc | 2023-01-25T14:42:57.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-3.0",
"region:us"
] | null | The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives. | @inproceedings{ogro:kop:14:lrec,
title={The {P}olish {S}ummaries {C}orpus},
author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz},
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
year = "2014",
} | 1 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
pretty_name: psc
dataset_info:
features:
- name: extract_text
dtype: string
- name: summary_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 5026582
num_examples: 4302
- name: test
num_bytes: 1292103
num_examples: 1078
download_size: 2357808
dataset_size: 6318685
---
# Dataset Card for the Polish Summaries Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- extract_text: text to summarise
- summary_text: summary of extracted text
- label: 1 indicates summary is similar, 0 means that it is not similar
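The negative-sampling scheme described in the summary (taking the most similar summary of a *different* article as the negative) can be sketched as follows; bag-of-words cosine similarity is an assumption here, not necessarily the similarity measure the authors used:

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    # Bag-of-words cosine similarity over whitespace tokens.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def hardest_negative(anchor: str, other_summaries):
    # The most similar summary of a different article becomes the negative pair.
    return max(other_summaries, key=lambda s: cosine(anchor, s))

anchor = "rząd przyjął nowy budżet"
others = ["nowy budżet trafił do sejmu", "mecz zakończył się remisem"]
print(hardest_negative(anchor, others))
# nowy budżet trafił do sejmu
```

Such "hard" negatives make the binary classification task more informative than randomly sampled negatives would.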
### Data Splits
The data is split into train and test sets. The test set does not have a label column, so -1 is set instead.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-SA 3.0
### Citation Information
@inproceedings{ogro:kop:14:lrec,
title={The {P}olish {S}ummaries {C}orpus},
author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz},
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
year = "2014",
}
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. | 3,747 | [
[
-0.0323486328125,
-0.04498291015625,
0.0193023681640625,
0.0196380615234375,
-0.022705078125,
0.00620269775390625,
-0.032501220703125,
-0.033721923828125,
0.04718017578125,
0.034942626953125,
-0.05743408203125,
-0.074951171875,
-0.051300048828125,
0.01811218... |
telugu_news | 2023-01-25T14:45:35.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"annotations_creators:machine-generated",
"language_creato... | null | This dataset contains Telugu language news articles along with respective
topic labels (business, editorial, entertainment, nation, sport) extracted from
the daily Andhra Jyoti. This dataset could be used to build Classification and Language Models. | @InProceedings{kaggle:dataset,
title = {Telugu News - Natural Language Processing for Indian Languages},
authors={Sudalai Rajkumar, Anusha Motamarri},
year={2019}
} | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- te
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modeling
- multi-class-classification
- topic-classification
pretty_name: TeluguNews
dataset_info:
features:
- name: sno
dtype: int32
- name: date
dtype: string
- name: heading
dtype: string
- name: body
dtype: string
- name: topic
dtype:
class_label:
names:
'0': business
'1': editorial
'2': entertainment
'3': nation
'4': sports
splits:
- name: train
num_bytes: 69400234
num_examples: 17312
- name: test
num_bytes: 17265514
num_examples: 4329
download_size: 0
dataset_size: 86665748
---
# Dataset Card for Telugu News
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
- **Repository:** https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
### Dataset Summary
This dataset contains Telugu language news articles along with respective topic
labels (business, editorial, entertainment, nation, sport) extracted from the daily Andhra Jyoti.
This dataset could be used to build Classification and Language Models.
### Supported Tasks and Leaderboards
Multi-class classification, topic classification, language modeling
### Languages
TE - Telugu, India
## Dataset Structure
### Data Instances
Two CSV files (train, test) with five columns (sno, date, heading, body, topic).
### Data Fields
- sno: id
- date: publish date of the news article
- heading: article heading/title
- body: article body/content
- topic: one of the following topics (business, editorial, entertainment, nation, sport)
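The integer topic labels map to names as declared by the `class_label` feature in the YAML header (note it spells the last class `sports`):

```python
# Topic label ids as declared in the `class_label` feature in the YAML header.
TOPICS = ["business", "editorial", "entertainment", "nation", "sports"]

def id_to_topic(label_id: int) -> str:
    return TOPICS[label_id]

def topic_to_id(topic: str) -> int:
    return TOPICS.index(topic)

print(id_to_topic(3))         # nation
print(topic_to_id("sports"))  # 4
```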
### Data Splits
Train and Test
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
- https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news
- https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset
#### Initial Data Collection and Normalization
The source data consists of articles scraped from the archives of the Telugu newspaper website Andhra Jyoti.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Sudalai Rajkumar, Anusha Motamarri
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{kaggle:dataset,
title = {Telugu News - Natural Language Processing for Indian Languages},
authors={Sudalai Rajkumar, Anusha Motamarri},
year={2019}
}
```
### Contributions
Thanks to [@oostopitre](https://github.com/oostopitre) for adding this dataset. | 4,403 | [
[
-0.0149078369140625,
-0.050689697265625,
0.0012273788452148438,
0.0313720703125,
-0.0300445556640625,
0.0165252685546875,
-0.03277587890625,
-0.024017333984375,
0.040985107421875,
0.0213623046875,
-0.0400390625,
-0.060882568359375,
-0.053741455078125,
0.0160... |
times_of_india_news_headlines | 2022-11-03T16:15:42.000Z | [
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"task_ids:fact-checking-retrieval",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<... | null | This news dataset is a persistent historical archive of noteable events in the Indian subcontinent from start-2001 to mid-2020, recorded in realtime by the journalists of India. It contains approximately 3.3 million events published by Times of India. Times Group as a news agency, reaches out a very wide audience across Asia and drawfs every other agency in the quantity of english articles published per day. Due to the heavy daily volume over multiple years, this data offers a deep insight into Indian society, its priorities, events, issues and talking points and how they have unfolded over time. It is possible to chop this dataset into a smaller piece for a more focused analysis, based on one or more facets. | @data{DVN/DPQMQH_2020,
author = {Kulkarni, Rohit},
publisher = {Harvard Dataverse},
title = {{Times of India News Headlines}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/DPQMQH},
url = {https://doi.org/10.7910/DVN/DPQMQH}
} | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
- text-retrieval
task_ids:
- document-retrieval
- fact-checking-retrieval
- text-simplification
paperswithcode_id: null
pretty_name: Times of India News Headlines
dataset_info:
features:
- name: publish_date
dtype: string
- name: headline_category
dtype: string
- name: headline_text
dtype: string
splits:
- name: train
num_bytes: 260939306
num_examples: 3297173
download_size: 0
dataset_size: 260939306
---
# Dataset Card for Times of India News Headlines
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/J7BYRX
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This news dataset is a persistent historical archive of notable events in the Indian subcontinent from start-2001 to mid-2020, recorded in realtime by the journalists of India. It contains approximately 3.3 million events published by Times of India. The Times Group, as a news agency, reaches a very wide audience across Asia and dwarfs every other agency in the quantity of English articles published per day. Due to the heavy daily volume over multiple years, this data offers a deep insight into Indian society, its priorities, events, issues and talking points and how they have unfolded over time. It is possible to chop this dataset into a smaller piece for a more focused analysis, based on one or more facets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'publish_date': '20010530',
'headline_category': city.kolkata,
'headline_text': "Malda fake notes"
}
```
### Data Fields
- `publish_date`: Date of publishing in yyyyMMdd format
- `headline_category`: Category of event in ascii, dot-delimited values
- `headline_text`: Headline of the article in English (2020-07-10)
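The `publish_date` and `headline_category` encodings described above can be unpacked with the standard library; the record below is the one shown under Data Instances:

```python
from datetime import datetime

record = {
    "publish_date": "20010530",
    "headline_category": "city.kolkata",
    "headline_text": "Malda fake notes",
}

# `publish_date` uses the yyyyMMdd layout described above.
date = datetime.strptime(record["publish_date"], "%Y%m%d").date()
# `headline_category` is a dot-delimited path of increasingly specific facets.
facets = record["headline_category"].split(".")

print(date.isoformat())  # 2001-05-30
print(facets)            # ['city', 'kolkata']
```

Filtering on a facet prefix (e.g. everything under `city`) is one way to chop the dataset into a smaller, focused slice.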
### Data Splits
This dataset has no splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Rohit Kulkarni.
### Licensing Information
The data is under the [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@data{DVN/DPQMQH_2020,
author = {Kulkarni, Rohit},
publisher = {Harvard Dataverse},
title = {{Times of India News Headlines}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/DPQMQH},
url = {https://doi.org/10.7910/DVN/DPQMQH}
}
```
### Contributions
Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset. | 4,585 | [
[
-0.0133819580078125,
-0.031097412109375,
0.00494384765625,
0.0340576171875,
-0.0372314453125,
0.0170745849609375,
-0.004852294921875,
-0.0286407470703125,
0.0263671875,
0.0091400146484375,
-0.03985595703125,
-0.043426513671875,
-0.0467529296875,
0.0152511596... |
twi_wordsim353 | 2022-11-03T16:07:57.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:tw",
"li... | null | A translation of the word pair similarity dataset wordsim-353 to Twi.
The dataset was presented in the paper
Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced
Languages: the Case of Yorùbá and Twi (LREC 2020). | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
} | 1 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
- tw
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Yorùbá Wordsim-353
dataset_info:
features:
- name: twi1
dtype: string
- name: twi2
dtype: string
- name: similarity
dtype: float32
splits:
- name: test
num_bytes: 7285
num_examples: 274
download_size: 6141
dataset_size: 7285
---
# Dataset Card for Twi Wordsim-353
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Leaderboard:** -
- **Point of Contact:** [Kwabena Amponsah-Kaakyire](mailto:s8kwampo@stud.uni-saarland.de)
### Dataset Summary
A translation of the word pair similarity dataset wordsim-353 to Twi. However, only 274 (out of 353) pairs of words were translated.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Twi (ISO 639-1: tw)
## Dataset Structure
### Data Instances
An instance consists of a pair of words as well as their similarity. The dataset contains both the original English words (from wordsim-353) as well as their translation to Twi.
### Data Fields
- `twi1`: the first word of the pair; translation to Twi
- `twi2`: the second word of the pair; translation to Twi
- `similarity`: similarity rating according to the English dataset
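Word-pair similarity datasets like this one are typically used to evaluate embeddings by rank-correlating model similarities with the gold ratings. A minimal pure-Python Spearman correlation, assuming no tied values for simplicity, might look like:

```python
def ranks(values):
    # Rank from 1..n; assumes no tied values for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(gold, predicted):
    n = len(gold)
    rg, rp = ranks(gold), ranks(predicted)
    d2 = sum((a - b) ** 2 for a, b in zip(rg, rp))
    return 1 - 6 * d2 / (n * (n * n - 1))

gold = [10.0, 7.5, 3.0, 1.0]       # human similarity ratings
predicted = [0.9, 0.6, 0.4, 0.1]   # cosine similarities from some embedding
print(spearman(gold, predicted))   # 1.0
```

With tied ratings (common in wordsim-style data), a tie-aware implementation such as `scipy.stats.spearmanr` should be used instead.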
### Data Splits
Only the test data is available
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. As output of the work, we provide corpora, embeddings and the test suits for both languages.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. | 6,060 | [
[
-0.03594970703125,
-0.06256103515625,
0.01148223876953125,
0.01371002197265625,
-0.0274505615234375,
0.006320953369140625,
-0.050262451171875,
-0.038665771484375,
0.044708251953125,
0.02410888671875,
-0.037689208984375,
-0.050750732421875,
-0.060333251953125,
... |
adorkin/extended_tweet_emojis | 2023-02-07T12:18:57.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | adorkin | null | null | 1 | 88 | 2022-03-02T23:29:22 | ---
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset combines the `emoji` and `emotion` subsets of [tweet_eval](https://huggingface.co/datasets/tweet_eval). The motivation
is that the original `emoji` subset essentially contains only positive/neutral emojis, while the `emotion` subset covers a more varied
range of emotions. The idea was therefore to replace the emotion labels in the `emotion` subset with corresponding emojis (sad, angry)
and mix the result together with the `emoji` subset.
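That relabeling step can be sketched as follows. This is a minimal illustration, not code from the dataset author; the emotion-to-emoji id mapping below is invented, and the real label spaces of the `tweet_eval` subsets should be checked before reuse:

```python
def relabel(example, emotion_to_emoji):
    """Map an `emotion` label id onto the id of a corresponding emoji class."""
    return {**example, "label": emotion_to_emoji[example["label"]]}

# Hypothetical mapping from tweet_eval `emotion` ids (0=anger, 1=joy,
# 2=optimism, 3=sadness) to made-up emoji-class ids, for illustration only.
emotion_to_emoji = {0: 20, 1: 2, 2: 12, 3: 21}

row = {"text": "so tired of this rain", "label": 3}
relabeled = relabel(row, emotion_to_emoji)  # label 3 (sadness) becomes 21
```

After relabeling, the rows can simply be concatenated with the original `emoji` subset (e.g. with `datasets.concatenate_datasets`).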
### Supported Tasks and Leaderboards
Similar to tweet eval the expected usage is text classification.
### Languages
Only English is present in the dataset.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
Refer to [tweet_eval](https://huggingface.co/datasets/tweet_eval). No additional data was added.
#### Annotation process
Same as tweet eval.
#### Who are the annotators?
Same as tweet eval.
### Personal and Sensitive Information
Same as tweet eval.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,957 | [
[
-0.0179290771484375,
-0.041259765625,
0.0030269622802734375,
0.036773681640625,
-0.0251922607421875,
0.027374267578125,
-0.02532958984375,
-0.0201873779296875,
0.058349609375,
0.0244293212890625,
-0.060028076171875,
-0.0797119140625,
-0.057861328125,
0.00955... |
AlexMaclean/all-deletion-compressions | 2021-12-07T00:29:41.000Z | [
"region:us"
] | AlexMaclean | null | null | 1 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
AryanLala/autonlp-data-Scientific_Title_Generator | 2021-11-20T18:00:56.000Z | [
"region:us"
] | AryanLala | null | null | 1 | 88 | 2022-03-02T23:29:22 | ---
task_categories:
- conditional-text-generation
---
# AutoNLP Dataset for project: Scientific_Title_Generator
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project Scientific_Title_Generator.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "Unification of Fusion Theories, Rules, Filters, Image Fusion and Target\n Tracking Methods (UFT)",
"text": " The author has pledged in various papers, conference or seminar\npresentations, and scientific grant applications (between 2004-2015) for the\nunification of fusion theories, combinations of fusion rules, image fusion\nprocedures, filter algorithms, and target tracking methods for more accurate\napplications to our real world problems - since neither fusion theory nor\nfusion rule fully satisfy all needed applications. For each particular\napplication, one selects the most appropriate fusion space and fusion model,\nthen the fusion rules, and the algorithms of implementation. He has worked in\nthe Unification of the Fusion Theories (UFT), which looks like a cooking\nrecipe, better one could say like a logical chart for a computer programmer,\nbut one does not see another method to comprise/unify all things. The\nunification scenario presented herein, which is now in an incipient form,\nshould periodically be updated incorporating new discoveries from the fusion\nand engineering research.\n"
},
{
"target": "Investigation of Variances in Belief Networks",
"text": " The belief network is a well-known graphical structure for representing\nindependences in a joint probability distribution. The methods, which perform\nprobabilistic inference in belief networks, often treat the conditional\nprobabilities which are stored in the network as certain values. However, if\none takes either a subjectivistic or a limiting frequency approach to\nprobability, one can never be certain of probability values. An algorithm\nshould not only be capable of reporting the probabilities of the alternatives\nof remaining nodes when other nodes are instantiated; it should also be capable\nof reporting the uncertainty in these probabilities relative to the uncertainty\nin the probabilities which are stored in the network. In this paper a method\nfor determining the variances in inferred probabilities is obtained under the\nassumption that a posterior distribution on the uncertainty variables can be\napproximated by the prior distribution. It is shown that this assumption is\nplausible if their is a reasonable amount of confidence in the probabilities\nwhich are stored in the network. Furthermore in this paper, a surprising upper\nbound for the prior variances in the probabilities of the alternatives of all\nnodes is obtained in the case where the probability distributions of the\nprobabilities of the alternatives are beta distributions. It is shown that the\nprior variance in the probability at an alternative of a node is bounded above\nby the largest variance in an element of the conditional probability\ndistribution for that node.\n"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5784 |
| valid | 1446 |
| 3,880 | [
[
-0.037811279296875,
-0.03277587890625,
0.0232696533203125,
0.01210784912109375,
-0.006992340087890625,
-0.015655517578125,
0.0075225830078125,
-0.022247314453125,
0.03582763671875,
0.035797119140625,
-0.051910400390625,
-0.036773681640625,
-0.0235137939453125,
... |
Atsushi/fungi_trait_circus_database | 2022-12-26T10:38:17.000Z | [
"annotations_creators:other",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | Atsushi | null | null | 0 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators:
- other
language:
- en
- ja
multilinguality:
- multilingual
license:
- cc-by-4.0
source_datasets:
- original
size_categories:
- 100K<n<1M
---
fungi_trait_circus_database
Daikinrin (大菌輪) "Trait Circus" dataset (controlled traits)
Last updated: 2022/12/26
====
### Languages
Japanese and English
Please do not use this dataset for academic purposes for the time being (casual use only).
# Overview
On the website [Daikinrin (大菌輪)](http://mycoscouter.coolblog.jp/daikinrin/), run personally by Atsushi Nakajima, descriptions of fungi are processed semi-automatically with natural language processing techniques to extract various "trait" data on fungal morphology, ecology, and so on. For ease of aggregation and analysis, the traits are recorded as predefined "controlled terms".
The extraction method is reported (without peer review) in [this article](https://media.niche-life.com/series/008/Niche008_06.pdf) in the journal "Niche Life".
Because the extraction is automatic, please be aware that the data may contain some errors.
Each controlled term is a triple of an "element", an "attribute", and a "value".
For example, 傘_色_黒 (cap_color_black) has the element/attribute/value "cap" / "color" / "black". For some controlled terms the element and attribute are identical (e.g., "habitat").
For reference, the three most frequent entries are: elements "fruiting body", "cap", "spore"; attributes "color", "shape", "surface texture"; values "brown", "smooth", "yellow".
In addition, to support learning fungal taxonomy and assist identification, an interactive visualization web app based on these data, "[Trait Circus](https://tinyurl.com/nrhcfksu)", is provided.
This dataset corresponds to the raw data behind that web app, and it also includes information not reflected in the app (for reasons such as size limits).
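As an illustration (not part of the official tooling), a controlled term written in the underscore-delimited form described above can be split back into its element/attribute/value triple. A minimal sketch, assuming every term follows exactly this "element_attribute_value" convention:

```python
def parse_controlled_term(term: str) -> tuple[str, str, str]:
    """Split an 'element_attribute_value' controlled term into its triple."""
    element, attribute, value = term.split("_", 2)
    return element, attribute, value

parse_controlled_term("傘_色_黒")          # ('傘', '色', '黒'), i.e. cap / color / black
parse_controlled_term("cap_color_black")   # ('cap', 'color', 'black')
```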
## Related datasets
"Three-line paper summaries" (論文3行まとめ)
[Atsushi/fungi_indexed_mycological_papers_japanese](https://huggingface.co/datasets/Atsushi/fungi_indexed_mycological_papers_japanese)
"Diagnostic character summaries" (識別形質まとめ)
[Atsushi/fungi_diagnostic_chars_comparison_japanese](https://huggingface.co/datasets/Atsushi/fungi_diagnostic_chars_comparison_japanese)
## Description of each column
* source … URL of the source of each record. Most refer to academic literature or to the MycoBank descriptions database.
* hit_term … the expression of the extracted trait as it appears in the source.
* current_name … the current scientific name of the fungus bearing the trait. Follows MycoBank, but may not be the latest information.
* element_j … the "element" in Japanese.
* attribute_j … the "attribute" in Japanese.
* value_j … the "value" in Japanese.
* element … the "element" in English.
* attribute … the "attribute" in English.
* value … the "value" in English. | 1,841 | [
[
-0.039398193359375,
-0.050384521484375,
0.0250244140625,
0.020660400390625,
-0.032562255859375,
0.003265380859375,
0.005916595458984375,
-0.039398193359375,
0.08209228515625,
0.0163421630859375,
-0.0484619140625,
-0.06427001953125,
-0.0408935546875,
0.043731... |
SetFit/insincere-questions | 2022-01-19T18:15:51.000Z | [
"region:us"
] | SetFit | null | null | 1 | 88 | 2022-03-02T23:29:22 | This is a version of the [Quora Insincere Questions Classification](https://www.kaggle.com/c/quora-insincere-questions-classification).
An insincere question is defined as a question intended to make a statement rather than look for helpful answers. About 6% of questions are labeled as insincere. | 301 | [
[
-0.036224365234375,
-0.06573486328125,
0.01485443115234375,
0.0007939338684082031,
-0.03173828125,
0.0086212158203125,
0.0182647705078125,
-0.04132080078125,
0.0197296142578125,
0.048309326171875,
-0.0283203125,
0.00453948974609375,
-0.028900146484375,
0.007... |
aseifert/pie-synthetic | 2022-07-07T11:55:53.000Z | [
"multilinguality:translation",
"size_categories:unknown",
"language:en",
"region:us"
] | aseifert | null | null | 1 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- en
license: []
multilinguality:
- translation
pretty_name: pie-synthetic
size_categories:
- unknown
source_datasets: []
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# PIE synthetic dataset
Repo: https://github.com/awasthiabhijeet/PIE
Paper: https://aclanthology.org/D19-1435.pdf | 382 | [
[
-0.0182647705078125,
-0.04119873046875,
0.031982421875,
0.01995849609375,
-0.0030078887939453125,
0.021881103515625,
-0.00856781005859375,
-0.0194244384765625,
0.0472412109375,
0.059326171875,
-0.057952880859375,
-0.04266357421875,
-0.0093536376953125,
0.005... |
ashraq/dhivehi-corpus | 2021-12-19T14:39:45.000Z | [
"region:us"
] | ashraq | This is a dataset put together to pretrain a language model in Dhivehi, the language of Maldives. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2021}
} | 2 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
astrideducation/cefr-combined-no-cefr-test | 2021-12-02T14:47:44.000Z | [
"region:us"
] | astrideducation | This dataset contains 3370555 sentences, which each have an assigned CEFR level derived from EFLLex (https://cental.uclouvain.be/cefrlex/efllex/download).
The sentences comes from "the pile books3", which is available on Huggingface (https://huggingface.co/datasets/the_pile_books3).
The CEFR levels used are A1, A2, B1, B2 and C1, and there are equals number of sentences for each level.
Each sentence is assigned a CEFR level based on the concept of "shifted frequency distribution", introduced by David Alfter; his paper can be found at https://gupea.ub.gu.se/bitstream/2077/66861/4/gupea_2077_66861_4.pdf.
For each word in each sentence, take the CEFR level with the highest "shifted frequency distribution" in the EFLLex table.
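This per-word step, together with the sentence-level majority vote, can be sketched as follows. The SFD values below are toy numbers invented for illustration; the real values come from the EFLLex table:

```python
from collections import Counter

def sentence_cefr(words, sfd_table):
    """Annotate a sentence with its most frequent per-word CEFR level.

    `sfd_table` maps word -> {CEFR level: shifted frequency distribution};
    each known word contributes the level with its highest SFD value,
    and words missing from the table are skipped.
    """
    levels = [max(sfd_table[w], key=sfd_table[w].get) for w in words if w in sfd_table]
    return Counter(levels).most_common(1)[0][0] if levels else None

# Toy example
table = {
    "the": {"A1": 0.9, "A2": 0.1},
    "cat": {"A1": 0.8, "B1": 0.2},
    "meandered": {"B2": 0.3, "C1": 0.7},
}
level = sentence_cefr(["the", "cat", "meandered"], table)  # "A1"
```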
After all words have been processed, the sentence is annotated with the CEFR level that appears most frequently across the whole sentence. | @misc{cefr_book_sentences,
author={Astrid Education AB}
year={2021}
} | 1 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
athar/QA | 2021-10-24T17:30:33.000Z | [
"region:us"
] | athar | null | null | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
austin/rheum_abstracts | 2022-01-04T05:10:23.000Z | [
"region:us"
] | austin | null | null | 0 | 88 | 2022-03-02T23:29:22 | # Dataset Card for Rheumatology Abstracts
## Data Source
This dataset comes from PubMed, derived from my fork of the pymed package (no longer maintained). My fork can be found at https://github.com/cmcmaster1/pymed
## Data Structure
The dataset is split into train (80%) and test (20%) files (CSV). Each file contains three columns:
- id
- abstract (minus conclusion)
- conclusion | 381 | [
[
-0.0042266845703125,
-0.04205322265625,
0.0160675048828125,
-0.0036067962646484375,
-0.0545654296875,
0.0142974853515625,
0.0175933837890625,
-0.035919189453125,
0.047271728515625,
0.03851318359375,
-0.00946807861328125,
-0.0699462890625,
-0.041595458984375,
... |
bitmorse/kickstarter_2022-2021 | 2022-02-10T06:28:18.000Z | [
"region:us"
] | bitmorse | null | null | 1 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
biu-nlp/qamr | 2021-10-20T07:10:13.000Z | [
"region:us"
] | biu-nlp | Question-Answer Meaning Representations (QAMR) are a new paradigm for representing predicate-argument structure, which makes use of free-form questions and their answers in order to represent a wide range of semantic phenomena.
The semantic expressivity of QAMR compares to (and in some cases exceeds) that of existing formalisms, while the representations can be annotated by non-experts (in particular, using crowdsourcing).
Formal Notes:
* The `answer_ranges` feature here has a different meaning from that of the `qanom` and `qa_srl` datasets, although both are structured the same way;
while in qasrl/qanom, each "answer range" (i.e. each span, represented as [begin-idx, end-idx]) stands for an independent answer which is read separately
(e.g., "John Vincen", "head of marketing"), in this `qamr` dataset each question has a single answer, which might be composed of non-consecutive spans;
that is, all given spans should be read successively.
* Another difference is that the meaning of `predicate` in QAMR is different and softer than in QASRL/QANom: here, the predicate is not necessarily within the question;
it can also be in the answer. It is generally what the annotator marked as the focus of the QA. | @inproceedings{michael-etal-2018-crowdsourcing,
title = "Crowdsourcing Question-Answer Meaning Representations",
author = "Michael, Julian and
Stanovsky, Gabriel and
He, Luheng and
Dagan, Ido and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2089",
doi = "10.18653/v1/N18-2089",
pages = "560--568",
abstract = "We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, and QA-SRL) along with many previously under-resourced ones, including implicit arguments and relations. We also report baseline models for question generation and answering, and summarize a recent approach for using QAMR labels to improve an Open IE system. These results suggest the freely available QAMR data and annotation scheme should support significant future work.",
} | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
blinoff/medical_qa_ru_data | 2022-07-02T06:24:13.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ru",
"license:unknown",
"region:us"
] | blinoff | This dataset contains 190,335 Russian Q&A posts from a medical related forum. | null | 6 | 88 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- ru
license:
- unknown
multilinguality:
- monolingual
pretty_name: Medical Q&A Russian Data
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
### Dataset Summary
This dataset contains 190,335 Russian Q&A posts from a medical related forum.
### Dataset Fields
* date: date and time of the asked question, like '26 Октября 2018, 08:30'
* categ: question category
* theme: question topic
* desc: question text
* ans: question answers separated with ';\n'
* spec10: if present, one of 10 medical specializations
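For example, the multi-answer `ans` field can be split back into individual answers. A minimal sketch, assuming the ';\n' separator described above (the sample strings are made up):

```python
def split_answers(ans_field: str) -> list[str]:
    """Split the `ans` field into its individual answers."""
    return [a.strip() for a in ans_field.split(";\n") if a.strip()]

split_answers("First answer.;\nSecond answer.")
# ['First answer.', 'Second answer.']
```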
| 651 | [
[
-0.0298309326171875,
-0.0538330078125,
0.0287933349609375,
0.0033893585205078125,
-0.020355224609375,
-0.0187225341796875,
0.0153045654296875,
-0.002628326416015625,
0.039947509765625,
0.04046630859375,
-0.060699462890625,
-0.049560546875,
-0.035369873046875,
... |
bwu2018/anime-tagging-dataset | 2021-12-08T17:20:46.000Z | [
"region:us"
] | bwu2018 | null | null | 5 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
castorini/msmarco_v2_doc_doc2query-t5_expansions | 2021-11-11T17:41:32.000Z | [
"language:English",
"license:Apache License 2.0",
"region:us"
] | castorini | null | null | 0 | 88 | 2022-03-02T23:29:22 | ---
language:
- English
license: "Apache License 2.0"
---
# Dataset Summary
The repo provides queries generated for the MS MARCO v2 document corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model.
# Dataset Structure
All three folds (train, dev and test) share the same corpus.
An example data entry looks as follows:
```
{
'docid': '25#0',
'title': 'Autism',
'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...'
}
```
# Load Dataset
An example to load the dataset:
```
dataset = load_dataset('castorini/msmarco_v2_doc_doc2query-t5_expansions')
```
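The expansion step described above (appending the predicted queries to the original document before indexing) can be sketched like this. This is an illustration only; the field names follow the example entry above, and real pipelines typically hand the expanded documents to an indexer such as Anserini/Pyserini:

```python
def expand_document(doc: dict, predicted_queries: list[str]) -> dict:
    """Append doc2query predictions to a document's text before indexing."""
    return {**doc, "text": doc["text"] + " " + " ".join(predicted_queries)}

doc = {"docid": "25#0", "title": "Autism",
       "text": "Autism is a developmental disorder ..."}
queries = ["what is autism", "is autism a developmental disorder"]
expanded = expand_document(doc, queries)
# expanded["text"] now ends with the two predicted queries
```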
# Citation Information
```
@article{docTTTTTquery,
title={From doc2query to {docTTTTTquery}},
author={Nogueira, Rodrigo and Lin, Jimmy},
year={2019}
}
@article{emdt5,
author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin",
title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models",
journal = "arXiv:2101.05667",
year = 2021,
}
```
| 1,522 | [
[
-0.0180511474609375,
-0.03729248046875,
0.0323486328125,
0.00243377685546875,
-0.010345458984375,
0.003993988037109375,
-0.011474609375,
-0.0295562744140625,
0.0032806396484375,
0.05364990234375,
-0.0396728515625,
-0.05517578125,
-0.042510986328125,
0.015579... |
castorini/msmarco_v2_doc_segmented_doc2query-t5_expansions | 2021-11-02T08:13:56.000Z | [
"language:English",
"license:Apache License 2.0",
"region:us"
] | castorini | null | null | 0 | 88 | 2022-03-02T23:29:22 | ---
language:
- English
license: "Apache License 2.0"
---
# Dataset Summary
The repo provides queries generated for the MS MARCO v2 document segmented corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model.
# Dataset Structure
All three folds (train, dev and test) share the same corpus.
An example data entry looks as follows:
```
{
'docid': '25#0',
'title': 'Autism',
'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...'
}
```
# Load Dataset
An example to load the dataset:
```
dataset = load_dataset('castorini/msmarco_v2_doc_segmented_doc2query-t5_expansions', data_files='d2q/d2q.jsonl???.gz')
```
# Citation Information
```
@article{docTTTTTquery,
title={From doc2query to {docTTTTTquery}},
author={Nogueira, Rodrigo and Lin, Jimmy},
year={2019}
}
@article{emdt5,
author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin",
title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models",
journal = "arXiv:2101.05667",
year = 2021,
}
```
| 1,576 | [
[
-0.0177764892578125,
-0.0426025390625,
0.038543701171875,
0.002536773681640625,
-0.0153045654296875,
0.01068878173828125,
-0.00629425048828125,
-0.02716064453125,
0.00313568115234375,
0.054229736328125,
-0.043426513671875,
-0.058868408203125,
-0.042449951171875,... |
chenghao/mc4_eu_dedup | 2021-12-08T05:25:24.000Z | [
"region:us"
] | chenghao | null | null | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
chitra/contradiction | 2022-01-19T11:46:58.000Z | [
"region:us"
] | chitra | null | null | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
chmanoj/ai4bharat__samanantar_processed_te | 2022-02-05T04:02:51.000Z | [
"region:us"
] | chmanoj | null | null | 0 | 88 | 2022-03-02T23:29:22 | This is extracted from telugu subset from https://huggingface.co/datasets/ai4bharat/samanantar - used to create telugu kenLM models for ASR decoding. | 149 | [
[
0.007015228271484375,
-0.023223876953125,
-0.0007772445678710938,
0.01245880126953125,
-0.0246734619140625,
0.0219268798828125,
0.0271759033203125,
-0.0252685546875,
0.028717041015625,
0.05181884765625,
-0.053192138671875,
-0.015777587890625,
-0.031494140625,
... |
clarin-pl/multiwiki_90k | 2022-01-24T18:49:03.000Z | [
"region:us"
] | clarin-pl | Multi-Wiki90k: Multilingual benchmark dataset for paragraph
segmentation | null | 1 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
davanstrien/test_push_to_hub_image | 2022-02-15T12:15:59.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggingartists/adele | 2022-10-25T09:22:32.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 88 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/adele"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.304292 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/45ccf22bba4c1f80989e645c2fd4ec44.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/adele">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adele</div>
<a href="https://genius.com/artists/adele">
<div style="text-align: center; font-size: 14px;">@adele</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/adele).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/adele")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 203 | - | - |
'Train' can be easily divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/adele")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,140 | [
[
-0.043121337890625,
-0.0382080078125,
0.00391387939453125,
0.0245208740234375,
-0.01629638671875,
-0.0007982254028320312,
-0.0190277099609375,
-0.0352783203125,
0.062164306640625,
0.0269317626953125,
-0.06695556640625,
-0.0645751953125,
-0.0423583984375,
0.0... |
zapsdcn/chemprot | 2021-12-08T03:17:13.000Z | [
"region:us"
] | zapsdcn | null | null | 0 | 88 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hugginglearners/netflix-shows | 2022-08-18T03:04:55.000Z | [
"license:cc0-1.0",
"region:us"
] | hugginglearners | null | null | 4 | 88 | 2022-08-18T03:04:50 | ---
license:
- cc0-1.0
kaggle_id: infamouscoder/dataset-netflix-shows
---
# Dataset Card for Dataset: NetFlix Shows
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/infamouscoder/dataset-netflix-shows
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The raw data was web scraped using Selenium. It contains unlabelled text data for around 9,000 Netflix shows and movies, along with full details such as cast, release year, rating, and description.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@infamouscoder](https://kaggle.com/infamouscoder)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | 2,812 | [
[
-0.0268096923828125,
-0.031341552734375,
-0.00063323974609375,
-0.00823211669921875,
-0.01285552978515625,
0.005340576171875,
0.00734710693359375,
0.01177978515625,
0.04559326171875,
0.05908203125,
-0.06671142578125,
-0.053497314453125,
-0.045257568359375,
0... |
mitclinicalml/clinical-ie | 2022-12-01T16:34:20.000Z | [
"arxiv:2205.12689",
"arxiv:2010.02010",
"arxiv:1806.04185",
"region:us"
] | mitclinicalml | null | @inproceedings{agrawal2022large,
title={Large Language Models are Few-Shot Clinical Information Extractors},
author={Monica Agrawal and Stefan Hegselmann and Hunter Lang and Yoon Kim and David Sontag},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022},
url_Paper = {https://arxiv.org/pdf/2205.12689.pdf}} | 20 | 88 | 2022-10-21T23:00:31 | ---
{}
---
Below, we provide access to the datasets used in and created for the EMNLP 2022 paper [Large Language Models are Few-Shot Clinical Information Extractors](https://arxiv.org/abs/2205.12689).
# Task #1: Clinical Sense Disambiguation
For Task #1, we use the original annotations from the [Clinical Acronym Sense Inventory (CASI) dataset](https://conservancy.umn.edu/handle/11299/137703), described in [their paper](https://academic.oup.com/jamia/article/21/2/299/723657).
As is common, due to noisiness in the label set, we do not evaluate on the entire dataset, but only on a cleaner subset. For consistency, we use the subset defined by the filtering used in ["Zero-Shot Clinical Acronym Expansion
via Latent Meaning Cells"](https://arxiv.org/pdf/2010.02010.pdf). This results in a subset of 18,164 examples and 41 acronyms for evaluation.
We additionally use the MIMIC Reverse Substitution dataset, as created in that same paper, with further instructions available in [their repository](https://github.com/griff4692/LMC).
# Task #2: Biomedical Evidence Extraction
For Task #2, we use the out-of-the-box high-level labels from the [PICO dataset](https://arxiv.org/abs/1806.04185) available publicly in the repository [here](https://github.com/bepnye/EBM-NLP).
# Task #3: Coreference Resolution
For Task #3, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. Each example is labeled with a singular pronoun and that pronoun's corresponding noun phrase antecedent (or antecedents).
The antecedent was annotated as the entire noun phrase (barring any dependent clauses); in cases where multiple equally valid antecedents were available, all were labeled (empirically, up to 2).
For the purposes of evaluation, we chose the antecedent with the highest overlap to each model’s output.
To ensure nontrivial examples, the annotators excluded all examples of personal pronouns (e.g. “he”, “she”) if another person (and possible antecedent) had not yet been mentioned in the snippet.
Examples were skipped in annotation if the pronoun did not have an antecedent within the provided text snippet.
# Task #4: Medication Status Extraction
For Task #4, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. We wanted to create a dataset of challenging examples containing a changeover in treatment. From a sample, only ∼5% of CASI snippets contained such examples. To increase the density of these examples, speeding up annotation, clinical notes were filtered with the following search terms: discont, adverse, side effect, switch, and dosage, leading to 1445 snippets. We excluded snippets that were purely medication lists, requiring at least some narrative part to be present.
For each example, the annotators first extracted all medications. Guidelines excluded medication categories (e.g. “ACE-inhibitor”) if they referred to more specific drug names mentioned elsewhere (even if partially cut off in the snippet). For instance, only the antibiotic Levaquin was labeled in: “It is
probably reasonable to treat with antibiotics [...]. I would agree with Levaquin alone [...]”. Guidelines also excluded electrolytes and intravenous fluids as well as route and dosage information. In a second step, medications were assigned to one of three categories: active, discontinued, and neither.
Discontinued medications also contain medications that are temporarily on hold. The category neither was assigned to all remaining medications (e.g. allergies, potential medications).
The medication lists for each example were serialized as JSON.
# Task #5: Medication Attribute Extraction
For Task #5, we again annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test.
Annotation guidelines were adapted from the 2009 i2b2 medication extraction challenge (Uzuner et al., 2010) with slight modifications.
We allowed medication attributes to have multiple spans and grouped together different mentions of the same drug (e.g. “Tylenol” and “Tylenol PM”) for the purpose of relation extraction.
The annotation list for each example was serialized as JSON.
# Citations
When using our annotations for tasks #3-5, please cite our paper, as well as the papers from which the underlying text originated.
```
@inproceedings{agrawal2022large,
title={Large Language Models are Few-Shot Clinical Information Extractors},
author={Monica Agrawal and Stefan Hegselmann and Hunter Lang and Yoon Kim and David Sontag},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022},
url_Paper = {https://arxiv.org/pdf/2205.12689.pdf}
}
```
```
@article{moon2014sense,
title={A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources},
author={Moon, Sungrim and Pakhomov, Serguei and Liu, Nathan and Ryan, James O and Melton, Genevieve B},
journal={Journal of the American Medical Informatics Association},
volume={21},
number={2},
pages={299--307},
year={2014},
publisher={BMJ Publishing Group BMA House, Tavistock Square, London, WC1H 9JR}
}
```
# Licensing
The annotations added by our team fall under the MIT license, but the CASI dataset itself is subject to its own licensing.
---
license: other
---
| 5,501 | [
[
-0.00804901123046875,
-0.044036865234375,
0.056396484375,
0.0019207000732421875,
-0.01351165771484375,
-0.03289794921875,
-0.011474609375,
-0.04754638671875,
0.0305328369140625,
0.045654296875,
-0.02984619140625,
-0.05206298828125,
-0.06561279296875,
0.03515... |
Dahoas/rm-hh-rlhf | 2022-12-22T16:45:57.000Z | [
"region:us"
] | Dahoas | null | null | 1 | 88 | 2022-12-16T22:20:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
keremberke/nfl-object-detection | 2023-01-29T12:37:17.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] | keremberke | null | @misc{ nfl-competition_dataset,
title = { NFL-competition Dataset },
type = { Open Source Dataset },
author = { home },
howpublished = { \\url{ https://universe.roboflow.com/home-mxzv1/nfl-competition } },
url = { https://universe.roboflow.com/home-mxzv1/nfl-competition },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { sep },
note = { visited on 2023-01-18 },
} | 4 | 88 | 2022-12-30T10:37:59 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/nfl-object-detection" src="https://huggingface.co/datasets/keremberke/nfl-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['helmet', 'helmet-blurred', 'helmet-difficult', 'helmet-partial', 'helmet-sideline']
```
### Number of Images
```json
{'valid': 1989, 'train': 6963, 'test': 995}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/nfl-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/home-mxzv1/nfl-competition/dataset/1](https://universe.roboflow.com/home-mxzv1/nfl-competition/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ nfl-competition_dataset,
title = { NFL-competition Dataset },
type = { Open Source Dataset },
author = { home },
howpublished = { \\url{ https://universe.roboflow.com/home-mxzv1/nfl-competition } },
url = { https://universe.roboflow.com/home-mxzv1/nfl-competition },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { sep },
note = { visited on 2023-01-18 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 29, 2022 at 8:12 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 9947 images.
Helmets are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 1280x720 (Stretch)
No image augmentation techniques were applied.
| 2,141 | [
[
-0.04510498046875,
-0.0283660888671875,
0.0095367431640625,
0.01004791259765625,
-0.0316162109375,
-0.001483917236328125,
0.0022945404052734375,
-0.056396484375,
0.02947998046875,
0.01535797119140625,
-0.05377197265625,
-0.05633544921875,
-0.039520263671875,
... |
Francesco/brain-tumor-m2pbp | 2023-03-30T09:11:06.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 2 | 88 | 2023-03-30T09:10:00 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': brain-tumor
'1': label0
'2': label1
'3': label2
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: brain-tumor-m2pbp
tags:
- rf100
---
# Dataset Card for brain-tumor-m2pbp
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
brain-tumor-m2pbp
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
  'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
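The `bbox` values above follow COCO's `[x, y, width, height]` convention (top-left corner plus box size). As a quick sanity check, a small helper — hypothetical, not part of the dataset tooling — converts a COCO box to corner coordinates and recomputes its area:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def box_area(bbox):
    """Area of a COCO-style box, for cross-checking the `area` field."""
    _, _, w, h = bbox
    return w * h

# First box from the data instance above
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
print(box_area([302.0, 109.0, 73.0, 52.0]))         # 3796.0
```

Note that the `area` of the first object in the instance above (3796) matches the box's width × height.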
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
### Citation Information
```
@misc{ brain-tumor-m2pbp,
title = { brain tumor m2pbp Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/brain-tumor-m2pbp } },
url = { https://universe.roboflow.com/object-detection/brain-tumor-m2pbp },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,431 | [
[
-0.038970947265625,
-0.0460205078125,
0.0213470458984375,
0.004421234130859375,
-0.0270233154296875,
-0.02294921875,
-0.0081787109375,
-0.03436279296875,
0.0265045166015625,
0.035400390625,
-0.04541015625,
-0.07177734375,
-0.046661376953125,
0.00535964965820... |
winglian/visual-novels-json | 2023-06-17T03:08:49.000Z | [
"region:us"
] | winglian | null | null | 1 | 88 | 2023-06-17T03:08:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
marclove/llama_functions | 2023-08-03T17:31:48.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | marclove | null | null | 6 | 88 | 2023-07-26T23:55:21 | ---
license: cc-by-sa-4.0
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: Llama Functions
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://marclove.com
- **Repository:** https://huggingface.co/datasets/marclove/llama_functions
### Dataset Summary
‼️ This dataset is still in a beta state. Its contents, and likely its format, will change. If you need to depend on it in its current state, please create your own fork and provide attribution to this original repository. ‼️
Llama Functions is a synthetic dataset generated from a mix of manual curation of OpenAPI endpoints and prompting of OpenAI models. It is further mixed with chat completions from the Guanaco subset of the OASST1 chat dialogue dataset. It is a total of 18,000 rows, 9,000 rows from the synthetic dataset of function calls and 9,000 rows from the Guanaco dataset.
The dataset is mixed with Guanaco in order to maintain accuracy and helpfulness when calling a function is not the appropriate response. I plan to remove the Guanaco portion of the dataset and instead provide fine-tuning recommendations, guidelines for use, more detailed information regarding limitations, and eval stats of 7B, 13B, and 70B models.
There is no existing evaluation benchmark to measure the accuracy of function calls, which makes it hard during training to identify when we've maximized the balance of function calling accuracy and chat model performance. I'm working on a custom HF eval for this purpose, but until then I have chosen to mix the two datasets in equal parts to get a proxy of performance for both tasks in the eval & test stats during fine-tuning.
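The equal-parts mixing described above can be sketched with a simple interleave — a toy illustration over placeholder rows, not the actual pipeline used to build the release:

```python
def interleave_equal(a, b):
    """Alternate rows from two equally sized sources, one from each in turn."""
    assert len(a) == len(b), "equal-parts mix assumes same-sized sources"
    mixed = []
    for x, y in zip(a, b):
        mixed.extend([x, y])
    return mixed

# Placeholder rows standing in for function-call and Guanaco chat examples.
mixed = interleave_equal(["fn1", "fn2"], ["chat1", "chat2"])
print(mixed)  # ['fn1', 'chat1', 'fn2', 'chat2']
```

Interleaving (rather than concatenating) keeps both tasks represented in every eval batch, which is what makes the mixed eval stats a usable proxy during fine-tuning.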
### Languages
English primarily, though since it has been mixed with the multilingual Guanaco dataset, other languages are included.
## Dataset Structure
### Data Fields
| Field | Description |
|-------|-------------|
| `input` |A prompt in Llama-2 Chat format, including an appropriate system instruction and chat history. |
| `output` | The expected completion. |
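The card states the `input` column uses the Llama-2 chat format. As an illustrative sketch only — the system instruction below is made up, not the one used in the dataset — a single-turn prompt in that format can be assembled like this:

```python
def build_llama2_prompt(system, user_message):
    """Assemble a single-turn prompt using Llama-2's [INST]/<<SYS>> chat markup.

    The markup follows Meta's Llama-2 chat template; the system text
    passed in here is purely illustrative.
    """
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant with access to functions.",
    "What's the weather in Boston?",
)
print(prompt)
```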
### Data Splits
There are currently no splits, but future versions will likely have train, eval, and test splits.
## Dataset Creation
### Curation Rationale
In an effort to enable tool-using chat agents and autonomous agents, I developed this synthetic dataset to bring [OpenAI-style function calling](https://openai.com/blog/function-calling-and-other-api-updates#function-calling) to the Llama family and to fully open source models.
### Source Data
The data was sourced by prompting OpenAI models to generate function calls of:
1. Real OpenAPI endpoints collected and filtered from the web
2. Manually written (but artificial) OpenAPI endpoints, and
3. Prompted iterations of 1 & 2.
Prompted iterations were generated by ChatGPT-4 (July 20, 2023 version). Generated function calls and their natural language counterparts were generated by iterative prompting of `gpt-3.5-turbo-0301`. A blog post detailing the generation process will be published in the next few days.
OpenAI's TOS give me ownership of this synthetic dataset. I am licensing it under [Creative Commons' Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). I have used the dataset to fine tune a research-only model, [marclove/llama-2-7b-chat-functions](https://huggingface.co/marclove/llama-2-7b-chat-functions), per OpenAI TOS. You are responsible for determining whether you can use the dataset for your particular use case. I take no responsibility and make no guarantees beyond licensing my own rights under the designated CC license.
#### Who are the source language producers?
- Marc Love
- Prompting of ChatGPT-4 & API calls to gpt-3.5-turbo-0301
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Discussion of Biases
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Other Known Limitations
Fine-tuning on this dataset can lead to hallucinated function calls. This is more pronounced in smaller models.
## Additional Information
### Dataset Curators
Marc Love
### Licensing Information
[Creative Commons' Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). Please note that the synthetic data portion of the dataset was generated using OpenAI models, which may or may not impact your ability to use the dataset, depending on your use case.
### Citation Information
If you use this dataset, please cite:
```
@misc{LlamaFunctions,
title = {LlamaFunctions: An Open Dataset of Structured API Calls From Natural Language Prompts},
author = {Marc Love},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/marclove/llama_functions}},
}
``` | 5,221 | [
[
-0.0231781005859375,
-0.07379150390625,
0.018341064453125,
0.036895751953125,
-0.026123046875,
-0.00038361549377441406,
-0.0211181640625,
-0.0533447265625,
0.03564453125,
0.02850341796875,
-0.058929443359375,
-0.0460205078125,
-0.024139404296875,
0.018936157... |
q-allen/opentix-faq | 2023-10-12T10:26:28.000Z | [
"region:us"
] | q-allen | null | null | 0 | 88 | 2023-10-12T10:12:28 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
kannada_news | 2023-01-25T14:33:33.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:kn",
"license:cc-by-sa-4.0",
"region:us"
] | null | The Kannada news dataset contains only the headlines of news article in three categories:
Entertainment, Tech, and Sports.
The data set contains around 6,300 news article headlines collected from Kannada news websites.
The data set has been cleaned and contains train and test splits that can be used to benchmark
classification models in Kannada.
annotations_creators:
- other
language_creators:
- other
language:
- kn
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: KannadaNews Dataset
dataset_info:
features:
- name: headline
dtype: string
- name: label
dtype:
class_label:
names:
'0': sports
'1': tech
'2': entertainment
splits:
- name: train
num_bytes: 969216
num_examples: 5167
- name: validation
num_bytes: 236817
num_examples: 1293
download_size: 0
dataset_size: 1206033
---
# Dataset Card for kannada_news dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle link](https://www.kaggle.com/disisbig/kannada-news-dataset) for kannada news headlines dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** More information about the dataset and the models can be found [here](https://github.com/goru001/nlp-for-kannada)
### Dataset Summary
The Kannada news dataset contains only the headlines of news article in three categories:
Entertainment, Tech, and Sports.
The data set contains around 6,300 news article headlines collected from Kannada news websites.
The data set has been cleaned and contains train and test splits that can be used to benchmark topic classification models in Kannada.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Kannada (kn)
## Dataset Structure
### Data Instances
The data has two files, a train.csv and a valid.csv. An example row of the dataset is shown below:
```
{
'headline': 'ಫಿಫಾ ವಿಶ್ವಕಪ್ ಫೈನಲ್: ಅತಿರೇಕಕ್ಕೇರಿದ ಸಂಭ್ರಮಾಚರಣೆ; ಅಭಿಮಾನಿಗಳ ಹುಚ್ಚು ವರ್ತನೆಗೆ ವ್ಯಾಪಕ ಖಂಡನೆ',
'label':'sports'
}
```
NOTE: The data has very few examples on the technology (class label: 'tech') topic.
### Data Fields
Data has two fields:
- headline: the text headline in Kannada (string)
- label: the corresponding class label, in English, to which the headline pertains (string)
### Data Splits
The dataset is divided into two splits. All the headlines are scraped from news websites on the internet.
| | train | validation |
|-----------------|--------:|-----------:|
| Input Sentences | 5167 | 1293 |
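When loaded through the `datasets` library, the `label` column is stored as a `ClassLabel` integer rather than the name string shown in the instance above. The mapping back to topic names, taken from the `class_label` names in the `dataset_info` block above, can be sketched as (helper name is illustrative):

```python
# Mapping taken from the class_label names in the dataset_info block above.
LABEL_NAMES = ["sports", "tech", "entertainment"]

def id_to_label(label_id):
    """Translate a ClassLabel integer back to its topic name."""
    return LABEL_NAMES[label_id]

print(id_to_label(0))  # sports
print(id_to_label(2))  # entertainment
```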
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
There is strikingly little data for South Indian languages, especially Kannada, available in digital format that can be used for NLP purposes.
Though it has roughly 38 million native speakers, Kannada is an under-represented language and will benefit from active contribution from the community.
This dataset can help people get exposed to Kannada and encourage further active participation toward continuous progress and development.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Gaurav Arora](https://github.com/goru001/nlp-for-kannada), who also provides some starter models and embeddings to help get started.
### Licensing Information
cc-by-sa-4.0
### Citation Information
https://www.kaggle.com/disisbig/kannada-news-dataset
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset. | 4,873 | [
[
-0.01535797119140625,
-0.041107177734375,
0.0013713836669921875,
0.035797119140625,
-0.045257568359375,
0.003253936767578125,
-0.0186767578125,
-0.005615234375,
0.0469970703125,
0.0252227783203125,
-0.03973388671875,
-0.049468994140625,
-0.0499267578125,
0.0... |
menyo20k_mt | 2022-12-30T19:38:49.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:yo",
"license:cc-by-nc-4.0",
"arxiv:2103.08647",
"r... | null | MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain). The development and test sets are available upon request. | @dataset{david_ifeoluwa_adelani_2020_4297448,
author = {David Ifeoluwa Adelani and
Jesujoba O. Alabi and
Damilola Adebonojo and
Adesina Ayeni and
Mofe Adeyemi and
Ayodele Awokoya},
title = {MENYO-20k: A Multi-domain English - Yorùbá Corpus
for Machine Translation},
month = nov,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.4297448},
url = {https://doi.org/10.5281/zenodo.4297448}
} | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- en
- yo
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: menyo-20k
pretty_name: MENYO-20k
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
config_name: menyo20k_mt
splits:
- name: train
num_bytes: 2551345
num_examples: 10070
- name: validation
num_bytes: 870011
num_examples: 3397
- name: test
num_bytes: 1905432
num_examples: 6633
download_size: 5206234
dataset_size: 5326788
---
# Dataset Card for MENYO-20k
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 TED talks speech transcript domain).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Languages are English and Yoruba.
## Dataset Structure
### Data Instances
An instance example:
```
{'translation':
{'en': 'Unit 1: What is Creative Commons?',
'yo': 'Ìdá 1: Kín ni Creative Commons?'
}
}
```
### Data Fields
- `translation`:
- `en`: English sentence.
- `yo`: Yoruba sentence.
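A minimal sketch of working with this nested shape (the `to_pair` helper is illustrative, not part of the dataset; the instance is the one shown under "Data Instances"):

```python
# The shape documented above: one `translation` dict keyed by language code.
example = {
    "translation": {
        "en": "Unit 1: What is Creative Commons?",
        "yo": "Ìdá 1: Kín ni Creative Commons?",
    }
}

def to_pair(example, src="en", tgt="yo"):
    """Split one translation example into a (source, target) tuple."""
    t = example["translation"]
    return t[src], t[tgt]

src_text, tgt_text = to_pair(example)
```

Swapping `src` and `tgt` gives the reverse translation direction from the same example.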
### Data Splits
Training, validation and test splits are available.
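The per-split sizes declared in the card header can be cross-checked against the 20,100-sentence total quoted in the summary:

```python
# Split sizes taken from the card header (num_examples per split).
splits = {"train": 10_070, "validation": 3_397, "test": 6_633}

total = sum(splits.values())
assert total == 20_100  # matches the dataset total stated in the summary
```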
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is open, but for non-commercial use only, because some data sources like TED talks and JW news require permission for commercial use.
The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
### Citation Information
If you use this dataset, please cite this paper:
```
@inproceedings{adelani-etal-2021-effect,
title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
author = "Adelani, David and
Ruiter, Dana and
Alabi, Jesujoba and
Adebonojo, Damilola and
Ayeni, Adesina and
Adeyemi, Mofe and
Awokoya, Ayodele Esther and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
month = aug,
year = "2021",
address = "Virtual",
publisher = "Association for Machine Translation in the Americas",
url = "https://aclanthology.org/2021.mtsummit-research.6",
pages = "61--75",
abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities, including zero and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper, we present MENYO-20k, the first multi-domain parallel corpus with an especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality, we also analyze the effect of diacritics, a major characteristic of Yoruba, in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability. Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba, setting a high quality benchmark for future research.",
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
| 6,325 | [
offenseval2020_tr | 2023-01-25T14:41:59.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:tr",
"license:cc-by-2.0",
"offensive-language-classification",
"region:us"
] | null | OffensEval-TR 2020 is a Turkish offensive language corpus. The corpus consists of randomly sampled tweets annotated in a similar way to OffensEval and GermEval. | @InProceedings{coltekin2020lrec,
author = {Cagri Coltekin},
year = {2020},
title = {A Corpus of Turkish Offensive Language on Social Media},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
pages = {6174--6184},
address = {Marseille, France},
url = {https://www.aclweb.org/anthology/2020.lrec-1.758},
} | 3 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: OffensEval-TR 2020
tags:
- offensive-language-classification
dataset_info:
features:
- name: id
dtype: int32
- name: tweet
dtype: string
- name: subtask_a
dtype:
class_label:
names:
'0': NOT
'1': 'OFF'
config_name: offenseval2020-turkish
splits:
- name: train
num_bytes: 4260505
num_examples: 31756
- name: test
num_bytes: 481300
num_examples: 3528
download_size: 2048258
dataset_size: 4741805
---
# Dataset Card for OffensEval-TR 2020
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/)
- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf)
- **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
### Dataset Summary
The file offenseval-tr-training-v1.tsv contains 31,756 annotated tweets.
The file offenseval-annotation.txt contains a short summary of the annotation guidelines.
Twitter user mentions were substituted by @USER and URLs were substituted by URL.
Each instance contains one label corresponding to the following sub-task:
- Sub-task A: Offensive language identification;
### Supported Tasks and Leaderboards
The dataset was published in this [paper](https://coltekin.github.io/offensive-turkish/troff.pdf).
### Languages
The dataset is based on Turkish.
## Dataset Structure
### Data Instances
A binary dataset with (NOT) Not Offensive and (OFF) Offensive tweets.
### Data Fields
Instances are included in TSV format as follows:
`ID INSTANCE SUBA`
The column names in the file are the following:
`id tweet subtask_a`
The labels used in the annotation are listed below.
#### Task and Labels
(A) Sub-task A: Offensive language identification
- (NOT) Not Offensive - This post does not contain offense or profanity.
- (OFF) Offensive - This post contains offensive language or a targeted (veiled or direct) offense
In our annotation, we label a post as offensive (OFF) if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct.
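The `subtask_a` feature is stored as a class label; a small sketch of the integer-to-name mapping declared in the card header (the helper names are illustrative, not part of the dataset):

```python
# Mapping from the `class_label` declaration in the card header:
# index 0 -> NOT (not offensive), index 1 -> OFF (offensive).
LABELS = ["NOT", "OFF"]

def int2str(label_id: int) -> str:
    """Turn a stored integer label into its name."""
    return LABELS[label_id]

def str2int(name: str) -> int:
    """Turn a label name back into its stored integer."""
    return LABELS.index(name)
```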
### Data Splits
| train | test |
|------:|-----:|
| 31756 | 3528 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
From Twitter.
### Annotations
[More Information Needed]
#### Annotation process
We describe the labels above in a “flat” manner. However, the annotation process we follow is hierarchical. The following QA pairs give a more flowchart-like procedure to follow:
1. Is the tweet in Turkish and understandable?
    * No: mark tweet X for exclusion, and go to next tweet
    * Yes: continue to step 2
2. Does the tweet include offensive/inappropriate language?
    * No: mark the tweet *non*, go to step 4
    * Yes: continue to step 3
3. Is the offense in the tweet targeted?
    * No: mark the tweet *prof*, go to step 4
    * Yes: choose one (or more) of *grp*, *ind*, *oth* based on the definitions above. Please try to limit the number of labels unless it is clear that the tweet includes offense against multiple categories.
4. Was the labeling decision difficult (precise answer needs more context, tweet includes irony, or for another reason)?
    * No: go to next tweet
    * Yes: add the label X, go to next tweet
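The hierarchical procedure above can be sketched as a small decision function (a paraphrase for illustration only, not the annotation tool itself; parameter names are invented):

```python
def annotate(is_turkish, is_offensive, is_targeted,
             target_labels=(), is_difficult=False):
    """Walk the hierarchical annotation flow described above.

    Returns the label list for one tweet, or None when the tweet
    is excluded at step 1.
    """
    # Step 1: exclude tweets that are not understandable Turkish.
    if not is_turkish:
        return None
    labels = []
    if not is_offensive:
        labels.append("non")           # step 2: no offensive language
    elif not is_targeted:
        labels.append("prof")          # step 3: untargeted profanity
    else:
        labels.extend(target_labels)   # step 3: grp / ind / oth
    if is_difficult:
        labels.append("X")             # step 4: flag difficult cases
    return labels
```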
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The annotations are distributed under the terms of [Creative Commons Attribution License (CC-BY)](https://creativecommons.org/licenses/by/2.0/). Please cite the following paper, if you use this resource.
### Citation Information
```
@inproceedings{coltekin2020lrec,
author = {\c{C}\"{o}ltekin, \c{C}a\u{g}r{\i}},
year = {2020},
title = {A Corpus of Turkish Offensive Language on Social Media},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
pages = {6174--6184},
address = {Marseille, France},
url = {https://www.aclweb.org/anthology/2020.lrec-1.758},
}
```
### Contributions
Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset. | 5,943 | [
opus_memat | 2022-11-03T16:08:11.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:xh",
"license:unknown",
"region:us"
] | null | Xhosa-English parallel corpus from the Medical Machine Translation project, funded by EPSRC, which worked on machine translation between isiXhosa and English, with a focus on the medical domain. | J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- xh
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusMemat
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- xh
- en
config_name: xh-en
splits:
- name: train
num_bytes: 25400570
num_examples: 154764
download_size: 8382865
dataset_size: 25400570
---
# Dataset Card for opus_memat
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[memat](http://opus.nlpl.eu/memat.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Xhosa-English parallel corpus from the Medical Machine Translation project, funded by EPSRC, which worked on machine translation between isiXhosa and English, with a focus on the medical domain.
### Supported Tasks and Leaderboards
The underlying task is machine translation from Xhosa to English
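The `translation` feature declared in the card header (languages `xh`, `en`; config `xh-en`) implies the same instance shape as other OPUS corpora; a sketch of that shape, where the sentence strings are placeholders rather than corpus text:

```python
# Instance shape implied by the `translation` feature in the header.
# The sentence strings are placeholders, not corpus data.
example = {"translation": {"xh": "<isiXhosa sentence>",
                           "en": "<English sentence>"}}

lang_codes = sorted(example["translation"])
```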
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. | 3,349 | [
polsum | 2022-11-03T16:07:56.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:pl",
"license:cc-by-3.0",
"region:us"
] | null | Polish Summaries Corpus: the corpus of Polish news summaries. | @inproceedings{
ogro:kop:14:lrec,
author = "Ogrodniczuk, Maciej and Kopeć, Mateusz",
pdf = "http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf",
title = "The {P}olish {S}ummaries {C}orpus",
pages = "3712--3715",
crossref = "lrec:14"
}
@proceedings{
lrec:14,
editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
isbn = "978-2-9517408-8-4",
title = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
url = "http://www.lrec-conf.org/proceedings/lrec2014/index.html",
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
address = "Reykjavík, Iceland",
key = "LREC",
year = "2014",
organization = "European Language Resources Association (ELRA)"
} | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: null
pretty_name: Polish Summaries Corpus
dataset_info:
features:
- name: id
dtype: string
- name: date
dtype: string
- name: title
dtype: string
- name: section
dtype: string
- name: authors
dtype: string
- name: body
dtype: string
- name: summaries
sequence:
- name: ratio
dtype: int32
- name: type
dtype: string
- name: author
dtype: string
- name: body
dtype: string
- name: spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: span_text
dtype: string
splits:
- name: train
num_bytes: 34787575
num_examples: 569
download_size: 6082812
dataset_size: 34787575
---
# Dataset Card for Polish Summaries Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Repository:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Paper:** http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mateusz Kopeć](http://zil.ipipan.waw.pl/MateuszKopec)
### Dataset Summary
The Corpus contains a large number of manual summaries of news articles,
with many independently created summaries for a single text. Such an approach is supposed to overcome annotator bias, which is often described as a problem during the evaluation of summarization algorithms against a single gold standard.
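Given the nested `summaries` feature declared in the card header (parallel lists of `ratio`, `type`, `author`, `body`, `spans`), a record can be traversed as below. The tiny record is a made-up illustration of the shape only; since each annotator wrote summaries at several compression ratios, the sketch keys on the `(author, ratio)` pair:

```python
# A minimal record following the nested `summaries` schema in the header.
# All values are placeholders that only illustrate the shape.
doc = {
    "id": "199704210011",
    "summaries": {
        "ratio": [5, 10],
        "type": ["extract", "extract"],
        "author": ["I", "C"],
        "body": ["<summary by I>", "<summary by C>"],
        "spans": [
            {"start": [0], "end": [4], "span_text": ["<span>"]},
            {"start": [], "end": [], "span_text": []},
        ],
    },
}

def index_summaries(doc):
    """Zip the parallel lists into one body per (author, ratio) pair."""
    s = doc["summaries"]
    return {
        (a, r): b
        for a, r, b in zip(s["author"], s["ratio"], s["body"])
    }
```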
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Polish
## Dataset Structure
### Data Instances
See below an example from the dataset. Detailed descriptions of the fields are provided in the following section.
```
{'authors': 'Krystyna Forowicz',
'body': "ROZMOWA\n\nProf. Krzysztof Ernst, kierownik Zakładu Optyki Instytutu Fizyki Doświadczalnej Uniwersytetu Warszawskiego\n\nLidarowe oczy\n\nRYS. MAREK KONECKI\n\nJutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.\n\nCzy to kosztowne urządzenie będzie służyło tylko naukowcom?\n\nTego typu lidar jest rzeczywiście drogi, kosztuje około miliona marek niemieckich. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Nad lidarem pracują specjaliści od laserów i od komputerów. Współpracujemy z doskonałym laboratorium prof. Ludgera Wöste z Freie Universitat Berlin rozwijającym m.in. problematykę lidarową. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią lepiej i dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. \n\nBadania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. 
Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Ale np. obecnie prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen. Tym szkodliwym gazem może być skażone powietrze w miastach, w których zlokalizowane są zakłady chemiczne, np. w Bydgoszczy pewne ilości fosgenu emitują Zakłady Chemiczne Organika- Zachem. \n\nLidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie. Możemy np. badać zawartość ozonu w troposferze. Okazuje się bowiem, że o ile brak tego gazu w wysokich warstwach atmosfery powoduje groźny efekt cieplarniany, to jego nadmiar tuż nad Ziemią jest szkodliwy. Groźne są też substancje gazowe, jak np. tlenki azotu, będące następstwem spalin samochodowych. A samochodów przybywa.\n\nCzy stać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nKoszt jednego dnia kampanii pomiarowej firmy zachodnie szacują na kilka tysięcy DM. Potrzebne są pieniądze na utrzymanie lidaru, na prowadzenie badań. Nasze przedsięwzięcie nie ma charakteru komercyjnego. Koszt pomiarów będzie znacznie niższy. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Chcielibyśmy rozwinąć tutaj współpracę z państwowymi i wojewódzkimi służbami ochrony środowiska. Tego typu badania były prowadzone np. w Lyonie. 
Okazało się, że najwięcej tlenków azotu występuje niekoniecznie tam gdzie są one produkowane, to znaczy nie przy najruchliwszych ulicach, jeśli są one dobrze wentylowane a gromadzą się one w małych uliczkach. Przede wszystkim jednak do końca tego roku zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu trzech granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. Prowadziliśmy pomiary w samym Turowie, gdzie elektrownia Turoszowska jest głównym źródłem emisji. W planie mamy Bogatynię, zagłębie miedziowe. \n\nW Czarnym Trójkącie istnieje wiele stacjonarnych stacji monitoringowych.\n\nNasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych. \n\nJak wypadł Czarny Trójkąt?\n\nKiedy występowaliśmy o finansowanie tego projektu do Fundacji Współpracy Polsko-Niemieckiej zanieczyszczenie powietrza w Czarnym Trójkącie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać. Obecnie stężenie dwutlenku siarki jest na granicy naszych możliwości pomiarowych. Dla regionu Turoszowskiego to dobra wiadomość i dla stosunków polsko-niemieckich też.\n\nTypów lidarów jest wiele \n\nTen lidar pracuje w obszarze bliskiego nadfioletu i promieniowania widzialnego, które jest wynikiem wykorzystania drugiej lub trzeciej harmonicznej lasera szafirowego, pracującego na granicy czerwieni i podczerwieni. DIAL jest tym typem lidara, który dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. 
W Stanach Zjednoczonych lidary umieszcza się na satelitach (program NASA). Określają na przestrzeni kilkudziesięciu kilometrów rozkłady temperatury, wilgotności, ciśnienia, a także prędkości wiatru. Wykrywają pojawianie się huraganów, a nawet mogą określać rozmiary oka tajfunu.\n\nIle takich urządzeń jest w Europie?\n\n- W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Wykrywanie toluenu i benzenu jest oryginalnym rozwiązaniem. Długość fali dla benzenu jest już na skraju możliwości widmowych. Nasz lidar typu DIAL jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. Ale historia lidarów w naszym kraju jest dłuższa i zaczęła się na początku lat 60. Pierwsze próby prowadzone były w stacji geofizycznej PAN w Belsku, niedługo po skonstruowaniu pierwszego w świecie lasera rubinowego. Potem powstał lidar stacjonarny, również typu DIAL, w Gdańsku, a w Krakowie sodary - urządzenia oparte na falach akustycznych, wygodne np. do pomiarów szybkości wiatru. Lidar umieszczony na samochodzie i zbudowany w latach 80 na Politechnice Poznańskiej w perspektywie miał być lidarem typu DIAL.\n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji (zdjęć satelitarnych) Instytutu Geofizyki i, co bardzo ważne, współpraca z Freie Universität Berlin. Mamy również na UW Międzywydziałowe Studia Ochrony Środowiska i studentom przekazujemy informacje o lidarze i fizycznych metodach badania środowiska. Nasze działania dydaktyczne bardzo efektywnie wspiera NFOŚ.\n\nRozmawiała Krystyna Forowicz",
'date': '1997-04-21',
'id': '199704210011',
'section': 'Nauka i Technika',
'summaries': {'author': ['I',
'I',
'I',
'C',
'C',
'C',
'K',
'K',
'K',
'G',
'G',
'G',
'J',
'J',
'J'],
'body': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Możemy np. badać zawartość ozonu w troposferze. W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Fizycy dotychczas nie zajmowali się ochroną środowiska?Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną. Żeby przetworzyć sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych. Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Tego typu lidar jest drogi, kosztuje około miliona marek niemieckich. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową i dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\n\nto kosztowne urządzenie będzie służyło tylko naukowcom?\n\nlidar jest rzeczywiście drogi. to najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze. Ale prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.\n\nstać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. zanieczyszczenie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.\nDIAL dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.',
'Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie na inne substancje występujące w atmosferze. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
"Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. \n\nChcemy mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. \n\nDIAL jest tym typem lidara, który dzisiaj ma największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. W Europie takich lidarów jak nasz jest zaledwie kilka. Nasz lidar jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.",
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany.'],
'ratio': [10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5],
'spans': [{'end': [244, 396, 457, 867, 922, 1022, 1103, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.'],
'start': [153, 247, 398, 760, 875, 1020, 1023, 1631]},
{'end': [244,
396,
457,
867,
922,
1022,
1103,
1878,
2132,
2296,
2969,
6225,
6985,
7047,
7282,
7326,
7383],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?',
'Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie.',
'Możemy np. badać zawartość ozonu w troposferze.',
'W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu.',
'',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [153,
247,
398,
760,
875,
1020,
1023,
1631,
2064,
2134,
2921,
6108,
6984,
6992,
7049,
7304,
7344]},
{'end': [244, 396, 1103, 1774, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał',
'.'],
'start': [153, 247, 1102, 1631, 1876]},
{'end': [159,
227,
243,
360,
804,
882,
1025,
1044,
1103,
1454,
1540,
1629,
2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.',
'Żeby przetworzyć',
'sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać',
'dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153,
173,
238,
270,
591,
875,
1022,
1033,
1101,
1437,
1459,
1549,
2670]},
{'end': [159, 227, 243, 396, 922, 1103, 1629, 2062, 2582, 2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.',
'',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153, 173, 238, 270, 542, 1020, 1437, 1631, 2581, 2602]},
{'end': [159, 227, 243, 360, 804, 882, 1025, 1044, 1102],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.'],
'start': [153, 173, 238, 270, 591, 875, 1022, 1033, 1101]},
{'end': [246, 396, 922, 1102, 4763],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.'],
'start': [153, 247, 590, 1022, 4555]},
{'end': [246, 396, 480, 542, 1021, 1102, 2920, 4989],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Tego typu lidar jest',
'drogi, kosztuje około miliona marek niemieckich.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.'],
'start': [153, 247, 459, 493, 590, 1022, 2602, 4555]},
{'end': [246, 360, 626, 883, 920, 1102],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową',
'i',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [153, 247, 625, 760, 919, 1032]},
{'end': [158,
262,
271,
359,
397,
590,
761,
803,
867,
907,
922,
1025,
1102,
3311,
3516,
3595,
3623,
3675,
4226,
4332],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu',
'.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.'],
'start': [153,
172,
263,
279,
396,
548,
699,
769,
806,
875,
911,
1022,
1033,
3310,
3462,
3556,
3596,
3674,
4158,
4233]},
{'end': [158,
262,
271,
359,
398,
459,
498,
543,
590,
761,
803,
867,
922,
1025,
1102,
2242,
2300,
2406,
3247,
3311,
3516,
3595,
3675,
4226,
4333,
5130,
5241,
5439,
5661,
5756,
7113],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to kosztowne urządzenie będzie służyło tylko naukowcom?',
'lidar jest rzeczywiście drogi',
'.',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze',
'. Ale',
'prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.',
'',
'stać nas będzie na prowadzenie pomiarów ozonu w miastach?',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'zanieczyszczenie',
'było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.',
'',
'DIAL',
'dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.'],
'start': [153,
172,
263,
279,
396,
402,
469,
541,
548,
699,
769,
806,
875,
1022,
1033,
2062,
2294,
2312,
3245,
3251,
3462,
3556,
3596,
4158,
4233,
5114,
5160,
5438,
5656,
5690,
6990]},
{'end': [262, 271, 359, 397, 590, 761, 803, 807, 867, 907, 922, 1025, 1102],
'span_text': ['Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'',
'wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [227,
263,
279,
396,
548,
699,
769,
806,
824,
875,
911,
1022,
1033]},
{'end': [245,
360,
761,
936,
971,
1022,
1733,
1878,
4159,
4614,
4772,
4818,
4860,
4906,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie',
'na inne substancje występujące w atmosferze.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.',
'Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
699,
924,
942,
977,
1631,
1876,
4076,
4555,
4765,
4778,
4823,
4904,
7114,
7305,
7344]},
{'end': [245,
360,
625,
761,
936,
1022,
1311,
1357,
1436,
1733,
1878,
3247,
3311,
3563,
3676,
4159,
4614,
4772,
4818,
4906,
5410,
5439,
5701,
5789,
6163,
6364,
6472,
7048,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej',
'potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
"Pakiet software'u",
'wzbogacamy o nowe algorytmy, które potrafią',
'dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'',
'',
'Chcemy',
'mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi.',
'',
'',
'DIAL jest tym typem lidara, który dzisiaj ma',
'największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia.',
'W Europie takich lidarów jak nasz jest zaledwie kilka.',
'Nasz lidar',
'jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
591,
668,
924,
942,
1293,
1313,
1366,
1631,
1876,
3246,
3310,
3556,
3567,
4076,
4555,
4765,
4778,
4823,
5409,
5438,
5656,
5714,
6108,
6353,
6374,
6990,
7049,
7305,
7344]},
{'end': [245, 271, 360, 761, 4159, 4614, 4772, 4818, 4860, 4905],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST:',
'to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.'],
'start': [227, 246, 276, 699, 4076, 4555, 4765, 4778, 4823, 4904]}],
'type': ['extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract']},
'title': 'Lidarowe oczy'}
```
### Data Fields
- `id`: a `string` example identifier
- `date`: date of the original article (`string`)
- `title`: title of the original article (`string`)
- `section`: the section of the newspaper the original article belonged to (`string`)
- `authors`: original article authors (`string`)
- `body`: original article body (list of `string`s)
- `summaries`: a dictionary feature containing summaries of the original article with the following attributes:
- `ratio`: ratio of summary - percentage of the original article (list of `int32`s)
- `type`: type of summary - extractive (`extract`) or abstractive (`abstract`) (list of `string`s)
- `author`: acronym of summary author (list of `string`s)
- `body`: body of summary (list of `string`s)
- `spans`: a list containing spans for extractive summaries (empty for abstractive summaries):
- `start`: start of span (`int32`)
- `end`: end of span (`int32`)
- `span_text`: span text (`string`)
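The `start`/`end` offsets in `spans` index directly into the article body, so an extractive summary can be rebuilt by slicing. A minimal sketch of that reconstruction — the record below is invented for illustration; only the field names come from this card:

```python
# Reconstruct an extractive summary from character spans.
# The example record is invented; field names follow the card schema.
body = "Jutro w Instytucie odbędzie się pokaz nowego polskiego lidara."
spans = {
    "start": [0, 32],
    "end": [18, 62],
    "span_text": ["Jutro w Instytucie", "pokaz nowego polskiego lidara."],
}

def reconstruct(body, spans):
    pieces = []
    for s, e, text in zip(spans["start"], spans["end"], spans["span_text"]):
        piece = body[s:e]
        # Each stored span_text should equal the corresponding body slice.
        assert piece == text, (piece, text)
        pieces.append(piece)
    return " ".join(pieces)

print(reconstruct(body, spans))
# → Jutro w Instytucie pokaz nowego polskiego lidara.
```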
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{
ogro:kop:14:lrec,
author = "Ogrodniczuk, Maciej and Kopeć, Mateusz",
pdf = "http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf",
title = "The {P}olish {S}ummaries {C}orpus",
pages = "3712--3715",
crossref = "lrec:14"
}
@proceedings{
lrec:14,
editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
isbn = "978-2-9517408-8-4",
title = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
url = "http://www.lrec-conf.org/proceedings/lrec2014/index.html",
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
address = "Reykjavík, Iceland",
key = "LREC",
year = "2014",
organization = "European Language Resources Association (ELRA)"
}
```
### Contributions
Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset. | 47,857 |
py_ast | 2022-11-18T21:40:05.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"license:bsd-2-claus... | null | Dataset consisting of parsed ASTs that were used to train and
evaluate the DeepSyn tool.
The Python programs are collected from GitHub repositories
by removing duplicate files, removing project forks (copy of another existing repository)
,keeping only programs that parse and have at most 30'000 nodes in the AST and
we aim to remove obfuscated files | @InProceedings{OOPSLA ’16, ACM,
title = {Probabilistic Model for Code with Decision Trees.},
authors={Raychev, V., Bielik, P., and Vechev, M.},
year={2016}
} | 3 | 87 | 2022-03-02T23:29:22 | ---
pretty_name: PyAst
annotations_creators:
- machine-generated
language_creators:
- found
language:
- code
license:
- bsd-2-clause
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids: []
paperswithcode_id: null
tags:
- code-modeling
- code-generation
dataset_info:
features:
- name: ast
sequence:
- name: type
dtype: string
- name: value
dtype: string
- name: children
sequence: int32
config_name: ast
splits:
- name: train
num_bytes: 1870790180
num_examples: 100000
- name: test
num_bytes: 907514993
num_examples: 50000
download_size: 526642289
dataset_size: 2778305173
---
# Dataset Card for [py_ast]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **homepage**: [py150](https://www.sri.inf.ethz.ch/py150)
- **Paper**: [Probabilistic Model for Code with Decision Trees](https://www.semanticscholar.org/paper/Probabilistic-model-for-code-with-decision-trees-Raychev-Bielik/62e176977d439aac2e2d7eca834a7a99016dfcaf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
The Python programs were collected from GitHub repositories
by removing duplicate files, removing project forks (copies of other existing repositories),
keeping only programs that parse and have at most 30'000 nodes in the AST,
and removing obfuscated files.
### Supported Tasks and Leaderboards
Code Representation, Unsupervised Learning
### Languages
Python
## Dataset Structure
### Data Instances
A typical data point contains the parsed AST of a Python program.
The main key is `ast`, under which each program's AST is stored as a flat list of nodes.
Each node has:
- `type`, the type of the node.
- `children`, the indices of the node's children in the list (present only when non-empty).
- `value`, the hardcoded value of the node, if it has one (else "N/A").
An example:
```
[ {"type":"Module","children":[1,4]},{"type":"Assign","children":[2,3]},{"type":"NameStore","value":"x"},{"type":"Num","value":"7"}, {"type":"Print","children":[5]}, {"type":"BinOpAdd","children":[6,7]}, {"type":"NameLoad","value":"x"}, {"type":"Num","value":"1"} ]
```
### Data Fields
- `ast`: a list of dictionaries, where each dictionary is a node of the Abstract Syntax Tree.
- `type`: the type of the node.
- `children`: the indices of the nodes that are children of the given node.
- `value`: the hardcoded value, if the node holds one.
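As an illustration of how these fields fit together, the flat node list can be walked via the `children` indices. The `render` helper below is a hypothetical sketch (not part of the dataset tooling), applied to the small instance shown above:

```python
# Sketch: pretty-print a py_ast-style flattened AST (a list of nodes
# that reference their children by integer index).
ast = [
    {"type": "Module", "children": [1, 4]},
    {"type": "Assign", "children": [2, 3]},
    {"type": "NameStore", "value": "x"},
    {"type": "Num", "value": "7"},
    {"type": "Print", "children": [5]},
    {"type": "BinOpAdd", "children": [6, 7]},
    {"type": "NameLoad", "value": "x"},
    {"type": "Num", "value": "1"},
]

def render(nodes, idx=0, depth=0):
    """Return the subtree rooted at `idx` as a list of indented lines."""
    node = nodes[idx]
    label = node["type"] + (f"={node['value']}" if "value" in node else "")
    lines = ["  " * depth + label]
    for child in node.get("children", []):
        lines.extend(render(nodes, child, depth + 1))
    return lines

print("\n".join(render(ast)))
```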
### Data Splits
The data is split into a training and test set.
The final split sizes are as follows:
|                  |  train |  test |
|------------------|-------:|------:|
| py_ast examples  | 100000 | 50000 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Raychev, V., Bielik, P., and Vechev, M.
### Licensing Information
MIT, BSD and Apache
### Citation Information
```
@inproceedings{10.1145/2983990.2984041,
author = {Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
title = {Probabilistic Model for Code with Decision Trees},
year = {2016},
isbn = {9781450344449},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2983990.2984041},
doi = {10.1145/2983990.2984041},
booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications},
pages = {731–747},
numpages = {17},
keywords = {Code Completion, Decision Trees, Probabilistic Models of Code},
location = {Amsterdam, Netherlands},
series = {OOPSLA 2016}
}
```
### Contributions
Thanks to [@reshinthadithyan](https://github.com/reshinthadithyan) for adding this dataset. | 5,688 | [
[
-0.01027679443359375,
-0.0474853515625,
0.012786865234375,
0.0135955810546875,
-0.00048828125,
0.007236480712890625,
-0.0290069580078125,
-0.0176544189453125,
0.002490997314453125,
0.03033447265625,
-0.04376220703125,
-0.06512451171875,
-0.037109375,
0.00661... |
telugu_books | 2022-11-03T16:07:57.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"lang... | null | This dataset was created by scraping Telugu novels from teluguone.com. It can be used for NLP tasks such as topic modeling, word embeddings, and transfer learning. | @InProceedings{huggingface:dataset,
title = {Indic NLP - Natural Language Processing for Indian Languages},
authors = {Sudalai Rajkumar, Anusha Motamarri},
year={2019}
} | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- te
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: TeluguBooks
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 315076011
num_examples: 25794
download_size: 0
dataset_size: 315076011
---
# Dataset Card for [telugu_books]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
[Telugu Books](https://www.kaggle.com/sudalairajkumar/telugu-nlp)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was created by scraping Telugu novels from teluguone.com. It can be used for NLP tasks such as topic modeling, word embeddings, and transfer learning.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
TE - Telugu
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- Text: Sentence from a novel
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Anusha Motamarri
### Annotations
#### Annotation process
Anusha Motamarri
#### Who are the annotators?
Anusha Motamarri
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset. | 3,163 | [
[
-0.00691986083984375,
-0.025634765625,
-0.0097808837890625,
0.01166534423828125,
-0.03448486328125,
0.00875091552734375,
-0.0114898681640625,
-0.0181884765625,
0.045257568359375,
0.037353515625,
-0.051513671875,
-0.052276611328125,
-0.042510986328125,
0.0236... |
turkish_shrinked_ner | 2023-01-25T14:54:44.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-turkish_ner",
"language:tr",
"license:cc-by-4.0",
... | null | Shrinked version (48 entity type) of the turkish_ner.
Original turkish_ner dataset: Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains.
Shrinked entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle | \ | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- tr
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-turkish_ner
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TurkishShrinkedNer
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academic
'2': I-academic
'3': B-academic_person
'4': I-academic_person
'5': B-aircraft
'6': I-aircraft
'7': B-album_person
'8': I-album_person
'9': B-anatomy
'10': I-anatomy
'11': B-animal
'12': I-animal
'13': B-architect_person
'14': I-architect_person
'15': B-capital
'16': I-capital
'17': B-chemical
'18': I-chemical
'19': B-clothes
'20': I-clothes
'21': B-country
'22': I-country
'23': B-culture
'24': I-culture
'25': B-currency
'26': I-currency
'27': B-date
'28': I-date
'29': B-food
'30': I-food
'31': B-genre
'32': I-genre
'33': B-government
'34': I-government
'35': B-government_person
'36': I-government_person
'37': B-language
'38': I-language
'39': B-location
'40': I-location
'41': B-material
'42': I-material
'43': B-measure
'44': I-measure
'45': B-medical
'46': I-medical
'47': B-military
'48': I-military
'49': B-military_person
'50': I-military_person
'51': B-nation
'52': I-nation
'53': B-newspaper
'54': I-newspaper
'55': B-organization
'56': I-organization
'57': B-organization_person
'58': I-organization_person
'59': B-person
'60': I-person
'61': B-production_art_music
'62': I-production_art_music
'63': B-production_art_music_person
'64': I-production_art_music_person
'65': B-quantity
'66': I-quantity
'67': B-religion
'68': I-religion
'69': B-science
'70': I-science
'71': B-shape
'72': I-shape
'73': B-ship
'74': I-ship
'75': B-software
'76': I-software
'77': B-space
'78': I-space
'79': B-space_person
'80': I-space_person
'81': B-sport
'82': I-sport
'83': B-sport_name
'84': I-sport_name
'85': B-sport_person
'86': I-sport_person
'87': B-structure
'88': I-structure
'89': B-subject
'90': I-subject
'91': B-tech
'92': I-tech
'93': B-train
'94': I-train
'95': B-vehicle
'96': I-vehicle
splits:
- name: train
num_bytes: 200728389
num_examples: 614515
download_size: 0
dataset_size: 200728389
---
# Dataset Card for turkish_shrinked_ner
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://www.kaggle.com/behcetsenturk
### Dataset Summary
Shrinked, processed version (48 entity types) of the turkish_ner dataset.
Original turkish_ner dataset: Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains.
Shrinked entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle
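The `ner_tags` follow the usual BIO scheme (`B-`/`I-` prefixes over the entity types above, plus `O`). As a rough, unofficial sketch — `bio_spans` is a hypothetical helper, not part of this dataset's tooling — entity spans can be decoded from the labels like this:

```python
# Sketch: decode (entity_type, start, end_exclusive) spans from BIO tags.
# The label sequence below is illustrative, not taken from the dataset.
def bio_spans(labels):
    """Return (entity_type, start, end_exclusive) spans from BIO labels."""
    spans, current = [], None
    for i, label in enumerate(labels):
        if label.startswith("B-"):
            if current:
                spans.append(current)
            current = (label[2:], i, i + 1)
        elif label.startswith("I-") and current and label[2:] == current[0]:
            current = (current[0], current[1], i + 1)
        else:  # "O" or an inconsistent I- tag ends the current span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

labels = ["B-person", "I-person", "O", "B-country", "O"]
print(bio_spans(labels))  # [('person', 0, 2), ('country', 3, 4)]
```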
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Turkish
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
There's only the training set.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Behcet Senturk
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
[Needs More Information]
### Contributions
Thanks to [@bhctsntrk](https://github.com/bhctsntrk) for adding this dataset. | 6,813 | [
[
-0.047119140625,
-0.046417236328125,
0.0058135986328125,
0.006572723388671875,
-0.0238800048828125,
-0.0018796920776367188,
-0.03485107421875,
-0.0262908935546875,
0.04168701171875,
0.041168212890625,
-0.04998779296875,
-0.0706787109375,
-0.06231689453125,
0... |
yoruba_wordsim353 | 2022-11-03T16:07:49.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:yo",
"li... | null | A translation of the word pair similarity dataset wordsim-353 to Yorùbá.
The dataset was presented in the paper
Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced
Languages: the Case of Yorùbá and Twi (LREC 2020). | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\\`u}b{\\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
} | 0 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
- yo
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Wordsim-353 In Yorùbá (YorubaWordsim353)
dataset_info:
features:
- name: english1
dtype: string
- name: english2
dtype: string
- name: yoruba1
dtype: string
- name: yoruba2
dtype: string
- name: similarity
dtype: float32
splits:
- name: test
num_bytes: 19299
num_examples: 353
download_size: 17039
dataset_size: 19299
---
# Dataset Card for wordsim-353 in Yorùbá (yoruba_wordsim353)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Leaderboard:** -
- **Point of Contact:** Jesujoba Alabi ( jesujobaoluwadara.alabi (at) dfki.de ) and David Adelani ( didelani (at) lsv.uni-saarland.de )
### Dataset Summary
A translation of the word pair similarity dataset wordsim-353 to Yorùbá.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Yorùbá (ISO 639-1: yo)
## Dataset Structure
### Data Instances
An instance consists of a pair of words as well as their similarity. The dataset contains both the original English words (from wordsim-353) as well as their translation to Yorùbá.
### Data Fields
- `english1`: the first word of the pair; the original English word
- `english2`: the second word of the pair; the original English word
- `yoruba1`: the first word of the pair; translation to Yorùbá
- `yoruba2`: the second word of the pair; translation to Yorùbá
- `similarity`: similarity rating according to the English dataset
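Pair-similarity datasets such as this one are commonly used to evaluate embeddings by correlating model-computed similarities with the human ratings, typically via Spearman's rank correlation. The sketch below uses toy numbers rather than actual dataset entries, with a hand-rolled `spearman` helper to avoid extra dependencies:

```python
# Sketch: Spearman rank correlation between gold similarity ratings and
# model similarities (e.g. cosine similarities between word embeddings).
def ranks(values):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(xs, ys):
    """Pearson correlation of ranks; assumes non-constant inputs."""
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

gold = [9.8, 7.5, 3.1, 0.9]        # human similarity ratings (toy values)
model = [0.91, 0.66, 0.42, 0.05]   # model similarities (toy values)
print(spearman(gold, model))       # 1.0 when the rankings match perfectly
```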
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset. | 3,884 | [
[
-0.040283203125,
-0.052978515625,
0.0121002197265625,
0.01824951171875,
-0.033477783203125,
-0.003078460693359375,
-0.03125,
-0.03192138671875,
0.057647705078125,
0.039764404296875,
-0.057769775390625,
-0.057098388671875,
-0.061248779296875,
0.02011108398437... |
Abirate/code_net_test_final_dataset | 2022-01-27T10:15:52.000Z | [
"region:us"
] | Abirate | null | null | 1 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Aisha/BAAD6 | 2022-10-22T05:30:28.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unkno... | Aisha | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
- crowdsourced
- expert-generated
language_creators:
- found
- crowdsourced
language:
- bn
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'BAAD6: Bangla Authorship Attribution Dataset (6 Authors)'
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Description
**BAAD6** is an **Authorship Attribution dataset for Bengali Literature**. It was collected and analyzed by Hemayet et al. [[1]](https://ieeexplore.ieee.org/document/8631977). The data was obtained from various online posts and blogs. The dataset is balanced among the 6 authors, with 350 sample texts per author. It is relatively small and noisy, given the sources it was collected from and its cleaning procedure. Nonetheless, it may help evaluate authorship attribution systems, as it resembles texts often found on the Internet. Details about the dataset are given in the table below.
| Author | Samples | Word count | Unique word |
| ------ | ------- | ---------- | ----------- |
| fe | 350 | 357k | 53k |
| ij | 350 | 391k | 72k |
| mk | 350 | 377k | 47k |
| rn | 350 | 231k | 50k |
| hm | 350 | 555k | 72k |
| rg | 350 | 391k | 58k |
| **Total** | 2,100 | 2,304,338 | 230,075 |
| **Average** | 350 | 384,056.33 | 59,006.67 |
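Statistics like those in the table above can be recomputed with a few lines of Python. The sketch below uses placeholder texts rather than actual BAAD6 samples:

```python
# Sketch: per-author sample, word-count, and unique-word statistics,
# as reported in the table above. Sample texts are placeholders only.
from collections import defaultdict

samples = [
    ("fe", "bangla text sample one"),
    ("fe", "bangla text sample two"),
    ("ij", "another author entirely"),
]

words = defaultdict(list)
for author, text in samples:
    words[author].extend(text.split())

stats = {
    author: {"samples": sum(1 for a, _ in samples if a == author),
             "word_count": len(ws),
             "unique_words": len(set(ws))}
    for author, ws in words.items()
}
print(stats)
```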
## Citation
If you use this dataset, please cite the paper [A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature](https://ieeexplore.ieee.org/document/8631977).
```
@INPROCEEDINGS{BAAD6Dataset,
author={Ahmed Chowdhury, Hemayet and Haque Imon, Md. Azizul and Islam, Md. Saiful},
booktitle={2018 21st International Conference of Computer and Information Technology (ICCIT)},
title={A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature},
year={2018},
volume={},
number={},
pages={1-6},
doi={10.1109/ICCITECHN.2018.8631977}
}
```
This dataset is also available in Mendeley: [BAAD6 dataset](https://data.mendeley.com/datasets/w9wkd7g43f/5). Always make sure to use the latest version of the dataset. Cite the dataset directly by:
```
@misc{BAAD6Dataset,
author = {Ahmed Chowdhury, Hemayet and Haque Imon, Md. Azizul and Khatun, Aisha and Islam, Md. Saiful},
title = {BAAD6: Bangla Authorship Attribution Dataset},
year={2018},
doi = {10.17632/w9wkd7g43f.5},
howpublished= {\url{https://data.mendeley.com/datasets/w9wkd7g43f/5}}
}
``` | 2,490 | [
[
0.0017557144165039062,
-0.012176513671875,
0.01554107666015625,
0.016021728515625,
-0.02728271484375,
-0.004085540771484375,
0.013336181640625,
-0.0213470458984375,
0.0272369384765625,
0.0240325927734375,
-0.006641387939453125,
-0.02972412109375,
-0.058685302734... |
AndrewMcDowell/de_corpora_parliament_processed | 2022-02-04T15:45:27.000Z | [
"region:us"
] | AndrewMcDowell | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Anurag-Singh-creator/task | 2021-12-12T21:26:53.000Z | [
"region:us"
] | Anurag-Singh-creator | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Atsushi/fungi_diagnostic_chars_comparison_japanese | 2023-10-08T21:35:23.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | Atsushi | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- other
language:
- ja
license:
- cc-by-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
size_categories:
- 100K<n<1M
---
fungi_diagnostic_chars_comparison_japanese
大菌輪 "Diagnostic Characters Summary" (識別形質まとめ) dataset
Last updated: 2023/10/9 (through R3-11401)
====
### Languages
Japanese
This dataset is available in Japanese only.
# Overview
The website [大菌輪](http://mycoscouter.coolblog.jp/daikinrin/), run personally by Atsushi Nakajima (中島淳志), provides summaries and indexing of several thousand mycological taxonomy papers in the form of "3-line paper summaries" (論文3行まとめ).
As part of that work, statements about the diagnostic characters that are "shared" or "different" between one fungus and another are extracted by hand.
This dataset compiles the extracted diagnostic characters, with categories such as "色/color" and "形状/shape" assigned semi-automatically.
The "3-line paper summaries" are updated daily, but this dataset is expected to be updated roughly once a month.
## Related datasets
"3-line paper summaries" (論文3行まとめ)
[Atsushi/fungi_indexed_mycological_papers_japanese](https://huggingface.co/datasets/Atsushi/fungi_indexed_mycological_papers_japanese)
"Trait Circus dataset" (controlled traits)
[Atsushi/fungi_trait_circus_database](https://huggingface.co/datasets/Atsushi/fungi_trait_circus_database)
## Column descriptions
* R3ID … the ID of the corresponding 大菌輪 "3-line paper summary".
* No … a number assigned within each R3ID so that every diagnostic sentence has a unique ID.
* comparison_source … the taxon (scientific name) the comparison is made from.
* comparison_target … the taxon (scientific name) the comparison is made against.
* sentence … the diagnostic sentence. All sentences are in Japanese.
* label … a semi-automatically assigned category (manually corrected, but not double-checked, so some misclassifications likely remain). The following 25 categories exist:
* サイズ/size
* 分子系統解析/molecular_phylogenetic_analysis
* 形状/shape
* 色/color
* 地理的分布/geographical_distribution
* 生息環境/habitat
* 表面性状/surface_characteristics
* 構造/structure
* 有無/presence
* 形態全般/general_morphology
* 位置/position
* 二次代謝産物/secondary_metabolite
* 呈色反応/chemical_reaction
* 数量/amount
* 発達/development
* 生理学的形質/physiological_characters
* 分類/classification
* 資化・発酵能/assimilation_and_fermentation
* 質感/texture
* 味・臭い/taste_and_smell
* 病害・病原性関連/disease_and_pathogenecity
* 全般/general_characters
* 耐性・感受性/resistance_and_susceptibility
* 栄養摂取様式/nutrition_style
* 未分類/unclassified
* common_or_different … "1" for shared characters, "0" for differing characters.
* data_source … the URL of the source (literature) for each record. | 2,083 | [
[
-0.032562255859375,
-0.058197021484375,
0.0418701171875,
0.02728271484375,
-0.044525146484375,
-0.0105743408203125,
0.004909515380859375,
-0.036895751953125,
0.07342529296875,
0.034820556640625,
-0.0374755859375,
-0.072998046875,
-0.045135498046875,
0.058837... |
Bosio/pacman | 2021-09-28T16:00:06.000Z | [
"region:us"
] | Bosio | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Bosio/pacman_descriptions | 2021-09-29T14:05:41.000Z | [
"region:us"
] | Bosio | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
CyranoB/polarity | 2022-10-25T08:54:09.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1509.01626",
"regi... | CyranoB | The Amazon reviews dataset consists of reviews from amazon.
The data span a period of 18 years, including ~35 million reviews up to March 2013.
Reviews include product and user information, ratings, and a plaintext review. | @inproceedings{mcauley2013hidden,
title={Hidden factors and hidden topics: understanding rating dimensions with review text},
author={McAuley, Julian and Leskovec, Jure},
booktitle={Proceedings of the 7th ACM conference on Recommender systems},
pages={165--172},
year={2013}
} | 1 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Amazon Review Polarity
---
# Dataset Card for Amazon Review Polarity
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://registry.opendata.aws/
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
### Dataset Summary
The Amazon reviews dataset consists of reviews from amazon.
The data span a period of 18 years, including ~35 million reviews up to March 2013.
Reviews include product and user information, ratings, and a plaintext review.
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the content and the title, predict the correct star rating.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A typical data point comprises a title, a content, and the corresponding label.
An example from the AmazonPolarity test set looks as follows:
```
{
'title':'Great CD',
'content':"My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I'm in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life's hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing ""Who was that singing ?""",
'label':1
}
```
### Data Fields
- 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n".
- 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n".
- 'label': either 1 (positive) or 0 (negative) rating.
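The escaping convention described above (doubled internal quotes, literal `\n` for newlines) can be undone with a small helper. This is a sketch based solely on the field description in this card; the function name is illustrative:

```python
def unescape_field(raw: str) -> str:
    """Undo the escaping described for 'title' and 'content':
    a doubled double quote ("") becomes one quote, and a literal
    backslash-n sequence becomes a real newline."""
    return raw.replace('""', '"').replace('\\n', '\n')

# A doubled quote and an escaped newline are both restored:
restored = unescape_field('He said ""hi""\\nand left')
```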
### Data Splits
The Amazon reviews polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with score 3 are ignored. Each class has 1,800,000 training samples and 200,000 testing samples.
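The mapping above can be sketched in a few lines. This is a minimal illustration of the construction rule described in the card, not part of the official release scripts:

```python
def star_to_polarity(stars: int):
    """Map a 1-5 star review score to the polarity label used here:
    1-2 -> 0 (negative), 4-5 -> 1 (positive), 3 -> None (dropped)."""
    if stars in (1, 2):
        return 0
    if stars in (4, 5):
        return 1
    return None  # score-3 reviews are ignored entirely

# Scores 1..5 map to [0, 0, None, 1, 1]
labels = [star_to_polarity(s) for s in [1, 2, 3, 4, 5]]
```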
## Dataset Creation
### Curation Rationale
The Amazon reviews polarity dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Apache License 2.0
### Citation Information
McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset. | 5,298 | [
[
-0.043609619140625,
-0.038299560546875,
0.01763916015625,
0.0211334228515625,
-0.0298004150390625,
0.01300811767578125,
-0.015289306640625,
-0.01617431640625,
0.03192138671875,
0.0732421875,
-0.0648193359375,
-0.0770263671875,
-0.041107177734375,
0.006473541... |
Finnish-NLP/mc4_fi_cleaned | 2022-10-21T16:57:34.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|mc4",
"language:fi",
"region:us"
] | Finnish-NLP | null | null | 3 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- fi
license: []
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|mc4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: mC4 Finnish Cleaned
---
# Dataset Card for mC4 Finnish Cleaned
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
mC4 Finnish Cleaned is a cleaned version of the original mC4 Finnish split.
### Supported Tasks and Leaderboards
mC4 Finnish is mainly intended to pretrain Finnish language models and word representations.
### Languages
Finnish
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
The data have several fields:
- url: url of the source as a string
- text: text content as a string
- timestamp: timestamp as a string
- perplexity_kenlm_full: perplexity of the text calculated by KenLM model
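A common way to use the `perplexity_kenlm_full` field is to filter out high-perplexity (likely noisy) documents. The sketch below assumes the field names listed above; the threshold value is purely illustrative and not taken from this card:

```python
def keep_clean(example: dict, max_perplexity: float = 10000.0) -> bool:
    """Keep examples whose KenLM perplexity is under a ceiling.
    The field name matches the card; the default threshold is a
    made-up example value, not an official cutoff."""
    return example["perplexity_kenlm_full"] <= max_perplexity

rows = [
    {"url": "https://example.fi/a", "text": "...", "timestamp": "2020-01-01",
     "perplexity_kenlm_full": 850.0},
    {"url": "https://example.fi/b", "text": "...", "timestamp": "2020-01-02",
     "perplexity_kenlm_full": 52000.0},
]
clean = [r for r in rows if keep_clean(r)]  # only the first row survives
```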
### Data Splits
Train, Validation
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 2,983 | [
[
-0.034912109375,
-0.03179931640625,
0.025238037109375,
0.0016298294067382812,
-0.021881103515625,
0.00894927978515625,
-0.00763702392578125,
-0.021240234375,
0.042266845703125,
0.054290771484375,
-0.0640869140625,
-0.07196044921875,
-0.0418701171875,
0.02859... |
IFSTalfredoswald/MBTI | 2021-10-25T10:40:02.000Z | [
"region:us"
] | IFSTalfredoswald | null | null | 1 | 87 | 2022-03-02T23:29:22 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,603 | [
[
-0.03265380859375,
-0.034698486328125,
0.00995635986328125,
0.0190277099609375,
-0.01482391357421875,
0.016937255859375,
-0.02294921875,
-0.025665283203125,
0.0458984375,
0.044097900390625,
-0.0626220703125,
-0.083251953125,
-0.05157470703125,
0.004974365234... |
Jack0508/demo | 2021-11-07T16:25:20.000Z | [
"region:us"
] | Jack0508 | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Jeska/vaccinchat | 2021-10-21T12:14:29.000Z | [
"region:us"
] | Jeska | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
LysandreJik/demo4 | 2021-09-25T20:02:48.000Z | [
"region:us"
] | LysandreJik | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Mansooreh/sharif-emotional-speech-dataset | 2021-10-19T23:33:59.000Z | [
"arxiv:1906.01155",
"region:us"
] | Mansooreh | null | null | 0 | 87 | 2022-03-02T23:29:22 | # <a href='https://arxiv.org/pdf/1906.01155.pdf'>ShEMO: a large-scale validated database for Persian speech emotion detection</a><br>
## Abstract
<div align="justify"> This paper introduces a large-scale, validated database for Persian called Sharif Emotional Speech Database (ShEMO). The database includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online radio plays. The ShEMO covers speech samples of 87 native-Persian speakers for five basic emotions including <i>anger</i>, <i>fear</i>, <i>happiness</i>, <i>sadness</i> and <i>surprise</i>, as well as neutral state. Twelve annotators label the underlying emotional state of utterances and majority voting is used to decide on the final labels. According to the kappa measure,
the inter-annotator agreement is 64% which is interpreted as "substantial agreement". We also present benchmark results based on common classification methods in speech emotion detection task. According to the experiments, support vector machine achieves the best results for both gender-independent (58.2%) and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available for academic purposes free of charge to provide a baseline for further research on Persian emotional speech.
## Download Dataset
To download female utterances (zip file):
```bash
wget -O female.zip "https://www.dropbox.com/s/42okby6c40w3j2x/female.zip?dl=0"
```
To download male utterances (zip file):
```bash
wget -O male.zip "https://www.dropbox.com/s/5ebs8hq1zm0qkp6/male.zip?dl=0"
```
To download labels & transcripts (json file):
```bash
wget https://github.com/pariajm/sharif-emotional-speech-dataset/raw/master/shemo.json
```
## Models Trained or Fine-tuned on ShEMO
Credits to [Mehrdad Farahani](https://github.com/m3hrdadfi/soxan)
- [Speech emotion detection in Persian (fa) using wav2vec 2.0](https://huggingface.co/m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition)
- [Speech emotion detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-emotion-recognition)
- [Speech gender detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-gender-recognition)
- [Automatic speech recognition in Persian (fa) using XLSR-53](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo)
## Overview of ShEMO
Feature | Status
------------- | ----------
**access** | open source
**language** | Persian (fa)
**modality** | speech
**duration** | 3 hours and 25 minutes
**#utterances** | 3000
**#speakers** | 87 (31 females, 56 males)
**#emotions** | 5 basic emotions (anger, fear, happiness, sadness and surprise) and neutral state
**orthographic transcripts** | available
**phonetic transcripts** | available
Read our paper on <a href='https://link.springer.com/article/10.1007/s10579-018-9427-x'>Springer</a> or [arxiv](https://arxiv.org/pdf/1906.01155.pdf)
## Description of Filenames
The characters used in the filenames and their corresponding meaning:
- **A**: angry
- **F**: female speaker (if used at the beginning of the label e.g.`F14A09`) or fearful (if used in the middle of the label e.g. `M02F01`)
- **H** : happy
- **M** : male speaker
- **N** : neutral
- **S** : sad
- **W** : surprised
e.g. `F03S02` **F** means the speaker is **female**, **03** denotes **the speaker code**, **S** refers to the underlying emotion of the utterance which is **sadness**, **02** means this is the **second utterance for this speaker in sad emotion**.
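The filename scheme above can be decoded mechanically. The following is a small sketch based on the layout described in this card (gender letter, two-digit speaker code, emotion letter, two-digit utterance index); the function name is illustrative:

```python
EMOTIONS = {"A": "anger", "F": "fear", "H": "happiness",
            "N": "neutral", "S": "sadness", "W": "surprise"}

def parse_shemo_name(name: str) -> dict:
    """Decode a ShEMO utterance label such as 'F03S02'.
    The first letter gives the speaker's gender, the next two digits
    the speaker code, the fourth character the emotion, and the final
    digits the utterance index for that speaker and emotion."""
    return {
        "gender": "female" if name[0] == "F" else "male",
        "speaker_id": name[:3],
        "emotion": EMOTIONS[name[3]],
        "utterance": int(name[4:]),
    }

info = parse_shemo_name("F03S02")  # female speaker 03, sadness, 2nd utterance
```

Note how the disambiguation rule from the card falls out of position: `F` at index 0 means female, while `F` at index 3 (as in `M02F01`) means fearful.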
## Data Instances
Here is a sample of data instances:
```json
"F21N37": {
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mӕge mæn be to nægofte budӕm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
## دادگان گفتار احساسی شریف (شمو)
برای دریافت مقاله <a href='https://arxiv.org/pdf/1906.01155.pdf'>اینجا</a> کلیک کنید
## Citation
If you use this dataset, please cite the following paper:
~~~~
@Article{MohamadNezami2019,
author = {Mohamad Nezami, Omid and Jamshid Lou, Paria and Karami, Mansoureh},
title = {ShEMO: a large-scale validated database for Persian speech emotion detection},
journal = {Language Resources and Evaluation},
year = {2019},
volume = {53},
number = {1},
pages = {1--16},
issn = {1574-0218},
doi = {10.1007/s10579-018-9427-x},
url = {https://doi.org/10.1007/s10579-018-9427-x}
}
~~~~
### Contact
Paria Jamshid Lou <paria.jamshid-lou@hdr.mq.edu.au>
Omid Mohamad Nezami <omid.mohamad-nezami@hdr.mq.edu.au> | 4,754 | [
[
-0.0325927734375,
-0.03863525390625,
0.004634857177734375,
0.008941650390625,
-0.02093505859375,
0.00197601318359375,
-0.03338623046875,
-0.021881103515625,
-0.001201629638671875,
0.002002716064453125,
-0.0433349609375,
-0.0625,
-0.047271728515625,
0.0186920... |
NbAiLab/norec_agg | 2022-07-01T19:53:24.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2011.02686",
"region:u... | NbAiLab | Aggregated NoRec_fine: A Fine-grained Sentiment Dataset for Norwegian
This dataset was created by the Nordic Language Processing Laboratory by
aggregating the fine-grained annotations in NoReC_fine and removing sentences
with conflicting or no sentiment. | @InProceedings{OvrMaeBar20,
author = {Lilja {\O}vrelid and Petter M{\ae}hlum and Jeremy Barnes and Erik Velldal},
title = {A Fine-grained Sentiment Dataset for {N}orwegian},
booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}},
year = 2020,
address = "Marseille, France, 2020"
} | 0 | 87 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** [A Fine-grained Sentiment Dataset for Norwegian](https://www.aclweb.org/anthology/2020.lrec-1.618/)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
Aggregated NoRec_fine: A Fine-grained Sentiment Dataset for Norwegian.
This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```{'label': 0, 'text': 'Verre er det med slagsmålene .'}```
### Data Fields
- `id`: index of the example
- `text`: Text of a sentence
- `label`: The sentiment label. Here
- 0 = negative
- 1 = positive
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 2675 | 516 | 417 |
## Dataset Creation
This dataset is based largely on the original data described in the paper _A Fine-Grained Sentiment Dataset for Norwegian_ by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, [paper available](https://www.aclweb.org/anthology/2020.lrec-1.618). However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@InProceedings{OvrMaeBar20,
  author = {Lilja {\O}vrelid and Petter M{\ae}hlum and Jeremy Barnes and Erik Velldal},
  title = {A Fine-grained Sentiment Dataset for {N}orwegian},
  booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}},
  year = 2020,
  address = "Marseille, France"
}
```
| 3,405 | [
[
-0.03729248046875,
-0.0513916015625,
0.0089874267578125,
0.013397216796875,
-0.02496337890625,
-0.010772705078125,
-0.025390625,
-0.031707763671875,
0.036651611328125,
0.0369873046875,
-0.055084228515625,
-0.07305908203125,
-0.035858154296875,
0.008956909179... |
alvp/autonlp-data-alberti-stanza-names | 2021-11-19T13:26:10.000Z | [
"task_categories:text-classification",
"region:us"
] | alvp | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
task_categories:
- text-classification
---
# AutoNLP Dataset for project: alberti-stanza-names
## Table of contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project alberti-stanza-names.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u00bfDe d\u00f3nde tantos dolores?\nAmores.\n\u00bfY cu\u00e1nto cuesta esa herida?\nLa vida\n\u00bfTe mueres si no te quiero?\nMe muero.\nY es as\u00ed, pues nada espero,\nque esta ausencia es mi condena:\ntodo cuanto me enajena\namor es, la vida muero.",
"target": 24
},
{
"text": "Retra\u00edda est\u00e1 la infanta,\nbien as\u00ed como sol\u00eda,\nviviendo muy descontenta\nde la vida que ten\u00eda,\nviendo que ya se pasaba\ntoda la flor de su vida,\ny que el rey no la casaba,\nni tal cuidado ten\u00eda.\nEntre s\u00ed estaba pensando\na qui\u00e9n se descubrir\u00eda;\nacord\u00f3 llamar al rey\ncomo otras veces sol\u00eda,\npor decirle su secreto\ny la intenci\u00f3n que ten\u00eda.\nVino el rey, siendo llamado,\nque no tard\u00f3 su venida:\nv\u00eddola estar apartada,\nsola est\u00e1 sin compa\u00f1\u00eda:\nsu lindo gesto mostraba\nser m\u00e1s triste que sol\u00eda.\nConociera luego el rey \nel enojo que ten\u00eda.\n\u00bfQu\u00e9 es aquesto, la infanta?\n\u00bfQu\u00e9 es aquesto, hija m\u00eda?\nContadme vuestros enojos,\nno tom\u00e9is malencon\u00eda,\nque sabiendo la verdad\ntodo se remediar\u00eda.\nMenester ser\u00e1, buen rey,\nremediar la vida m\u00eda,\nque a vos qued\u00e9 encomendada\nde la madre que ten\u00eda.\nD\u00e9desme, buen rey, marido,\nque mi edad ya lo ped\u00eda:\ncon verg\u00fcenza os lo demando,\nno con gana que ten\u00eda,\nque aquestos cuidados tales,\na vos, rey, pertenec\u00edan.\nEscuchada su demanda,\nel buen rey le respond\u00eda:\nEsa culpa, la infanta,\nvuestra era, que no m\u00eda,\nque ya fu\u00e9rades casada\ncon el pr\u00edncipe de Hungr\u00eda.\nNo quesistes escuchar\nla embajada que os ven\u00eda:\npues ac\u00e1 en las nuestras cortes,\nhija, mal recaudo hab\u00eda, \nporque en todos los mis reinos\nvuestro par igual no hab\u00eda,\nsi no era el conde Alarcos,\nhijos y mujer ten\u00eda.\nConvidaldo vos, el rey,\nal conde Alarcos un d\u00eda,\ny despu\u00e9s que hay\u00e1is comido\ndecidle de parte m\u00eda,\ndecidle que se le acuerde\nde la fe que del ten\u00eda,\nla cual \u00e9l me prometi\u00f3,\nque yo no se la ped\u00eda,\nde ser siempre mi marido,\nyo que su mujer ser\u00eda.\nYo fui de ello muy contenta\ny que no me arrepent\u00eda.\nSi cas\u00f3 con la condesa,\nque mirase lo 
que hac\u00eda,\nque por \u00e9l no me cas\u00e9\ncon el pr\u00edncipe de Hungr\u00eda;\nsi cas\u00f3 con la Condesa,\ndel es culpa, que no m\u00eda.\nPerdiera el rey en o\u00edrlo\nel sentido que ten\u00eda,\nmas despu\u00e9s en si tornado\ncon enojo respond\u00eda:\n\u00a1No son \u00e9stos los consejos\nque vuestra madre os dec\u00eda! \n\u00a1Muy mal mirastes, infanta,\ndo estaba la honra m\u00eda!\nSi verdad es todo eso,\nvuestra honra ya es perdida:\nno pod\u00e9is vos ser casada,\nsiendo la condesa viva.\nSi se hace el casamiento\npor raz\u00f3n o por justicia,\nen el decir de las gentes\npor mala ser\u00e9is tenida.\nDadme vos, hija, consejo,\nque el m\u00edo no bastar\u00eda,\nque ya es muerta vuestra madre\na quien consejo ped\u00eda.\nYo vos lo dar\u00e9, buen rey,\nde este poco que ten\u00eda:\nmate el conde a la condesa,\nque nadie no lo sabr\u00eda,\ny eche fama que ella es muerta\nde un cierto mal que ten\u00eda,\ny tratarse ha el casamiento\ncomo cosa no sabida.\nDe esta manera, buen rey,\nmi honra se guardar\u00eda.\nDe all\u00ed se sal\u00eda el rey,\nno con placer que ten\u00eda;\nlleno va de pensamientos\ncon la nueva que sab\u00eda; \nvido estar al conde Alarcos\nentre muchos, que dec\u00eda:\n\u00bf Qu\u00e9 aprovecha, caballeros,\namar y servir amiga,\nque son servicios perdidos\ndonde firmeza no hab\u00eda?\nNo pueden por m\u00ed decir\naquesto que yo dec\u00eda,\nque en el tiempo que serv\u00ed\nuna que tanto quer\u00eda,\nsi muy bien la quise entonces,\nagora m\u00e1s la quer\u00eda;\nmas por m\u00ed pueden decir:\nquien bien ama, tarde olvida.\nEstas palabras diciendo,\nvido al buen rey que ven\u00eda,\ny para hablar con el rey,\nde entre todos se sal\u00eda.\nDijo el buen rey al conde,\nhablando con cortes\u00eda:\nConvidaros quiero, conde,\npor ma\u00f1ana en aquel d\u00eda,\nque quer\u00e1is comer conmigo\npor tenerme compa\u00f1\u00eda.\nQue se haga de buen grado\nlo que su alteza dec\u00eda;\nbeso sus reales manos\npor la buena 
cortes\u00eda; \ndetenerme he aqu\u00ed ma\u00f1ana,\naunque estaba de partida,\nque la condesa me espera\nseg\u00fan la carta me env\u00eda.\nOtro d\u00eda de ma\u00f1ana\nel rey de misa sal\u00eda;\nluego se asent\u00f3 a comer,\nno por gana que ten\u00eda,\nsino por hablar al Conde\nlo que hablarle quer\u00eda.\nAll\u00ed fueron bien servidos\ncomo a rey pertenec\u00eda.\nDespu\u00e9s que hubieron comido,\ntoda la gente salida,\nqued\u00f3se el rey con el conde\nen la tabla do com\u00eda.\nEmpez\u00f3 de hablar el rey\nla embajada que tra\u00eda:\nUnas nuevas traigo, conde,\nque de ellas no me plac\u00eda,\npor las cuales yo me quejo\nde vuestra descortes\u00eda.\nPrometistes a la infanta\nlo que ella no vos ped\u00eda,\nde siempre ser su marido,\ny a ella que le plac\u00eda. \nSi otras cosas m\u00e1s pasastes\nno entro en esa porf\u00eda.\nOtra cosa os digo, conde,\nde que m\u00e1s os pesar\u00eda:\nque mat\u00e9is a la condesa\nque cumple a la honra m\u00eda;\nech\u00e9is fama que ella es muerta\nde cierto mal que ten\u00eda,\ny tratarse ha el casamiento\ncomo cosa no sabida,\nporque no sea deshonrada\nhija que tanto quer\u00eda.\nO\u00eddas estas razones\nel buen conde respond\u00eda:\nNo puedo negar, el rey,\nlo que la infanta dec\u00eda,\nsino que otorgo ser verdad\ntodo cuanto me ped\u00eda.\nPor miedo de vos, el rey,\nno cas\u00e9 con quien deb\u00eda,\nno pens\u00e9 que vuestra alteza\nen ello consentir\u00eda:\nde casar con la infanta\nyo, se\u00f1or, bien casar\u00eda;\nmas matar a la condesa,\nse\u00f1or rey, no lo har\u00eda,\nporque no debe morir\nla que mal no merec\u00eda. 
\nDe morir tiene, el buen conde,\npor salvar la honra m\u00eda,\npues no miraste primero\nlo que mirar se deb\u00eda.\nSi no muere la condesa\na vos costar\u00e1 la vida.\nPor la honra de los reyes\nmuchos sin culpa mor\u00edan,\nporque muera la condesa\nno es mucha maravilla.\nYo la matar\u00e9, buen rey,\nmas no ser\u00e1 culpa m\u00eda:\nvos os avendr\u00e9is con Dios\nen la fin de vuestra vida,\ny prometo a vuestra alteza,\na fe de caballer\u00eda,\nque me tengan por traidor\nsi lo dicho no cumpl\u00eda,\nde matar a la condesa,\naunque mal no merec\u00eda.\nBuen rey, si me dais licencia\nyo luego me partir\u00eda.\nVay\u00e1is con Dios, el buen conde,\nordenad vuestra partida.\nLlorando se parte el conde,\nllorando, sin alegr\u00eda;\nllorando por la condesa,\nque m\u00e1s que a s\u00ed la quer\u00eda\nLloraba tambi\u00e9n el conde\npor tres hijos que ten\u00eda,\nel uno era de pecho,\nque la condesa lo cr\u00eda;\nlos otros eran peque\u00f1os,\npoco sentido ten\u00edan.\nAntes que llegase el conde\nestas razones dec\u00eda:\n\u00a1Qui\u00e9n podr\u00e1 mirar, condesa,\nvuestra cara de alegr\u00eda,\nque saldr\u00e9is a recebirme\na la fin de vuestra vida!\nYo soy el triste culpado,\nesta culpa toda es m\u00eda.\nEn diciendo estas palabras\nla condesa ya sal\u00eda,\nque un paje le hab\u00eda dicho\nc\u00f3mo el conde ya ven\u00eda.\nVido la condesa al conde\nla tristeza que ten\u00eda,\nviole los ojos llorosos,\nque hinchados los tra\u00eda,\nde llorar por el camino,\nmirando el bien que perd\u00eda.\nDijo la condesa al conde:\n\u00a1Bien veng\u00e1is, bien de mi vida!\n\u00bfQu\u00e9 hab\u00e9is, el conde Alarcos?\n\u00bfPor qu\u00e9 llor\u00e1is, vida m\u00eda, \nque ven\u00eds tan demudado\nque cierto no os conoc\u00eda?\nNo parece vuestra cara\nni el gesto que ser sol\u00eda;\ndadme parte del enojo\ncomo dais de la alegr\u00eda.\n\u00a1Dec\u00eddmelo luego, conde,\nno mat\u00e9is la vida m\u00eda!\nYo vos lo dir\u00e9, condesa,\ncuando la hora 
ser\u00eda.\nSi no me lo dec\u00eds, conde,\ncierto yo reventar\u00eda.\nNo me fatigu\u00e9is, se\u00f1ora,\nque no es la hora venida.\nCenemos luego, condesa,\nde aqueso que en casa hab\u00eda.\nAparejado est\u00e1, conde,\ncomo otras veces sol\u00eda.\nSent\u00f3se el conde a la mesa,\nno cenaba ni pod\u00eda,\ncon sus hijos al costado,\nque muy mucho los quer\u00eda.\nEch\u00f3se sobre los brazos;\nhizo como que dorm\u00eda;\nde l\u00e1grimas de sus ojos\ntoda la mesa cubr\u00eda.\nMir\u00e1ndolo la condesa,\nque la causa no sab\u00eda,\nno le preguntaba nada,\nque no osaba ni pod\u00eda.\nLevant\u00f3se luego el conde,\ndijo que dormir quer\u00eda;\ndijo tambi\u00e9n la condesa\nque ella tambi\u00e9n dormir\u00eda;\nmas entre ellos no hab\u00eda sue\u00f1o,\nsi la verdad se dec\u00eda.\nVanse el conde y la condesa\na dormir donde sol\u00edan:\ndejan los ni\u00f1os de fuera\nque el conde no los quer\u00eda;\nllev\u00e1ronse el m\u00e1s chiquito,\nel que la condesa cr\u00eda;\ncerrara el conde la puerta,\nlo que hacer no sol\u00eda.\nEmpez\u00f3 de hablar el conde\ncon dolor y con mancilla:\n\u00a1Oh, desdichada condesa,\ngrande fu\u00e9 la tu desdicha!\nNo so desdichada, el conde,\npor dichosa me ten\u00eda;\ns\u00f3lo en ser vuestra mujer,\nesta fu\u00e9 gran dicha m\u00eda.\n\u00a1 Si bien lo sab\u00e9is, condesa,\nesa fu\u00e9 vuestra desdicha!\nSabed que en tiempo pasado\nYO am\u00e9 a quien bien serv\u00eda,\nla cual era la infanta,\npor desdicha vuestra y m\u00eda.\nPromet\u00ed casar con ella,\ny a ella que le plac\u00eda;\ndem\u00e1ndame por marido\npor la fe que me ten\u00eda.\nPu\u00e9delo muy bien hacer\nde raz\u00f3n y de justicia:\nd\u00edjomelo el rey, su padre,\nporque de ella lo sab\u00eda.\nOtra cosa manda el rey,\nque toca en el alma m\u00eda:\nmanda que mur\u00e1is, condesa,\npor la honra de su hija,\nque no puede tener honra\nsiendo vos, condesa, viva.\nDesque esto oy\u00f3 la condesa\ncay\u00f3 en tierra amortecida;\nmas despu\u00e9s en 
s\u00ed tornada\nestas palabras dec\u00eda:\n\u00a1Pagos son de mis servicios,\nconde, con que yo os serv\u00eda!\nSi no me mat\u00e1is, el conde,\nyo bien os aconsejar\u00eda,\nenvi\u00e9desme a mis tierras\nque mi padre me tern\u00eda;\nyo criar\u00e9 vuestros hijos\nmejor que la que vern\u00eda, \nyo os mantendr\u00e9 lealtad\ncomo siempre os manten\u00eda.\nDe morir hab\u00e9is, condesa,\nenantes que venga el d\u00eda.\n\u00a1Bien parece, el conde Alarcos,\nyo ser sola en esta vida;\nporque tengo el padre viejo,\nmi madre ya es fallecida,\ny mataron a mi hermano,\nel buen conde don Garc\u00eda,\nque el rey lo mand\u00f3 matar\npor miedo que del ten\u00eda!\nNo me pesa de mi muerte,\nporque yo morir ten\u00eda,\nmas p\u00e9same de mis hijos,\nque pierden mi compa\u00f1\u00eda;\nhac\u00e9melos venir, conde,\ny ver\u00e1n mi despedida.\nNo los ver\u00e9is m\u00e1s, condesa,\nen d\u00edas de vuestra vida;\nabrazad este chiquito,\nque aqueste es el que os perd\u00eda.\nP\u00e9same de vos, condesa,\ncuanto pesar me pod\u00eda.\nNo os puedo valer, se\u00f1ora,\nque m\u00e1s me va que la vida;\nencomendaos a Dios\nque esto hacerse ten\u00eda. 
\nDej\u00e9isme decir, buen conde,\nuna oraci\u00f3n que sab\u00eda.\nDecidla presto, condesa,\nenantes que venga el d\u00eda.\nPresto la habr\u00e9 dicho, conde,\nno estar\u00e9 un Ave Mar\u00eda.\nHinc\u00f3 rodillas en tierra,\naquesta oraci\u00f3n dec\u00eda:\nEn las tus manos, Se\u00f1or,\nencomiendo el alma m\u00eda;\nno me juzgues mis pecados\nseg\u00fan que yo merec\u00eda,\nm\u00e1s seg\u00fan tu gran piedad\ny la tu gracia infinita.\nAcabada es ya, buen conde,\nla oraci\u00f3n que yo sab\u00eda;\nencomi\u00e9ndoos esos hijos\nque entre vos y m\u00ed hab\u00eda,\ny rogad a Dios por m\u00ed,\nmientras tuvi\u00e9redes vida,\nque a ello sois obligado\npues que sin culpa mor\u00eda.\nD\u00e9desme ac\u00e1 ese hijo,\nmamar\u00e1 por despedida.\nNo lo despert\u00e9is, condesa,\ndejadlo estar, que dorm\u00eda,\nsino que os pido perd\u00f3n\nporque ya se viene el d\u00eda. \nA vos yo perdono, conde,\npor el amor que os ten\u00eda;\nm\u00e1s yo no perdono al rey,\nni a la infanta su hija,\nsino que queden citados\ndelante la alta justicia,\nque all\u00e1 vayan a juicio\ndentro de los treinta d\u00edas.\nEstas palabras diciendo\nel conde se aperceb\u00eda:\nech\u00f3le por la garganta\nuna toca que ten\u00eda.\n\u00a1Socorre, mis escuderos,\nque la condesa se fina!\nHallan la condesa muerta,\nlos que a socorrer ven\u00edan.\nAs\u00ed muri\u00f3 la condesa,\nsin raz\u00f3n y sin justicia;\nmas tambi\u00e9n todos murieron\ndentro de los treinta d\u00edas.\nLos doce d\u00edas pasados\nla infanta tambi\u00e9n mor\u00eda;\nel rey a los veinte y cinco,\nel conde al treinteno d\u00eda:\nall\u00e1 fueron a dar cuenta\na la justicia divina.\nAc\u00e1 nos d\u00e9 Dios su gracia,\ny all\u00e1 la gloria cumplida. \n",
"target": 28
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=46, names=['cantar', 'chamberga', 'copla_arte_mayor', 'copla_arte_menor', 'copla_castellana', 'copla_mixta', 'copla_real', 'couplet', 'cuaderna_v\u00eda', 'cuarteta', 'cuarteto', 'cuarteto_lira', 'd\u00e9cima_antigua', 'endecha_real', 'espinela', 'estrofa_francisco_de_la_torre', 'estrofa_manrique\u00f1a', 'estrofa_s\u00e1fica', 'haiku', 'lira', 'novena', 'octava', 'octava_real', 'octavilla', 'ovillejo', 'quinteto', 'quintilla', 'redondilla', 'romance', 'romance_arte_mayor', 'seguidilla', 'seguidilla_compuesta', 'seguidilla_gitana', 'septeto', 'septilla', 'serventesio', 'sexta_rima', 'sexteto', 'sexteto_lira', 'sextilla', 'silva_arromanzada', 'sole\u00e1', 'tercetillo', 'terceto', 'terceto_monorrimo', 'unknown'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
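For reference, the integer `target` in the sample above can be decoded with the ClassLabel names listed under Dataset Fields — a minimal sketch (names truncated to index 28):

```python
# A minimal sketch of decoding the integer `target` in the sample above using
# the ClassLabel names listed under Dataset Fields (truncated at index 28).
names = [
    "cantar", "chamberga", "copla_arte_mayor", "copla_arte_menor",
    "copla_castellana", "copla_mixta", "copla_real", "couplet",
    "cuaderna_vía", "cuarteta", "cuarteto", "cuarteto_lira",
    "décima_antigua", "endecha_real", "espinela",
    "estrofa_francisco_de_la_torre", "estrofa_manriqueña",
    "estrofa_sáfica", "haiku", "lira", "novena", "octava",
    "octava_real", "octavilla", "ovillejo", "quinteto", "quintilla",
    "redondilla", "romance",
]

# target 28 corresponds to 'romance', matching the stanza form of the sample.
label = names[28]
print(label)
```

In practice the same mapping is available from the dataset's own `features["target"].int2str(...)`.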
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 4004 |
| valid | 1001 |
| 14,604 | [
[
-0.0250244140625,
-0.018585205078125,
0.016998291015625,
0.039581298828125,
-0.033843994140625,
0.0070343017578125,
0.014007568359375,
-0.03204345703125,
0.058319091796875,
0.05133056640625,
-0.051849365234375,
-0.050811767578125,
-0.0247344970703125,
0.0323... |
alvp/autonlp-data-alberti-stanzas-finetuning | 2021-11-19T12:46:22.000Z | [
"task_categories:text-classification",
"region:us"
] | alvp | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
task_categories:
- text-classification
---
# AutoNLP Dataset for project: alberti-stanzas-finetuning
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project alberti-stanzas-finetuning.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "No es la ciudad inmunda \nquien empuja las velas. Tampoco el coraz\u00f3n, \nprimitiva caba\u00f1a del deseo, \nse aventura por islas encendidas \nen donde el mar oculta sus ruinas, \nalgas de Baudelaire, espumas y silencios. \nEs la necesidad, la solitaria \nnecesidad de un hombre, \nquien nos lleva a cubierta, \nquien nos hace temblar, vivir en cuerpos \nque resisten la voz de las sirenas, \namarrados en proa, \ncon el tim\u00f3n gimiendo entre las manos.",
"target": 40
},
{
"text": "Ni mueve m\u00e1s ligera,\nni m\u00e1s igual divide por derecha\nel aire, y fiel carrera,\no la traciana flecha\no la bola tudesca un fuego hecha.",
"target": 11
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=46, names=['0', '1', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '2', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '3', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '4', '40', '41', '42', '43', '44', '45', '5', '6', '7', '8', '9'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 4004 |
| valid | 1001 |
| 2,053 | [
[
-0.035400390625,
-0.01074981689453125,
0.006710052490234375,
0.022369384765625,
-0.00879669189453125,
0.0211944580078125,
-0.01331329345703125,
-0.0203857421875,
0.026947021484375,
0.036712646484375,
-0.034576416015625,
-0.072265625,
-0.039306640625,
0.01837... |
arch-raven/MAMI | 2022-01-06T17:55:22.000Z | [
"region:us"
] | arch-raven | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
artemis13fowl/github-issues | 2022-01-15T05:56:13.000Z | [
"region:us"
] | artemis13fowl | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
astarostap/autonlp-data-antisemitism-2 | 2022-10-25T09:07:21.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | astarostap | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
language:
- en
task_categories:
- text-classification
---
# AutoNLP Dataset for project: antisemitism-2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project antisemitism-2.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"text": "Jew pods"
},
{
"target": 1,
"text": "@PotatoLaydee He's a Jew...."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3161 |
| valid | 791 |
| 1,211 | [
[
-0.038330078125,
-0.00939178466796875,
-0.0021114349365234375,
0.00568389892578125,
-0.02252197265625,
0.021942138671875,
-0.0121917724609375,
-0.0227813720703125,
0.0211334228515625,
0.0311431884765625,
-0.025726318359375,
-0.058563232421875,
-0.045074462890625... |
azuur/gn_wiki_cleaned | 2022-02-09T17:02:12.000Z | [
"region:us"
] | azuur | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
be4rr/github-issues | 2022-02-23T12:05:51.000Z | [
"region:us"
] | be4rr | null | null | 1 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
benjaminbeilharz/daily_dialog_w_turn_templates | 2022-02-26T17:54:18.000Z | [
"region:us"
] | benjaminbeilharz | null | null | 1 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
bhadresh-savani/web_split | 2021-10-15T06:42:18.000Z | [
"region:us"
] | bhadresh-savani | null | null | 1 | 87 | 2022-03-02T23:29:22 | Work in progress! | 17 | [
[
-0.0019359588623046875,
0.0053253173828125,
0.053558349609375,
0.03955078125,
-0.00579833984375,
0.00910186767578125,
0.0195465087890625,
-0.005817413330078125,
0.045745849609375,
0.08013916015625,
-0.02423095703125,
-0.011749267578125,
-0.056396484375,
0.00... |
bs-modeling-metadata/c4-en-html-with-metadata | 2022-08-18T13:01:15.000Z | [
"region:us"
] | bs-modeling-metadata | null | null | 5 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bs-modeling-metadata/c4_newslike_url_only | 2021-09-20T11:14:17.000Z | [
"region:us"
] | bs-modeling-metadata | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bs-modeling-metadata/website_metadata_c4 | 2021-11-24T14:04:30.000Z | [
"region:us"
] | bs-modeling-metadata | null | null | 1 | 87 | 2022-03-02T23:29:22 | The dataset is a JSON Lines file with 120,000 examples, where each example consists of text (extracted from the C4 English dataset) and metadata fields (a website description extracted from Wikipedia).
Example:
```
{
"text": "US10289222B2 - Handling of touch events in a browser environment - Google Patents\nHandling of touch events in a browser environment Download PDF\nUS10289222B2\nUS10289222B2 US13/857,848 US201313857848A US10289222B2 US 10289222 B2 US10289222 B2 US 10289222B2 US 201313857848 A US201313857848 A US 201313857848A US 10289222 B2 US10289222 B2 US 10289222B2\nUS13/857,848\nUS20130222244A1 (en\nEli Joshua FIDLER\nMichael Thomas Winkler\nMatthew Nicholaos STAIKOS\nJoseph Charles MASON\n2011-01-05 Priority to US12/985,337 priority Critical patent/US8438473B2/en\n2013-04-05 Application filed by BlackBerry Ltd filed Critical BlackBerry Ltd\n2013-04-05 Priority to US13/857,848 priority patent/US10289222B2/en\n2013-06-26 Assigned to RESEARCH IN MOTION CORPORATION reassignment RESEARCH IN MOTION CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Winkler, Michael Thomas\n2013-06-26 Assigned to RESEARCH IN MOTION LIMITED reassignment RESEARCH IN MOTION LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Fidler, Eli Joshua, Mak, Genevieve Elizabeth, Mason, Joseph Charles, STAIKOS, MATTHEW\n2013-08-29 Publication of US20130222244A1 publication Critical patent/US20130222244A1/en\n2016-03-08 Assigned to RESEARCH IN MOTION LIMITED reassignment RESEARCH IN MOTION LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RESEARCH IN MOTION CORPORATION\n2019-05-14 Publication of US10289222B2 publication Critical patent/US10289222B2/en\nHandling of touch events in a browser environment is disclosed. 
An example method includes, while a document is displayed on a touchscreen display of a device, detecting a touch event at the touchscreen display, and selectively processing the detected touch event using one of a default hander, a touch event handler, and a conversion to one or more mouse events, according to a touch event handling property defined for the document.\nThe present application relates generally to the processing of detected user input events in a web browser.\nComputing devices such as desktop computers are typically equipped with external pointing devices, such as a mouse, to permit cursor-based user interaction with content executing on the computer.",
"metadata": [
{
"key": "website_description",
"type": "global",
"value": "Google Patents is a search engine from Google that indexes patents and patent applications."
}
]
}
``` | 2,736 | [
[
-0.0230255126953125,
-0.052276611328125,
0.033203125,
0.006710052490234375,
-0.006557464599609375,
-0.01444244384765625,
0.0180511474609375,
-0.0333251953125,
0.042999267578125,
0.044891357421875,
-0.0545654296875,
-0.03619384765625,
-0.0299835205078125,
0.0... |
castorini/msmarco_v2_passage_doc2query-t5_expansions | 2021-11-02T06:37:36.000Z | [
"language:English",
"license:Apache License 2.0",
"region:us"
] | castorini | null | null | 0 | 87 | 2022-03-02T23:29:22 | ---
language:
- English
license: "Apache License 2.0"
---
# Dataset Summary
The repo provides queries generated for the MS MARCO v2 passage corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model.
# Dataset Structure
All three folds (train, dev and test) share the same corpus. The queries are generated from this corpus.
An example data entry looks as follows:
```
{ "id": "msmarco_passage_22_0",
"predicted_queries": ["in drug combat does a zombie take more damage or die", "is the health bar the same as smash bros", "is brawlhalla health bar", "icpri league brawlhalla", "what is a battle brawlhalla", "is smash bros minecraft brawlhalla zombies", "what are the health bars on brawlhalla", "does smash bros have health bars", "is brawlhalla a health bar", "what is brawlhalla", "what is brwlhalla", "how many health bars is in brawlhalla", "is there health bar in brawlhalla", "what is boiledhalla?", "what is a good health bar in brawlhalla", "what is skills brawlhalla", "how many gobs in a brawlhalla", "is smash bros. an nsb game", "how many health bars are there in the brawlhalla", "what is brawlhalla"]
}
```
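The expansion step described above — appending the predicted queries to the passage text before indexing — can be sketched as follows. The passage text here is hypothetical; the entry mirrors the sample above:

```python
# A minimal sketch of doc2query-style expansion: the predicted queries are
# appended to the original passage text before indexing. The passage text
# below is hypothetical; the entry mirrors the sample shown above.
entry = {
    "id": "msmarco_passage_22_0",
    "predicted_queries": [
        "what is brawlhalla",
        "is there health bar in brawlhalla",
    ],
}
passage = "Brawlhalla uses stock-based damage rather than health bars."

# Concatenate the passage with its predicted queries; the expanded text is
# what gets fed to the indexer.
expanded = passage + " " + " ".join(entry["predicted_queries"])
print(expanded)
```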
# Load Dataset
An example to load the dataset:
```
dataset = load_dataset('castorini/msmarco_v2_passage_doc2query-t5_expansions', data_files='d2q/d2q.jsonl???.gz')
```
# Citation Information
```
@article{docTTTTTquery,
title={From doc2query to {docTTTTTquery}},
author={Nogueira, Rodrigo and Lin, Jimmy},
year={2019}
}
@article{emdt5,
author={Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin},
title={The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models},
journal={arXiv:2101.05667},
year={2021},
}
```
| 2,185 | [
[
-0.016571044921875,
-0.04742431640625,
0.044891357421875,
0.007701873779296875,
-0.01300048828125,
0.011962890625,
0.0013875961303710938,
-0.0279541015625,
0.01317596435546875,
0.05535888671875,
-0.052276611328125,
-0.055694580078125,
-0.040924072265625,
0.0... |
cestwc/adapted-sentcomp | 2022-01-02T17:21:39.000Z | [
"region:us"
] | cestwc | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cestwc/adapted-wordnet | 2021-12-31T18:40:29.000Z | [
"region:us"
] | cestwc | null | null | 1 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.035064697265625,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.03... |
cestwc/cnn_dailymail-snippets | 2022-02-15T06:09:43.000Z | [
"region:us"
] | cestwc | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.035064697265625,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.03... |
cestwc/cnn_dailymail-test50 | 2021-12-16T17:40:40.000Z | [
"region:us"
] | cestwc | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.035064697265625,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.03... |
cestwc/sac-approx-1 | 2022-01-02T19:14:27.000Z | [
"region:us"
] | cestwc | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cgarciae/point-cloud-mnist | 2021-10-31T23:09:55.000Z | [
"region:us"
] | cgarciae | The MNIST dataset consists of 70,000 28x28 black-and-white points in 10 classes (one for each digit), with 7,000
points per class. There are 60,000 training points and 10,000 test points. | # @article{lecun2010mnist,
# title={MNIST handwritten digit database},
# author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
# journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
# volume={2},
# year={2010}
# }
# | 2 | 87 | 2022-03-02T23:29:22 | # Point Cloud MNIST
A point cloud version of the original MNIST.

## Getting Started
```python
import matplotlib.pyplot as plt
import numpy as np
from datasets import load_dataset
# load dataset
dataset = load_dataset("cgarciae/point-cloud-mnist")
dataset.set_format("np")
# get numpy arrays
X_train = dataset["train"]["points"]
y_train = dataset["train"]["label"]
X_test = dataset["test"]["points"]
y_test = dataset["test"]["label"]
# plot some training samples
figure = plt.figure(figsize=(10, 10))
for i in range(3):
for j in range(3):
k = 3 * i + j
plt.subplot(3, 3, k + 1)
idx = np.random.randint(0, len(X_train))
plt.title(f"{y_train[idx]}")
plt.scatter(X_train[idx, :, 0], X_train[idx, :, 1])
plt.show()
```
## Format
* `points`: `(batch, point, 3)` array of uint8.
* `label`: `(batch, 1)` array of uint8.
Where `point` is the number of points in the point cloud. Points have no order and were shuffled when creating the data. Each point has the structure `[x, y, v]` where:
* `x`: is the x coordinate of the point in the image.
* `y`: is the y coordinate of the point in the image.
* `v`: is the value of the pixel at the point in the image.
Samples are padded with `0`s such that `point = 351`, since that is the largest number of non-zero pixels per image in the original dataset. You can tell padding points apart because they are the only ones where `v = 0`.
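Stripping the padding is a matter of masking on the `v` column — a minimal sketch with hypothetical points:

```python
# A minimal sketch of removing the padding: padding points are exactly those
# with v == 0, so filter on the third column. The four sample points below
# are hypothetical; a real padded cloud has shape (351, 3), columns [x, y, v].
cloud = [[3, 4, 200], [5, 4, 130], [9, 9, 255], [12, 1, 17]] + [[0, 0, 0]] * 347

real = [p for p in cloud if p[2] > 0]   # keep only points with nonzero value
print(len(real))
```

The same filter works vectorized on the NumPy arrays from `dataset.set_format("np")`, e.g. a boolean mask on `points[:, 2] > 0`.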
Here is the distribution of non-zero pixels in the MNIST:
 | 1,690 | [
[
-0.035430908203125,
-0.0190277099609375,
0.0426025390625,
0.0177154541015625,
-0.0281829833984375,
-0.0286407470703125,
0.01432037353515625,
-0.0097198486328125,
0.06207275390625,
0.0294647216796875,
-0.03564453125,
-0.053070068359375,
-0.0447998046875,
-0.0... |
chenghao/mc4_sw_dedup | 2021-12-09T02:25:03.000Z | [
"region:us"
] | chenghao | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
chitra/contradictionNLI | 2021-12-29T10:45:19.000Z | [
"region:us"
] | chitra | null | null | 0 | 87 | 2022-03-02T23:29:22 | This data can help in solving the contradiction detection problem. It was taken from Kaggle.
Reference: Contradictory, My Dear Watson | 134 | [
[
-0.0295562744140625,
-0.043243408203125,
0.048797607421875,
0.02490234375,
-0.01059722900390625,
-0.00826263427734375,
0.03924560546875,
-0.0240631103515625,
-0.0035457611083984375,
0.033294677734375,
-0.1007080078125,
0.00982666015625,
-0.0443115234375,
0.0... |
chopey/dhivehi | 2021-11-30T03:41:11.000Z | [
"region:us"
] | chopey | null | null | 0 | 87 | 2022-03-02T23:29:22 | Dhivehi dataset for NMT | 23 | [
[
-0.002964019775390625,
-0.00704193115234375,
0.009063720703125,
0.00859832763671875,
-0.0304412841796875,
-0.0029048919677734375,
0.00787353515625,
0.004138946533203125,
0.028533935546875,
0.04229736328125,
-0.057037353515625,
-0.039520263671875,
-0.034545898437... |
csikasote/bembaspeech_plus_jw_processed | 2022-02-09T07:38:17.000Z | [
"region:us"
] | csikasote | null | null | 0 | 87 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cylee/github-issues | 2021-12-19T19:12:55.000Z | [
"arxiv:2005.00614",
"region:us"
] | cylee | null | null | 0 | 87 | 2022-03-02T23:29:22 | # Dataset Card for GitHub Issues
## Dataset Description
This dataset was created for the Hugging Face Datasets library course.
### Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
## Dataset Structure
### Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'example_field': ...,
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `example_field`: description of `example_field`
Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and a discussion of the accompanying risks. These risks may range from making important decisions more opaque to the people affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al. 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al. 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
Provide the license and link to the license webpage if available.
### Citation Information
Provide the [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
If the dataset has a [DOI](https://www.doi.org/), please provide it here.
### Contributions
[@cylee] added this dataset as part of the [Hugging Face Datasets library tutorial](https://huggingface.co/course/chapter5/5?fw=tf).