id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GBaker/MedQA-USMLE-4-options-hf-MPNet-IR | GBaker | 2023-03-20T21:53:18Z | 27 | 3 | null | [
"region:us"
] | 2023-03-20T21:53:18Z | 2023-03-20T21:53:01.000Z | 2023-03-20T21:53:01 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 14052739
num_examples: 10178
- name: validation
num_bytes: 1754234
num_examples: 1272
- name: test
num_bytes: 1780124
num_examples: 1273
download_size: 10209487
dataset_size: 17587097
---
# Dataset Card for "MedQA-USMLE-4-options-hf-MPNet-IR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6228136420249939,
0.06494871526956558,
0.31927719712257385,
-0.06086473539471626,
-0.28148356080055237,
0.10205481946468353,
0.4171130657196045,
0.12067433446645737,
0.7180879712104797,
0.5553974509239197,
-0.9091658592224121,
-0.6346908807754517,
-0.5011517405509949,
0.0034912936389446... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-source-metrics/stars-external | open-source-metrics | 2023-11-22T21:59:21Z | 27 | 0 | null | [
"region:us"
] | 2023-11-22T21:59:21Z | 2023-03-24T17:21:22.000Z | 2023-03-24T17:21:22 | ---
dataset_info:
features:
- name: login
dtype: string
- name: dates
dtype: string
splits:
- name: openai_python
num_bytes: 574781
num_examples: 15358
- name: stable_diffusion_webui
num_bytes: 4075161
num_examples: 110057
- name: langchain
num_bytes: 2562432
num_examples: 68861
- name: pytorch
num_bytes: 2710225
num_examples: 72791
- name: tensorflow
num_bytes: 6648730
num_examples: 178938
download_size: 9793536
dataset_size: 16571329
configs:
- config_name: default
data_files:
- split: stable_diffusion_webui
path: data/stable_diffusion_webui-*
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "stars-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6424686908721924,
-0.16031375527381897,
0.31868231296539307,
0.12384974211454391,
-0.06931348145008087,
-0.02445097826421261,
0.07258184999227524,
-0.5192058086395264,
0.8210445642471313,
0.5099487900733948,
-1.0251802206039429,
-0.5081488490104675,
-0.6380375027656555,
-0.1327186524868... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/nbfi | mstz | 2023-04-07T14:33:02Z | 27 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"nbfi",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-07T14:33:02Z | 2023-03-29T16:21:38.000Z | 2023-03-29T16:21:38 | ---
language:
- en
tags:
- nbfi
- tabular_classification
- binary_classification
pretty_name: NBFI
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- default
license: cc
---
# NBFI
The [NBFI dataset](https://www.kaggle.com/datasets/meastanmay/nbfi-vehicle-loan-repayment-dataset) from [Kaggle](https://www.kaggle.com/datasets). The task is client default prediction.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| default | Binary classification | Has the client defaulted? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/nbfi")["train"]
```
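Rows load as plain Python dicts. As a hedged illustration (toy values, not real records), day-based columns such as `age_in_days` can be converted to years:

```python
# Hypothetical example row mirroring the schema below (not real data).
row = {"age_in_days": 14600, "consecutive_days_of_employment": 1825}

def days_to_years(days):
    return days / 365.25  # average Gregorian year length

print(round(days_to_years(row["age_in_days"]), 1))  # → 40.0
```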
# Features
|**Feature** |**Type** |
|-----------------------------------------------|---------------|
|`income` | `float32` |
|`owns_a_car` | `bool` |
|`owns_a_bike` | `bool` |
|`has_an_active_loan` | `bool` |
|`owns_a_house` | `bool` |
|`nr_children` | `int8` |
|`credit` | `float32` |
|`loan_annuity` | `float32` |
|`accompanied_by` | `string` |
|`income_type` | `string` |
|`education_level` | `float32` |
|`marital_status` | `float32` |
|`is_male` | `bool` |
|`type_of_contract` | `string` |
|`type_of_housing` | `string` |
|`residence_density` | `float32` |
|`age_in_days` | `int32` |
|`consecutive_days_of_employment` | `int16` |
|`nr_days_since_last_registration_change` | `int32` |
|`nr_days_since_last_document_change` | `int32` |
|`owned_a_house_for_nr_days` | `int32` |
|`has_provided_a_mobile_number` | `bool` |
|`has_provided_a_home_number` | `bool` |
|`was_reachable_at_work` | `bool` |
|`job` | `string` |
|`nr_family_members` | `int8` |
|`city_rating` | `int8` |
|`weekday_of_application` | `int8` |
|`hour_of_application` | `float32` |
|`same_residence_and_home` | `bool` |
|`same_work_and_home` | `bool` |
|`score_1` | `float32` |
|`score_2` | `float32` |
|`score_3` | `float32` |
|`nr_defaults_in_social_circle` | `int8` |
|`inquiries_in_last_year` | `float32` | | [
-0.586427628993988,
-0.5058817267417908,
0.18274569511413574,
0.3431711792945862,
0.051242999732494354,
-0.1985488384962082,
0.3023512065410614,
-0.3532133996486664,
0.3070501983165741,
0.5970209240913391,
-0.7856484055519104,
-0.5917654633522034,
-0.6022865772247314,
0.0064468905329704285... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/animals-ij5d2 | Francesco | 2023-03-30T09:30:09Z | 27 | 4 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:30:09Z | 2023-03-30T09:29:48.000Z | 2023-03-30T09:29:48 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': animals
'1': cat
'2': chicken
'3': cow
'4': dog
'5': fox
'6': goat
'7': horse
'8': person
'9': racoon
'10': skunk
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: animals-ij5d2
tags:
- rf100
---
# Dataset Card for animals-ij5d2
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/animals-ij5d2
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
animals-ij5d2
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
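Since `bbox` follows the COCO `[x_min, y_min, width, height]` convention, a small hypothetical helper (not part of the dataset itself) converts it to corner coordinates, e.g. for drawing:

```python
# Hypothetical helper: COCO [x_min, y_min, width, height] -> [x_min, y_min, x_max, y_max].
def coco_to_corners(bbox):
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First bbox from the sample instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # → [302.0, 109.0, 375.0, 161.0]
```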
#### Who are the annotators?
Annotators are Roboflow users.
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/animals-ij5d2
### Citation Information
```
@misc{ animals-ij5d2,
title = { animals ij5d2 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/animals-ij5d2 } },
url = { https://universe.roboflow.com/object-detection/animals-ij5d2 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.7691892385482788,
-0.376539409160614,
0.04282372072339058,
0.03509137034416199,
-0.4230562448501587,
-0.19668839871883392,
-0.07392308861017227,
-0.7099829912185669,
0.25776898860931396,
0.33924242854118347,
-0.6212400794029236,
-0.951254665851593,
-0.5033870339393616,
0.464788824319839... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mangoesai/DepressionDetection | mangoesai | 2023-04-05T17:55:23Z | 27 | 0 | null | [
"region:us"
] | 2023-04-05T17:55:23Z | 2023-04-05T17:55:18.000Z | 2023-04-05T17:55:18 | ---
dataset_info:
features:
- name: clean_text
dtype: string
- name: is_depression
dtype: int64
splits:
- name: train
num_bytes: 2020382.4309921097
num_examples: 5411
- name: test
num_bytes: 866251.5690078903
num_examples: 2320
download_size: 1709340
dataset_size: 2886634.0
---
# Dataset Card for "DepressionDetection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6010061502456665,
-0.32790613174438477,
0.4450058341026306,
0.45546096563339233,
-0.16028566658496857,
-0.15451660752296448,
0.24843238294124603,
-0.1268739104270935,
0.8851768374443054,
0.2489594966173172,
-0.9411330819129944,
-0.8304241895675659,
-0.7648926973342896,
-0.12087271362543... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/mushroom | mstz | 2023-04-16T17:34:40Z | 27 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"mushroom",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:34:40Z | 2023-04-06T17:42:03.000Z | 2023-04-06T17:42:03 | ---
language:
- en
tags:
- mushroom
- tabular_classification
- binary_classification
- UCI
pretty_name: Mushroom
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- mushroom
license: cc
---
# Mushroom
The [Mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------|
| mushroom | Binary classification | Is the mushroom poisonous?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mushroom")["train"]
``` | [
-0.06664182245731354,
-0.49322429299354553,
0.17997591197490692,
0.2466716766357422,
-0.2750696837902069,
-0.32773053646087646,
-0.10294186323881149,
-0.1404249221086502,
0.3378826677799225,
0.6792832016944885,
-0.6426112055778503,
-0.9290347099304199,
-0.7849730253219604,
0.42249277234077... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dominguesm/Canarim-Instruct-PTBR-Dataset | dominguesm | 2023-11-17T09:03:46Z | 27 | 13 | null | [
"language:pt",
"license:cc-by-nc-4.0",
"doi:10.57967/hf/0983",
"region:us"
] | 2023-11-17T09:03:46Z | 2023-04-06T21:36:49.000Z | 2023-04-06T21:36:49 | ---
language: pt
license: cc-by-nc-4.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 113100060
num_examples: 316413
- name: test
num_bytes: 687328
num_examples: 1519
download_size: 63510092
dataset_size: 113787388
---
## 🐥 🇧🇷 Canarim Instruct Dataset
<p align="center">
<img width="250" alt="Camarim Logo" src="https://raw.githubusercontent.com/DominguesM/Canarim-Instruct-PTBR/main/assets/canarim.png">
</p>
<p align="center">
<a href="https://github.com/DominguesM/Canarim-Instruct-PTBR">[🐱 Github]</a>
</p>
<hr>
## What's Canarim?
Canarim is a dataset with over 300,000 instructions in Portuguese, ranging from simple instructions like "Descreva os efeitos do aquecimento global" to more complex instructions like "Nesta tarefa, você precisa ser capaz de resumir uma determinada lista de pontos-chave" where additional context is provided.
## Why it's called Canarim?
"Canarim" is spoken in some regions of Brazil (mainly by grandparents) and could be translated as "canarinho," which means "little canary" in English. "Canarim" (pronounced: kɑnɑrɪm), or canary, is a bird very present in Brazilian daily life, living for up to 30 years. Every Brazilian at some point in their life has come across this bird, which is why I chose this name for my project.
## Source Data
This dataset was created through translation and adaptation from the following sources:
* [**dominguesm/alpaca-data-pt-br**](https://huggingface.co/datasets/dominguesm/alpaca-data-pt-br) (*51759 rows*)
* [**cahya/instructions-pt**](https://huggingface.co/datasets/cahya/instructions-pt) (*57692 rows*)
* [**HuggingFaceH4/self_instruct**](https://huggingface.co/datasets/HuggingFaceH4/self_instruct) (*74350 rows*)
* [**HuggingFaceH4/helpful_instructions (self_instruct)**](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions/viewer/self_instruct/train) (*82612 rows*)
* [**HuggingFaceH4/helpful_instructions (super_natural_instructions)**](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions/viewer/super_natural_instructions/train) (*50000 rows*)
In total there are 316k lines in the training split and 1,519 lines in the test split. Every line contains an "instruction" and an "output", and may optionally contain a "context".
There may be translation errors in the data.
## Prompt
Below is the prompt template I am using with this dataset:
```python
def generate_prompt(instruction, input=None):
    if input:
        return f"""Abaixo está uma instrução que descreve uma tarefa, emparelhada com uma entrada que fornece mais contexto. Escreva uma resposta que conclua adequadamente a solicitação.
### Instruções:
{instruction}
### Entrada:
{input}
### Resposta:"""
    else:
        return f"""Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que conclua adequadamente a solicitação.
### Instruções:
{instruction}
### Resposta:"""
```
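As a quick sanity check of the template, here is a hedged usage sketch (the function is copied from above so the block runs standalone):

```python
# Prompt template from the Canarim card; copied here so the example is self-contained.
def generate_prompt(instruction, input=None):
    if input:
        return f"""Abaixo está uma instrução que descreve uma tarefa, emparelhada com uma entrada que fornece mais contexto. Escreva uma resposta que conclua adequadamente a solicitação.
### Instruções:
{instruction}
### Entrada:
{input}
### Resposta:"""
    else:
        return f"""Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que conclua adequadamente a solicitação.
### Instruções:
{instruction}
### Resposta:"""

with_input = generate_prompt("Compare os dois assuntos a seguir",
                             "Inteligência Artificial e Machine Learning")
no_input = generate_prompt("Sugira uma estratégia de marketing eficaz para um programa de TV.")

# The input-less variant omits the "### Entrada:" section entirely.
print("### Entrada:" in with_input)  # → True
print("### Entrada:" in no_input)    # → False
```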
## Examples of prompts
```json
{
"instruction": "Sugira uma estratégia de marketing eficaz para um programa de TV.",
"input": "",
"output": "Uma estratégia de marketing eficaz para um programa de TV é criar um trailer de teaser a ser lançado antes da estréia do programa na televisão e outras plataformas digitais. O trailer de teaser deve ser curto, dinâmico e atraente, a fim de desencadear a curiosidade e o interesse nos espectadores. Ele também deve conter uma linha de tags convincente ou \"gancho\" que resume a premissa do programa. Além disso, criar uma hashtag associada ao programa de conteúdo da TV e pedir aos espectadores que se juntem à conversa nas mídias sociais."
}
```
```json
{
"instruction": "Compare os dois assuntos a seguir",
"input": "Inteligência Artificial e Machine Learning",
"output": "Inteligência Artificial (IA) e Machine Learning (ML) são campos da ciência da computação que envolvem o desenvolvimento de máquinas inteligentes que podem aprender com dados e executar tarefas de forma autônoma. A IA se concentra no comportamento semelhante ao humano e envolve o desenvolvimento de algoritmos e estruturas de dados que permitem que as máquinas aprendam através do reconhecimento de padrões e do raciocínio lógico. A ML, por outro lado, é um subconjunto de IA que se concentra em ensinar computadores a otimizar tarefas com um certo grau de precisão."
}
```
## License Information
### Overview
The Canarim-Instruct dataset is now licensed under CC-BY-NC (Creative Commons Attribution-NonCommercial). This change is a result of my commitment to ethical data usage and legal compliance, particularly in the realm of derived data and AI-generated content.
### Why CC-BY-NC?
My decision to adopt the CC-BY-NC license comes from a detailed assessment of the origins and intended use of the Canarim-Instruct dataset. Portions of our dataset derive from or are influenced by models from OpenAI (e.g., Self-instruct, Alpaca). In light of this, adherence to specific guidelines on the usage of such data is essential.
The policy of OpenAI limits the use of its model generations for training other models, especially in commercial scenarios. To align with these guidelines and ensure the responsible use of AI-generated data, the CC-BY-NC license was selected as the most appropriate.
### What Does This Mean for Users?
- **Remixing and Adaptation**: Users are free to remix, adapt, and build upon the Canarim-Instruct dataset non-commercially.
- **Credit**: Proper attribution must be given to me as the creator of the dataset, with a link to the license and an indication of any changes made.
- **Non-Commercial Use**: The dataset is not to be used for commercial purposes under this license.
I believe that the CC-BY-NC license strikes a balance between open accessibility and the legal and ethical considerations surrounding AI-generated data. My aim is to create an environment where the community can utilize this valuable resource for research and development while respecting the boundaries set by the origins of the data and relevant policies.
## Citation
If you want to cite **Canarim Instruct PTBR dataset**, you could use this:
```
@misc {maicon_domingues_2023,
author = { {Maicon Domingues} },
title = { Canarim-Instruct-PTBR-Dataset (Revision c2de751) },
year = 2023,
url = { https://huggingface.co/datasets/dominguesm/Canarim-Instruct-PTBR-Dataset },
doi = { 10.57967/hf/0983 },
publisher = { Hugging Face }
}
``` | [
-0.40930771827697754,
-0.4946253299713135,
0.17018674314022064,
0.5085498094558716,
-0.41771697998046875,
-0.20364618301391602,
-0.24659007787704468,
-0.37470054626464844,
0.15531444549560547,
0.22843635082244873,
-0.6851043105125427,
-0.7421029210090637,
-0.7173706293106079,
0.33544722199... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/phoneme | mstz | 2023-04-11T00:14:47Z | 27 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"phoneme",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-11T00:14:47Z | 2023-04-11T00:14:16.000Z | 2023-04-11T00:14:16 | ---
language:
- en
tags:
- phoneme
- tabular_classification
- binary_classification
pretty_name: Phoneme
size_categories:
- 10K<n<100K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- phoneme
---
# Phoneme
The [Phoneme dataset](https://www.openml.org/search?type=data&sort=runs&id=1489&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| phoneme | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/phoneme")["train"]
```
| [
-0.3163497745990753,
-0.05034858360886574,
0.14970354735851288,
0.1245843917131424,
-0.3469487726688385,
-0.45579177141189575,
-0.443878173828125,
-0.12249242514371872,
-0.05254458263516426,
0.49587658047676086,
-0.3174261748790741,
-0.9801895618438721,
-0.3519448935985565,
0.2873515784740... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/iris | mstz | 2023-04-28T13:35:36Z | 27 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"iris",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-28T13:35:36Z | 2023-04-12T10:52:47.000Z | 2023-04-12T10:52:47 | ---
language:
- en
tags:
- iris
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Iris
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- iris
- setosa
- versicolor
- virginica
license: cc
---
# Iris
The [Iris dataset](https://archive-beta.ics.uci.edu/dataset/53/iris) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------------|
| iris | Multiclass classification | Classify iris type. |
| setosa            | Binary classification     | Is this an iris-setosa?       |
| versicolor        | Binary classification     | Is this an iris-versicolor?   |
| virginica         | Binary classification     | Is this an iris-virginica?    |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/iris", "iris")["train"]
``` | [
-0.4146670997142792,
-0.10514901578426361,
-0.07537885755300522,
0.4183635115623474,
0.02270202897489071,
-0.10132798552513123,
-0.06544443219900131,
-0.3000970482826233,
0.2948090434074402,
0.43480151891708374,
-0.6672229766845703,
-0.5982245206832886,
-0.47533199191093445,
0.415929585695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gimmaru/story_cloze-2016 | gimmaru | 2023-05-08T03:00:51Z | 27 | 1 | null | [
"region:us"
] | 2023-05-08T03:00:51Z | 2023-05-08T03:00:22.000Z | 2023-05-08T03:00:22 | ---
dataset_info:
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: test
num_bytes: 326264
num_examples: 1000
download_size: 0
dataset_size: 326264
---
# Dataset Card for "story_cloze-2016"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5476750731468201,
-0.3673403859138489,
0.3262026607990265,
0.0705946832895279,
-0.5089200139045715,
-0.10002130270004272,
0.09518417716026306,
-0.44726061820983887,
0.944488525390625,
0.5475404262542725,
-0.9564083218574524,
-1.0099295377731323,
-0.5124413967132568,
-0.19925692677497864... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Thaweewat/alpaca-finance-43k-th | Thaweewat | 2023-05-09T19:05:48Z | 27 | 2 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | 2023-05-09T19:05:48Z | 2023-05-09T19:01:32.000Z | 2023-05-09T19:01:32 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
🇹🇭 A Thai instruction dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), with an additional 1.3k custom pairs generated using GPT-3.5.
A script for fine-tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
| [
-0.36791858077049255,
-0.8025489449501038,
0.07535989582538605,
0.42873528599739075,
-0.6668680906295776,
0.03511551395058632,
-0.177501380443573,
-0.617325484752655,
0.5369478464126587,
0.748684823513031,
-0.6254721283912659,
-0.7262765169143677,
-0.5583282709121704,
-0.03215717151761055,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Abrumu/Fashion_controlnet_dataset_V3 | Abrumu | 2023-05-19T09:44:48Z | 27 | 10 | null | [
"region:us"
] | 2023-05-19T09:44:48Z | 2023-05-18T17:04:45.000Z | 2023-05-18T17:04:45 | ---
dataset_info:
features:
- name: target
dtype: image
- name: mask
dtype: image
- name: cloth
dtype: image
- name: control
dtype: image
- name: prompt
dtype: string
- name: CLIP_captions
dtype: string
splits:
- name: train
num_bytes: 7964862365.0
num_examples: 11647
download_size: 7944023014
dataset_size: 7964862365.0
---
# Dataset Card for "Fashion_controlnet_dataset_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.32843172550201416,
-0.020482050254940987,
0.019934657961130142,
0.3976115882396698,
-0.25581786036491394,
-0.0756358802318573,
0.6245683431625366,
-0.34637123346328735,
0.823643684387207,
0.5284572839736938,
-1.0684369802474976,
-0.7603318691253662,
-0.40317302942276,
-0.274283468723297... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joey234/mmlu-clinical_knowledge | joey234 | 2023-08-23T04:29:12Z | 27 | 0 | null | [
"region:us"
] | 2023-08-23T04:29:12Z | 2023-05-19T04:30:31.000Z | 2023-05-19T04:30:31 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 4228
num_examples: 5
- name: test
num_bytes: 848200
num_examples: 265
download_size: 103156
dataset_size: 852428
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-clinical_knowledge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2871115505695343,
-0.2715071439743042,
0.5047482848167419,
-0.041309159249067307,
-0.06350140273571014,
-0.07690393924713135,
0.31093984842300415,
-0.12347064912319183,
0.7925851941108704,
0.3427739441394806,
-0.82247394323349,
-0.8578399419784546,
-0.7581915259361267,
-0.25655964016914... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Glavin001/startup-interviews | Glavin001 | 2023-06-29T05:59:47Z | 27 | 9 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | 2023-06-29T05:59:47Z | 2023-06-27T23:01:18.000Z | 2023-06-27T23:01:18 | ---
license: cc-by-nc-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
izumi-lab/oscar2301-ja-filter-ja-normal | izumi-lab | 2023-07-29T03:16:00Z | 27 | 2 | null | [
"language:ja",
"license:cc0-1.0",
"region:us"
] | 2023-07-29T03:16:00Z | 2023-07-12T16:38:36.000Z | 2023-07-12T16:38:36 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68837059273.1919
num_examples: 31447063
download_size: 54798731310
dataset_size: 68837059273.1919
license: cc0-1.0
language:
- ja
---
# Dataset Card for "oscar2301-ja-filter-ja-normal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8360308408737183,
-0.21252191066741943,
0.1553707867860794,
-0.028104588389396667,
-0.5010870695114136,
-0.1284351348876953,
0.33893877267837524,
-0.18490557372570038,
1.1262013912200928,
0.8761356472969055,
-0.6647391319274902,
-0.8068147897720337,
-0.7108593583106995,
-0.1125466823577... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
npvinHnivqn/EnglishDictionary | npvinHnivqn | 2023-07-15T15:53:25Z | 27 | 1 | null | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:en",
"license:afl-3.0",
"region:us"
] | 2023-07-15T15:53:25Z | 2023-07-15T15:51:04.000Z | 2023-07-15T15:51:04 | ---
license: afl-3.0
task_categories:
- token-classification
language:
- en
size_categories:
- 100K<n<1M
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SachinKaushik/LlamaV2InstructCode | SachinKaushik | 2023-07-21T19:17:00Z | 27 | 3 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"python",
"llamav2",
"instruction",
"code",
"region:us"
] | 2023-07-21T19:17:00Z | 2023-07-21T17:41:06.000Z | 2023-07-21T17:41:06 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: llamaV2Instruct
dtype: string
splits:
- name: train
num_bytes: 241331660
num_examples: 121959
download_size: 0
dataset_size: 241331660
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- python
- llamav2
- instruction
- code
---
# Dataset Card for "LlamaV2InstructCode"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3106459677219391,
0.02896343171596527,
0.2010757327079773,
0.4223116934299469,
-0.33065372705459595,
0.2030733823776245,
0.46249496936798096,
-0.11473910510540009,
0.6676433086395264,
0.6286771297454834,
-0.7908158302307129,
-0.8761231899261475,
-0.6554279327392578,
-0.25437721610069275... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qanastek/LLaMaInstructionsFrenchMedMCQA | qanastek | 2023-07-21T23:45:31Z | 27 | 1 | frenchmedmcqa | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:original",
"lan... | 2023-07-21T23:45:31Z | 2023-07-21T23:29:30.000Z | 2023-07-21T23:29:30 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- fr
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1k<n<10k
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: frenchmedmcqa
pretty_name: FrenchMedMCQA
---
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose the first baseline models to automatically process this MCQA task in order to report on the current performance and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "230bac49b0fe863b772410bc8d01a025f63c3c999065480131d6334abd2efeff",
"prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: We are giving you a scientific question (easy level) and five answers options (associated to « A », « B », « C », « D », « E »). Your task is to find the correct(s) answer(s) based on scientific facts, knowledge and reasoning. Don't generate anything other than one of the following characters : 'A B C D E'. ### Input: Parmi les affirmations suivantes, une seule est fausse, indiquer laquelle: les particules alpha (A) Sont formées de noyaux d'hélium (B) Sont peu pénétrantes (C) Toute l'énergie qu'elles transportent est cédée au long d'un parcours de quelques centimètres dans l'air (D) Sont arrêtées par une feuille de papier (E) Sont peu ionisantes ### Response: E",
"prompt_no_answer": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: We are giving you a scientific question (easy level) and five answers options (associated to « A », « B », « C », « D », « E »). Your task is to find the correct(s) answer(s) based on scientific facts, knowledge and reasoning. Don't generate anything other than one of the following characters : 'A B C D E'. ### Input: Parmi les affirmations suivantes, une seule est fausse, indiquer laquelle: les particules alpha (A) Sont formées de noyaux d'hélium (B) Sont peu pénétrantes (C) Toute l'énergie qu'elles transportent est cédée au long d'un parcours de quelques centimètres dans l'air (D) Sont arrêtées par une feuille de papier (E) Sont peu ionisantes ### Response:",
"correct_answers": [4],
}
```
### Data Fields
- `id` : a string question identifier for each example
- `prompt` : prompt text formatted for LLaMa (a string)
- `correct_answers` : the correct option(s), given as 0-based indices into the five options (e.g., `[4]` corresponds to option E)
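Since the target characters expected in the `prompt` are letters while `correct_answers` stores indices, a small helper is handy for converting between the two. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
def indices_to_letters(correct_answers):
    """Map 0-based option indices to their option letters (0 -> A, ..., 4 -> E)."""
    return [chr(ord("A") + i) for i in correct_answers]

# The sample instance above has correct_answers [4], i.e. option E.
print(indices_to_letters([4]))        # ['E']
print(indices_to_letters([0, 3, 4]))  # ['A', 'D', 'E']
```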
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2171 | 312 | 622 | 3,105 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is 13k words, of which 3.8k are estimated to be medical domain-specific words (i.e. words related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17% of the words) and 2 in each answer (36% of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this useful in your research, please consider citing the dataset paper :
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
| [
-0.4362001121044159,
-0.7643755674362183,
0.6097506880760193,
0.0004301695735193789,
-0.0027882568538188934,
-0.04004592448472977,
0.07332007586956024,
-0.09063275158405304,
0.5717478394508362,
0.5032660365104675,
-0.6972712874412537,
-0.5894061326980591,
-0.6167733669281006,
0.50133627653... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdpc/rvl_cdip_mp | bdpc | 2023-08-11T12:44:13Z | 27 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-08-11T12:44:13Z | 2023-08-11T09:55:56.000Z | 2023-08-11T09:55:56 | ---
license: cc-by-nc-4.0
---
# Dataset Card for RVL-CDIP_MultiPage
## Extension
The data loader provides support for loading RVL-CDIP in its extended multi-page format.
Since the dataset binaries are huge (80 GB), they are hosted elsewhere: [LINK](https://shorturl.at/adyC7)
-0.9951896071434021,
-0.15131983160972595,
0.006909586489200592,
0.6265802383422852,
-0.4033994674682617,
0.026761388406157494,
0.002609200542792678,
-0.14324288070201874,
0.2540026903152466,
0.807122528553009,
-0.5845506191253662,
-0.4631255567073822,
-0.27881622314453125,
0.0490924455225... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pkufool/libriheavy | pkufool | 2023-09-19T11:35:45Z | 27 | 4 | null | [
"license:apache-2.0",
"arxiv:2309.08105",
"region:us"
] | 2023-09-19T11:35:45Z | 2023-08-21T11:20:42.000Z | 2023-08-21T11:20:42 | ---
license: apache-2.0
---
# Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context
Libriheavy is a labeled version of [Librilight](https://github.com/facebookresearch/libri-light); read our [paper](https://arxiv.org/abs/2309.08105) and see https://github.com/k2-fsa/libriheavy for more details.
## Citation
```
@misc{kang2023libriheavy,
title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
year={2023},
eprint={2309.08105},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
| [
0.10301312059164047,
-0.4361165463924408,
0.5997394919395447,
0.27588510513305664,
-0.3046746850013733,
0.03911507502198219,
-0.34685397148132324,
-0.5188398957252502,
0.17328199744224548,
0.5258815884590149,
-0.15192510187625885,
-0.4537152349948883,
-0.01942356862127781,
0.22981141507625... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ArmelR/oasst1_guanaco_english | ArmelR | 2023-08-26T01:05:26Z | 27 | 1 | null | [
"region:us"
] | 2023-08-26T01:05:26Z | 2023-08-26T01:05:22.000Z | 2023-08-26T01:05:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 2500171.710492605
num_examples: 2181
- name: test
num_bytes: 278561.0846628625
num_examples: 243
download_size: 1690262
dataset_size: 2778732.7951554675
---
# Dataset Card for "oasst1_guanaco_english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.26927703619003296,
-0.3193947374820709,
0.21973533928394318,
0.20957128703594208,
-0.23306521773338318,
-0.07225105166435242,
0.08453314006328583,
-0.2524770200252533,
0.8710165619850159,
0.3018836975097656,
-0.6218435168266296,
-0.9946979284286499,
-0.7523857951164246,
-0.2156956791877... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wbensvage/clothes_desc | wbensvage | 2023-08-29T19:14:36Z | 27 | 1 | null | [
"task_categories:text-to-image",
"annotations_creators:human generated by using detail_desc and color",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n=1K",
"source_datasets:www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations",
"language:en",
"license:... | 2023-08-29T19:14:36Z | 2023-08-29T11:55:35.000Z | 2023-08-29T11:55:35 | ---
license: apache-2.0
annotations_creators:
- human generated by using detail_desc and color
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'H&M Clothes captions'
size_categories:
- n=1K
source_datasets:
- www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for H&M Clothes captions
Dataset used to train/finetune clothes text-to-image models.
Captions are generated using the `detail_desc` and `colour_group_name` or `perceived_colour_master_name` fields from the Kaggle H&M Personalized Fashion Recommendations competition. The original images were also obtained from the same competition (https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/data?select=images).
For each row the dataset contains `image` and `text` keys. `image` is a PIL JPEG of varying size, and `text` is the accompanying text caption. Only a train split is provided.
---
| [
-0.48709025979042053,
-0.3940514326095581,
0.12866979837417603,
-0.036428142338991165,
-0.6792436838150024,
0.13446293771266937,
-0.07656446099281311,
-0.4219907224178314,
0.24601174890995026,
0.48663330078125,
-1.2245672941207886,
-0.5056878924369812,
-0.39614924788475037,
0.2458802759647... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
legacy107/bioasq10b-factoid | legacy107 | 2023-09-06T13:45:03Z | 27 | 2 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | 2023-09-06T13:45:03Z | 2023-09-06T13:39:03.000Z | 2023-09-06T13:39:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: long_answer
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 3321906
num_examples: 1252
- name: test
num_bytes: 318200
num_examples: 166
download_size: 1758966
dataset_size: 3640106
task_categories:
- question-answering
language:
- en
tags:
- medical
pretty_name: BioASQ10b (factoid only)
size_categories:
- 1K<n<10K
---
# Dataset Card for "bioasq10b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6127057075500488,
-0.17212443053722382,
0.37489834427833557,
0.33524319529533386,
-0.23432692885398865,
0.1967480480670929,
0.5550491213798523,
-0.23182404041290283,
1.0629971027374268,
0.4019851088523865,
-0.7932576537132263,
-0.616794228553772,
-0.520984411239624,
-0.02272616140544414... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Divya1287/llama2 | Divya1287 | 2023-09-20T06:33:37Z | 27 | 0 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] | 2023-09-20T06:33:37Z | 2023-09-14T09:41:19.000Z | 2023-09-14T09:41:19 | ---
license: openrail
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
pretty_name: prompt
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/grade_school_math_instructions | dim | 2023-09-25T13:50:09Z | 27 | 1 | null | [
"region:us"
] | 2023-09-25T13:50:09Z | 2023-09-25T13:50:04.000Z | 2023-09-25T13:50:04 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 4804916
num_examples: 8792
download_size: 2555411
dataset_size: 4804916
---
# Dataset Card for "grade_school_math_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46062904596328735,
-0.453346312046051,
0.24512344598770142,
0.3620809018611908,
0.023724179714918137,
-0.10508176684379578,
0.29288583993911743,
0.4201527237892151,
0.5220503807067871,
0.36049407720565796,
-1.012402057647705,
-0.9351279735565186,
-0.5190525054931641,
-0.4920762777328491... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/kamus_alay | SEACrowd | 2023-09-26T12:28:13Z | 27 | 0 | null | [
"language:ind",
"license:unknown",
"morphological-inflection",
"region:us"
] | 2023-09-26T12:28:13Z | 2023-09-26T11:11:16.000Z | 2023-09-26T11:11:16 | ---
license: unknown
tags:
- morphological-inflection
language:
- ind
---
# kamus_alay
Kamus Alay provides a lexicon for text normalization of Indonesian colloquial words.
It contains 3,592 unique colloquial words (also known as "bahasa alay") that were manually annotated
with their normalized forms. The lexicon was built from Instagram comments provided by Septiandri & Wibisono (2017).
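A lexicon like this is typically applied as a simple token-level lookup during text normalization. The sketch below uses made-up colloquial-to-normalized entries purely for illustration; they are not taken from the actual dataset:

```python
# Illustrative colloquial -> normalized entries; NOT real Kamus Alay data.
lexicon = {
    "gue": "saya",   # assumed example mappings
    "ga": "tidak",
    "bgt": "banget",
}

def normalize(text, lexicon):
    """Replace each whitespace-separated token with its normalized form when the lexicon has one."""
    return " ".join(lexicon.get(tok, tok) for tok in text.split())

print(normalize("gue ga suka bgt", lexicon))  # saya tidak suka banget
```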
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@INPROCEEDINGS{8629151,
author={Aliyah Salsabila, Nikmatun and Ardhito Winatmoko, Yosef and Akbar Septiandri, Ali and Jamal, Ade},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
title={Colloquial Indonesian Lexicon},
year={2018},
volume={},
number={},
pages={226-229},
doi={10.1109/IALP.2018.8629151}}
```
## License
Unknown
## Homepage
[https://ieeexplore.ieee.org/abstract/document/8629151](https://ieeexplore.ieee.org/abstract/document/8629151)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5000583529472351,
-0.4757567048072815,
0.1446433663368225,
0.31030112504959106,
-0.364594966173172,
-0.2750820815563202,
-0.3304247558116913,
-0.4711291790008545,
0.6594410538673401,
0.5905662178993225,
-0.10191874951124191,
-0.62938392162323,
-0.6102725863456726,
0.6526949405670166,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mxode/StackOverflow-QA-C-Language-5k | Mxode | 2023-10-02T10:30:48Z | 27 | 1 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | 2023-10-02T10:30:48Z | 2023-10-02T10:08:11.000Z | 2023-10-02T10:08:11 | ---
license: apache-2.0
language:
- en
tags:
- code
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---
This is a collection of ~5,000 QA pairs in the **C language** from Stack Overflow. The data has been initially cleaned, and each question comes with its **Accepted Answer**.
Every entry is **under 500** in length.
The questions and answers were organized into a **one-line** format. A sample format is shown below:
```json
{
"question": "```\nFILE* file = fopen(some file)\n\npcap_t* pd = pcap_fopen_offline(file)\n\npcap_close(pd)\n\nfclose(file)\n```\n\nThis code occurs double free error.\n\nCould you explain about this happening?\n\nMy Guess is that pd and file pointers are sharing some datas.\n",
"answer": "As the documentation says, thepcap_closefunction closes the files associated with thepcap_tstructure passed to it. Closing the file again withfcloseis an error.\n"
}
``` | [
-0.18858279287815094,
-0.6712357997894287,
0.4554745554924011,
0.6123675107955933,
-0.3789874017238617,
0.39930447936058044,
0.14501634240150452,
-0.3007137179374695,
0.15351106226444244,
0.6911065578460693,
-0.27513813972473145,
-0.3644479215145111,
-0.385568231344223,
0.09098953753709793... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
orgcatorg/israel-hamas-gaza-cnn | orgcatorg | 2023-11-28T04:07:17Z | 27 | 0 | null | [
"region:us"
] | 2023-11-28T04:07:17Z | 2023-10-10T14:16:59.000Z | 2023-10-10T14:16:59 | ---
dataset_info:
features:
- name: '@type'
dtype: string
- name: headline
dtype: string
- name: url
dtype: string
- name: dateModified
dtype: string
- name: datePublished
dtype: string
- name: mainEntityOfPage
dtype: string
- name: publisher
dtype: string
- name: author
dtype: string
- name: articleBody
dtype: string
- name: image
dtype: string
configs:
- config_name: default
data_files:
- split: train
path: data-*
---
# Dataset Card for "israel-hamas-gaza-cnn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6187560558319092,
-0.047777120023965836,
0.22585338354110718,
0.3246687650680542,
-0.366678386926651,
-0.013957327231764793,
0.225724458694458,
-0.12704205513000488,
0.7005921602249146,
0.2932744026184082,
-0.663637101650238,
-0.9004150629043579,
-0.9019954800605774,
-0.3703247904777527... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Otter-AI/MathVista | Otter-AI | 2023-10-30T18:13:46Z | 27 | 1 | null | [
"region:us"
] | 2023-10-30T18:13:46Z | 2023-10-12T08:15:46.000Z | 2023-10-12T08:15:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zelalt/MLPapers-Arxiv | zelalt | 2023-10-24T16:03:18Z | 27 | 0 | null | [
"region:us"
] | 2023-10-24T16:03:18Z | 2023-10-23T23:10:03.000Z | 2023-10-23T23:10:03 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 145682026
num_examples: 117592
download_size: 83722678
dataset_size: 145682026
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "MLPapers-Arxiv"
Original Dataset: [CShorten/ML-ArXiv-Papers](https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers)
| [
-0.31812259554862976,
-0.09387653321027756,
-0.02562722936272621,
0.19925038516521454,
-0.41036760807037354,
-0.13415852189064026,
0.16633401811122894,
0.2743745446205139,
0.7719082832336426,
0.6396149396896362,
-0.62513267993927,
-0.6426587104797363,
-0.4500991106033325,
-0.23398742079734... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BEE-spoke-data/Long-Data-Col-rp_pile_pretrain | BEE-spoke-data | 2023-10-26T02:01:57Z | 27 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"source_datasets:togethercomputer/Long-Data-Collections",
"license:other",
"long boi",
"region:us"
] | 2023-10-26T02:01:57Z | 2023-10-25T01:52:15.000Z | 2023-10-25T01:52:15 | ---
license: other
size_categories:
- 1M<n<10M
source_datasets: togethercomputer/Long-Data-Collections
task_categories:
- text-generation
- fill-mask
- feature-extraction
configs:
- config_name: cleaned
data_files:
- split: train
path: cleaned/train-*
- config_name: cleaned-dedup
data_files:
- split: train
path: cleaned-dedup/train-*
- config_name: cleaned-dedup-en
data_files:
- split: train
path: cleaned-dedup-en/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
- config_name: cleaned
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 16969436991
num_examples: 2759555
download_size: 9521997027
dataset_size: 16969436991
- config_name: cleaned-dedup
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 13009681081
num_examples: 2712907
download_size: 7319241627
dataset_size: 13009681081
- config_name: cleaned-dedup-en
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 12723856310.202166
num_examples: 2653304
download_size: 7180653999
dataset_size: 12723856310.202166
- config_name: default
features:
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 16821991568.354612
num_examples: 2759555
download_size: 9685120636
dataset_size: 16821991568.354612
tags:
- long boi
---
# Dataset Card for "Long-Data-Col-rp_pile_pretrain"
This dataset is a subset of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections), namely the `rp_sub.jsonl.zst` and `pile_sub.jsonl.zst` files from the `pretrain` split.
Like the source dataset, we do not attempt to modify/change licenses of underlying data. Refer to the source dataset (and its source datasets) for details.
## changes
1. As this is intended to be a "long text" dataset, we drop all rows where `text` contains <= 250 characters. This removes approximately 100k rows from the raw data. The resulting stats are below.
| | text_len |
|:------|----------------:|
| count | 2.75956e+06 |
| mean | 6195.11 |
| std | 56364.9 |
| min | 251 |
| 25% | 1102 |
| 50% | 2147 |
| 75% | 4762 |
| max | 4.66452e+07 |
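The length filter described above amounts to a single pass over the rows. A minimal sketch, using the dataset's `text`/`meta` schema, the stated 250-character threshold, and toy rows in place of the real data:

```python
# Toy rows standing in for the real data; each row has the dataset's
# `text` and `meta` fields.
rows = [
    {"text": "short", "meta": "{}"},    # dropped: <= 250 characters
    {"text": "x" * 251, "meta": "{}"},  # kept: 251 characters
    {"text": "y" * 1000, "meta": "{}"}, # kept: 1000 characters
]

# Drop all rows whose `text` contains <= 250 characters.
kept = [r for r in rows if len(r["text"]) > 250]

print(len(kept))                          # 2
print(min(len(r["text"]) for r in kept))  # 251
```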
--- | [
-0.6843467354774475,
-0.418425977230072,
0.22923362255096436,
0.18327844142913818,
-0.8071332573890686,
0.056013960391283035,
-0.23019015789031982,
-0.3475463390350342,
0.702330470085144,
0.5898509621620178,
-0.8894662857055664,
-0.7868552803993225,
-0.6174159049987793,
0.23152899742126465... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kardosdrur/europarl-scandinavian | kardosdrur | 2023-10-25T08:38:29Z | 27 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-25T08:38:29Z | 2023-10-25T06:54:03.000Z | 2023-10-25T06:54:03 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: da
dtype: string
- name: en
dtype: string
- name: sv
dtype: string
splits:
- name: train
num_bytes: 620348322.4
num_examples: 1304296
- name: test
num_bytes: 155087080.6
num_examples: 326074
download_size: 488376564
dataset_size: 775435403.0
---
# Europarl Scandinavian Languages
The data originates from the Europarl parallel corpus, where English transcriptions of parliamentary discussions were aligned
with a number of other languages algorithmically.
In order to align the Danish and Swedish corpora in the dataset, English entries were hashed with 128-bit MurmurHash3,
and the Danish and Swedish transcriptions were joined on the obtained hash values.
Entries that had more than one pair in the other dataset were removed; this ensures that no false positives due to hash collisions
got into the dataset.
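The alignment procedure above can be sketched as a hash-join. The snippet below uses MD5 (also a 128-bit digest) as a standard-library stand-in for MurmurHash3, and made-up sentence pairs rather than real Europarl data:

```python
import hashlib
from collections import Counter

def h(english: str) -> str:
    # 128-bit stand-in for MurmurHash3; MD5 digests are also 128 bits.
    return hashlib.md5(english.encode("utf-8")).hexdigest()

# Illustrative (en, da) and (en, sv) pairs; not real Europarl data.
da_pairs = [("Good morning.", "Godmorgen."), ("Thank you.", "Tak."),
            ("Thank you.", "Mange tak.")]  # duplicated English entry
sv_pairs = [("Good morning.", "God morgon."), ("Thank you.", "Tack.")]

def to_map(pairs):
    counts = Counter(h(en) for en, _ in pairs)
    # Keep only hashes occurring exactly once, so ambiguous joins
    # (including any hash collisions) cannot produce false positives.
    return {h(en): (en, other) for en, other in pairs if counts[h(en)] == 1}

da_map, sv_map = to_map(da_pairs), to_map(sv_pairs)
aligned = [{"en": da_map[k][0], "da": da_map[k][1], "sv": sv_map[k][1]}
           for k in da_map.keys() & sv_map.keys()]

print(aligned)  # [{'en': 'Good morning.', 'da': 'Godmorgen.', 'sv': 'God morgon.'}]
```

Note how the duplicated "Thank you." entry is excluded from the join, mirroring the removal rule described above.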
Source code is available in the repository.
The dataset was created for aiding the training of sentence transformer models in the Danish Foundation Models project.
| [
-0.41500619053840637,
-0.45856010913848877,
0.6649718880653381,
-0.10669276863336563,
-0.23292328417301178,
0.3336596190929413,
-0.2135516107082367,
-0.30745476484298706,
0.24138964712619781,
0.715209424495697,
-0.5080034732818604,
-0.680458664894104,
-0.5910788178443909,
0.250176757574081... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thanhduycao/soict_private_test_v2 | thanhduycao | 2023-10-28T06:52:08Z | 27 | 0 | null | [
"region:us"
] | 2023-10-28T06:52:08Z | 2023-10-28T06:51:47.000Z | 2023-10-28T06:51:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 567746816
num_examples: 2139
download_size: 461190048
dataset_size: 567746816
---
# Dataset Card for "soict_private_test_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.30114176869392395,
-0.27617835998535156,
0.12498034536838531,
0.3268486559391022,
-0.2780723571777344,
-0.19640947878360748,
0.4342465102672577,
-0.11706507205963135,
0.5392453670501709,
0.5348734855651855,
-0.8587661385536194,
-0.6755026578903198,
-0.5477922558784485,
-0.35796502232551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dajor85570/invoices-and-receipts_ocr_v1 | dajor85570 | 2023-10-28T13:33:00Z | 27 | 0 | null | [
"region:us"
] | 2023-10-28T13:33:00Z | 2023-10-28T13:26:27.000Z | 2023-10-28T13:26:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: parsed_data
dtype: string
- name: raw_data
dtype: string
splits:
- name: train
num_bytes: 465061949.289
num_examples: 2043
- name: test
num_bytes: 23808463.0
num_examples: 125
- name: valid
num_bytes: 22325731.0
num_examples: 70
download_size: 281665599
dataset_size: 511196143.289
---
# Dataset Card for "invoices-and-receipts_ocr_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.22217807173728943,
-0.02949884906411171,
0.29868951439857483,
-0.07206987589597702,
-0.3415440022945404,
-0.22955265641212463,
0.5392619371414185,
-0.4251926839351654,
0.6555123925209045,
0.869098424911499,
-0.5690856575965881,
-0.6549307703971863,
-0.6198427081108093,
-0.23575861752033... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Otter-AI/MagnifierBench | Otter-AI | 2023-11-07T03:07:33Z | 27 | 5 | null | [
"license:mit",
"region:us"
] | 2023-11-07T03:07:33Z | 2023-10-29T05:17:28.000Z | 2023-10-29T05:17:28 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abhishek/dpo-sample | abhishek | 2023-10-30T13:46:55Z | 27 | 0 | null | [
"region:us"
] | 2023-10-30T13:46:55Z | 2023-10-30T13:46:52.000Z | 2023-10-30T13:46:52 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 404
num_examples: 7
download_size: 1980
dataset_size: 404
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dpo-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5661584734916687,
-0.10415568202733994,
0.3709235191345215,
0.12009478360414505,
-0.35604965686798096,
0.10085891932249069,
0.5008937120437622,
-0.20709985494613647,
0.7858171463012695,
0.46526339650154114,
-0.9099280834197998,
-0.6797667741775513,
-0.583511471748352,
-0.052170768380165... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KETI-AIR/kor_amazon_polarity | KETI-AIR | 2023-11-15T01:14:28Z | 27 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ko",
"license:cc0-1.0",
"region:us"
] | 2023-11-15T01:14:28Z | 2023-11-03T06:33:37.000Z | 2023-11-03T06:33:37 | ---
language:
- ko
license: cc0-1.0
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: title
dtype: string
- name: content
dtype: string
- name: data_index_by_user
dtype: int32
splits:
- name: train
num_bytes: 2059069183
num_examples: 3600000
- name: test
num_bytes: 228905323
num_examples: 400000
download_size: 1298504656
dataset_size: 2287974506
---
# Dataset Card for amazon_polarity
## Licensing Information
The data is distributed under the [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) license.
## Source Data Citation Information
McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)
| [
-0.3989025950431824,
-0.4810542166233063,
0.4202578365802765,
0.33614838123321533,
-0.48405057191848755,
0.1123601570725441,
0.12528346478939056,
-0.4310719072818756,
0.1867038905620575,
0.7547479867935181,
-0.8227163553237915,
-0.8460658192634583,
-0.6032706499099731,
0.11730338633060455,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
royzhong/ASVS5-G | royzhong | 2023-11-05T05:18:24Z | 27 | 0 | null | [
"region:us"
] | 2023-11-05T05:18:24Z | 2023-11-05T04:17:03.000Z | 2023-11-05T04:17:03 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenthanhdo/zac2023-math | nguyenthanhdo | 2023-11-06T10:16:56Z | 27 | 0 | null | [
"region:us"
] | 2023-11-06T10:16:56Z | 2023-11-06T10:16:55.000Z | 2023-11-06T10:16:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: public_test
path: data/public_test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: explanation
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 303871
num_examples: 1200
- name: public_test
num_bytes: 31224
num_examples: 189
download_size: 172884
dataset_size: 335095
---
# Dataset Card for "zac2023-math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7964920401573181,
-0.15300817787647247,
0.19832079112529755,
0.3020184338092804,
0.0054777092300355434,
0.0029591976199299097,
0.024967489764094353,
-0.00737825408577919,
0.8189561367034912,
0.47206780314445496,
-1.1271775960922241,
-0.5823460817337036,
-0.33738264441490173,
-0.39297401... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nghengu123/Al_Challenge | Nghengu123 | 2023-11-08T07:34:38Z | 27 | 0 | null | [
"license:llama2",
"region:us"
] | 2023-11-08T07:34:38Z | 2023-11-06T12:36:06.000Z | 2023-11-06T12:36:06 | ---
license: llama2
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arieg/bw_spec_cls_80_00 | arieg | 2023-11-08T10:36:55Z | 27 | 0 | null | [
"region:us"
] | 2023-11-08T10:36:55Z | 2023-11-08T10:36:48.000Z | 2023-11-08T10:36:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10'
'1': '1039'
'2': '1040'
'3': '1082'
'4': '1083'
'5': '1102'
'6': '1193'
'7': '1195'
'8': '1196'
'9': '1197'
'10': '1270'
'11': '1276'
'12': '1277'
'13': '1278'
'14': '140'
'15': '141'
'16': '1417'
'17': '1427'
'18': '1443'
'19': '1482'
'20': '1510'
'21': '1544'
'22': '1642'
'23': '1644'
'24': '1649'
'25': '1661'
'26': '1663'
'27': '1666'
'28': '1673'
'29': '1680'
'30': '1681'
'31': '1682'
'32': '1683'
'33': '1684'
'34': '1685'
'35': '190'
'36': '193'
'37': '194'
'38': '197'
'39': '2'
'40': '200'
'41': '203'
'42': '204'
'43': '207'
'44': '210'
'45': '211'
'46': '212'
'47': '213'
'48': '255'
'49': '256'
'50': '368'
'51': '424'
'52': '5'
'53': '534'
'54': '540'
'55': '546'
'56': '574'
'57': '615'
'58': '620'
'59': '621'
'60': '625'
'61': '666'
'62': '667'
'63': '676'
'64': '694'
'65': '695'
'66': '714'
'67': '715'
'68': '716'
'69': '718'
'70': '777'
'71': '814'
'72': '821'
'73': '822'
'74': '825'
'75': '853'
'76': '897'
'77': '995'
'78': '997'
'79': '998'
splits:
- name: train
num_bytes: 89804439.2
num_examples: 1600
download_size: 88034240
dataset_size: 89804439.2
---
# Dataset Card for "bw_spec_cls_80_00"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6681512594223022,
-0.20967267453670502,
0.179905965924263,
0.3565845489501953,
-0.2609364986419678,
-0.2170587033033371,
0.07478230446577072,
-0.27802392840385437,
0.6656636595726013,
0.541271448135376,
-0.7852960824966431,
-0.7713688015937805,
-0.5095733404159546,
-0.1916239708662033,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
recoilme/portraits_xs | recoilme | 2023-11-09T09:01:46Z | 27 | 0 | null | [
"region:us"
] | 2023-11-09T09:01:46Z | 2023-11-09T08:57:22.000Z | 2023-11-09T08:57:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1942655934.504
num_examples: 1401
download_size: 1954879071
dataset_size: 1942655934.504
---
# Dataset Card for "portraits_xs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45362135767936707,
0.2550382912158966,
0.16689281165599823,
0.19571015238761902,
-0.1254456341266632,
0.15009650588035583,
0.31344252824783325,
-0.22498585283756256,
0.9579638242721558,
0.4945078194141388,
-1.041585087776184,
-0.7741422653198242,
-0.6107326745986938,
-0.2306375503540039... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marmofayezi/CelebAll | marmofayezi | 2023-11-27T11:33:59Z | 27 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T11:33:59Z | 2023-11-10T15:16:41.000Z | 2023-11-10T15:16:41 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: landmark_image
dtype: image
- name: landmark_cropped_image
dtype: image
- name: landmark
sequence: int32
- name: captions_eng
sequence: string
- name: captions_pes
sequence: string
- name: captions_cmn
sequence: string
- name: captions_fra
sequence: string
- name: captions_deu
sequence: string
- name: captions_ita
sequence: string
- name: captions_spa
sequence: string
- name: captions_all
sequence: string
splits:
- name: train
num_bytes: 12992151231.096
num_examples: 196476
- name: test
num_bytes: 396349964.59099996
num_examples: 5997
download_size: 11458340429
dataset_size: 13388501195.687
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arieg/bw_spec_cls_4_01_noise_200 | arieg | 2023-11-11T18:06:32Z | 27 | 0 | null | [
"region:us"
] | 2023-11-11T18:06:32Z | 2023-11-11T18:06:21.000Z | 2023-11-11T18:06:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '141'
'1': '190'
'2': '193'
'3': '194'
splits:
- name: train
num_bytes: 48403090.0
num_examples: 800
- name: test
num_bytes: 4851289.0
num_examples: 80
download_size: 27012884
dataset_size: 53254379.0
---
# Dataset Card for "bw_spec_cls_4_01_noise_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.681585967540741,
-0.19632942974567413,
0.2803572416305542,
0.5277174711227417,
-0.22538311779499054,
-0.26468005776405334,
-0.012927788309752941,
-0.31221887469291687,
0.5255771279335022,
0.40818360447883606,
-1.0273923873901367,
-0.7812250256538391,
-0.2599656879901886,
-0.103265456855... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunglyul/stt_data_2311152 | sunglyul | 2023-11-17T08:46:24Z | 27 | 0 | null | [
"region:us"
] | 2023-11-17T08:46:24Z | 2023-11-15T08:00:59.000Z | 2023-11-15T08:00:59 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 76509.5
num_examples: 6
- name: test
num_bytes: 18911.25
num_examples: 1
- name: valid
num_bytes: 8272.25
num_examples: 1
download_size: 91071
dataset_size: 103693.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
# Dataset Card for "stt_data_2311152"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.434652179479599,
-0.2205968052148819,
0.2473972588777542,
0.3510516583919525,
-0.416890949010849,
-0.09279971569776535,
0.29532957077026367,
-0.17280088365077972,
0.985666036605835,
0.5139550566673279,
-0.7526624202728271,
-0.6134331226348877,
-0.5919265747070312,
-0.23622871935367584,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liminghao1630/API-Bank | liminghao1630 | 2023-11-17T06:41:39Z | 27 | 1 | null | [
"license:mit",
"region:us"
] | 2023-11-17T06:41:39Z | 2023-11-17T06:39:46.000Z | 2023-11-17T06:39:46 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rkdeva/QA_Dataset-3 | rkdeva | 2023-11-21T19:56:20Z | 27 | 0 | null | [
"region:us"
] | 2023-11-21T19:56:20Z | 2023-11-21T19:56:15.000Z | 2023-11-21T19:56:15 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 255350
num_examples: 785
download_size: 94494
dataset_size: 255350
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "QA_Dataset-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5286188125610352,
-0.1770721822977066,
0.49225643277168274,
0.2866207957267761,
-0.2643141746520996,
-0.07141847163438797,
0.7665050029754639,
-0.1738099902868271,
0.7680231928825378,
0.4378599226474762,
-0.7091728448867798,
-0.7395309209823608,
-0.32591357827186584,
-0.0729164779186248... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
darrel999/java-1000 | darrel999 | 2023-11-23T07:00:41Z | 27 | 0 | null | [
"region:us"
] | 2023-11-23T07:00:41Z | 2023-11-23T07:00:28.000Z | 2023-11-23T07:00:28 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: content
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 576160
num_examples: 1000
download_size: 300158
dataset_size: 576160
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
result-kand2-sdxl-wuerst-karlo/e13426e5 | result-kand2-sdxl-wuerst-karlo | 2023-11-23T14:28:13Z | 27 | 0 | null | [
"region:us"
] | 2023-11-23T14:28:13Z | 2023-11-23T14:28:12.000Z | 2023-11-23T14:28:12 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 166
num_examples: 10
download_size: 1307
dataset_size: 166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e13426e5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7430073022842407,
-0.08212123066186905,
0.2909897267818451,
0.23124690353870392,
-0.19955900311470032,
-0.35881277918815613,
0.39496222138404846,
-0.31511035561561584,
0.9738046526908875,
0.35718974471092224,
-0.988182783126831,
-0.667454183101654,
-0.5564189553260803,
0.077472634613513... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dutta18/omcs_50k_with_FAISS | dutta18 | 2023-11-25T09:55:32Z | 27 | 0 | null | [
"region:us"
] | 2023-11-25T09:55:32Z | 2023-11-25T07:27:50.000Z | 2023-11-25T07:27:50 | ---
dataset_info:
features:
- name: count
dtype: int64
- name: fact
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 157033742
num_examples: 50000
download_size: 186812200
dataset_size: 157033742
---
# Dataset Card for "omcs_50k_with_FAISS"
When people communicate, they rely on a large body of shared common sense knowledge in order to understand each other. Many barriers we face today in artificial intelligence and user interface design are due to the fact that computers do not share this knowledge. To improve computers' understanding of the world that people live in and talk about, we need to provide them with usable knowledge about the basic relationships between things that nearly every person knows.
The embeddings used for FAISS indexing are provided in the dataset's `embeddings` column.
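What the FAISS index provides can be sketched with a toy brute-force nearest-neighbor search over a few made-up embedding vectors. The facts and vectors below are illustrative, not taken from the dataset, and the real index delegates this scoring to FAISS rather than pure Python:

```python
# Illustrative sketch (not the datasets/FAISS API): brute-force inner-product
# retrieval, mimicking what add_faiss_index + get_nearest_examples do at scale.

def dot(a, b):
    # Inner product between two embedding vectors.
    return sum(x * y for x, y in zip(a, b))

def nearest(query, embeddings, k=1):
    """Return indices of the k embeddings with the highest inner product to query."""
    order = sorted(range(len(embeddings)),
                   key=lambda i: dot(query, embeddings[i]),
                   reverse=True)
    return order[:k]

facts = ["water is wet", "fire is hot", "snow is cold"]
embs = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.2]]  # toy 2-d "embeddings"
idx = nearest([0.1, 0.9], embs, k=1)
print(facts[idx[0]])  # fire is hot
```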
To build the FAISS index, execute:
dataset.add_faiss_index(column='embeddings')
The index can then be queried for nearest-neighbor retrieval. | [
-0.6345259547233582,
-0.43234190344810486,
0.09196793287992477,
0.31266772747039795,
-0.10970737040042877,
-0.20414388179779053,
0.19304631650447845,
-0.23375022411346436,
0.2132740318775177,
0.2532505989074707,
-0.49517256021499634,
-0.6933920383453369,
-0.2622861862182617,
0.266038328409... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TiffanyCheng/LLM_Bias_EECS182_Project | TiffanyCheng | 2023-11-27T01:32:27Z | 27 | 0 | null | [
"region:us"
] | 2023-11-27T01:32:27Z | 2023-11-27T00:46:56.000Z | 2023-11-27T00:46:56 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lvwerra/red-wine | lvwerra | 2022-02-15T15:55:52Z | 26 | 2 | null | [
"region:us"
] | 2022-02-15T15:55:52Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Red Wine Dataset 🍷
This dataset contains the red wine dataset found [here](https://github.com/suvoooo/Machine_Learning). See also [this](https://huggingface.co/julien-c/wine-quality) example of a Scikit-Learn model trained on this dataset. | [
-0.0916110947728157,
-0.2638236880302429,
0.25918033719062805,
0.27080991864204407,
-0.14894644916057587,
0.1690751165151596,
0.279247909784317,
-0.08872035890817642,
0.42913511395454407,
0.6994556784629822,
-0.7785386443138123,
-0.6632562875747681,
-0.33227303624153137,
-0.219336122274398... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
piEsposito/br_quad_20 | piEsposito | 2021-02-05T16:05:55Z | 26 | 0 | null | [
"region:us"
] | 2021-02-05T16:05:55Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/parlament_parla | projecte-aina | 2023-09-13T12:38:52Z | 26 | 1 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:speaker-identification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ca",
"license:cc-by-4.0"... | 2023-09-13T12:38:52Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
clean:
- 10K<n<100K
other:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-generation
task_ids:
- language-modeling
- speaker-identification
pretty_name: ParlamentParla
---
# Dataset Card for ParlamentParla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5541827
- **Repository:** https://github.com/CollectivaT-dev/ParlamentParla
- **Paper:** ParlamentParla: [A Speech Corpus of Catalan Parliamentary Sessions.](http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/2022.parlaclariniii-1.0.pdf#page=135)
- **Point of Contact:** [Baybars Kulebi](mailto:baybars.kulebi@bsc.es)
### Dataset Summary
This is the ParlamentParla speech corpus for Catalan prepared by Col·lectivaT. The audio segments were extracted from recordings of the Catalan Parliament (Parlament de Catalunya) plenary sessions, which took place between 2007/07/11 and 2018/07/17. We aligned the transcriptions with the recordings and extracted the corpus. The content belongs to the Catalan Parliament and the data is released conforming to their terms of use.
Preparation of this corpus was partly supported by the Department of Culture of the Catalan autonomous government, and the v2.0 was supported by the Barcelona Supercomputing Center, within the framework of Projecte AINA of the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya.
As of v2.0, the corpus is split into 211 hours of clean segments and 400 hours of other-quality segments. Furthermore, each speech segment is tagged with its speaker, and each speaker with their gender. The statistics are detailed in the readme file.
### Supported Tasks and Leaderboards
The dataset can be used for:
- Language Modeling.
- Automatic Speech Recognition (ASR) transcribes utterances into words.
- Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'path': 'clean_train/c/c/ccca4790a55aba3e6bcf_63.88_74.06.wav'
'audio': {
'path': 'clean_train/c/c/ccca4790a55aba3e6bcf_63.88_74.06.wav',
'array': array([-6.10351562e-05, -6.10351562e-05, -1.22070312e-04, ...,
-1.22070312e-04, 0.00000000e+00, -3.05175781e-05]),
'sampling_rate': 16000
},
'speaker_id': 167,
'sentence': "alguns d'ells avui aquí presents un agraïment a aquells que mantenen viva la memòria aquest acte de reparació i dignitat és",
'gender': 0,
'duration': 10.18
}
```
### Data Fields
- `path` (str): The path to the audio file.
- `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id` (int): The speaker ID.
- `sentence` (str): The sentence the user was prompted to speak.
- `gender` (ClassLabel): The gender of the speaker (0: 'F', 1: 'M').
- `duration` (float): Duration of the speech.
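The decode-on-access behavior described for the `audio` column can be illustrated with a small mock (this is not the `datasets` library itself; the class and decode counts are made up for illustration):

```python
# Illustrative mock: why dataset[0]["audio"] is preferred over
# dataset["audio"][0]. Row access decodes one file; column access decodes all.

class MockAudioDataset:
    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts simulated decode operations

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):   # row access: decode a single file
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":         # column access: decode every file
            return [self._decode(p) for p in self.paths]

ds = MockAudioDataset([f"seg_{i}.wav" for i in range(1000)])
_ = ds[0]["audio"]         # costs 1 decode
row_cost = ds.decodes
_ = ds["audio"][0]         # costs 1000 decodes, even for one sample
print(row_cost, ds.decodes - row_cost)  # 1 1000
```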
### Data Splits
The dataset is split in: "train", "validation" and "test".
## Dataset Creation
The dataset is created by aligning the parliamentary session transcripts
and the audiovisual content. For more detailed information please consult
this [paper](http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/2022.parlaclariniii-1.0.pdf#page=135).
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The audio segments were extracted from recordings of the Catalan Parliament
(Parlament de Catalunya) plenary sessions, which took place between 2007/07/11 and
2018/07/17. The cleaning procedures are in the archived repository [Long Audio
Aligner](https://github.com/gullabi/long-audio-aligner).
#### Who are the source language producers?
The parliamentary members of the legislatures between 2007/07/11 and 2018/07/17.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The initial content is publicly available; furthermore, the identities of
the parliamentary members are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in
Catalan, a low-resource language.
### Discussion of Biases
This dataset has a gender bias; however, since the speakers are tagged with
their gender, creating a balanced subcorpus is possible.
| Subcorpus | Gender | Duration (h) |
|-------------|----------|------------|
| other_test | F | 2.516 |
| other_dev | F | 2.701 |
| other_train | F | 109.68 |
| other_test | M | 2.631 |
| other_dev | M | 2.513 |
| other_train | M | 280.196 |
|*other total*| | 400.239 |
| clean_test | F | 2.707 |
| clean_dev | F | 2.576 |
| clean_train | F | 77.905 |
| clean_test | M | 2.516 |
| clean_dev | M | 2.614 |
| clean_train | M | 123.162 |
|*clean total*| | 211.48 |
|*Total* | | 611.719 |
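The gender tags above make building a balanced subcorpus straightforward. A minimal sketch (the `gender` field follows the corpus convention 0='F', 1='M'; the row values are made up, and downsampling by segment count rather than duration is a simplifying assumption):

```python
# Hypothetical helper: downsample the majority gender so both genders
# contribute the same number of segments.
import random

def balanced_subcorpus(rows, seed=0):
    by_gender = {0: [r for r in rows if r["gender"] == 0],
                 1: [r for r in rows if r["gender"] == 1]}
    n = min(len(by_gender[0]), len(by_gender[1]))
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(by_gender[0], n) + rng.sample(by_gender[1], n)

rows = [{"gender": 0, "duration": 5.0}] * 3 + [{"gender": 1, "duration": 4.0}] * 7
sub = balanced_subcorpus(rows)
print(len(sub))  # 6
```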
### Other Known Limitations
The text corpus belongs to the domain of Catalan politics.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@dataset{kulebi_baybars_2021_5541827,
author = {Külebi, Baybars},
title = {{ParlamentParla - Speech corpus of Catalan
Parliamentary sessions}},
month = oct,
year = 2021,
publisher = {Zenodo},
version = {v2.0},
doi = {10.5281/zenodo.5541827},
url = {https://doi.org/10.5281/zenodo.5541827}
}
```
For the paper:
```
@inproceedings{kulebi2022parlamentparla,
title={ParlamentParla: A Speech Corpus of Catalan Parliamentary Sessions},
author={K{\"u}lebi, Baybars and Armentano-Oller, Carme and Rodr{\'\i}guez-Penagos, Carlos and Villegas, Marta},
booktitle={Workshop on Creating, Enriching and Using Parliamentary Corpora},
volume={125},
number={130},
pages={125},
year={2022}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| [
-0.3834335207939148,
-0.4990360140800476,
0.1159820705652237,
0.44621148705482483,
-0.3095606863498688,
-0.05755423381924629,
-0.3344554007053375,
-0.049645572900772095,
0.4187667965888977,
0.549490213394165,
-0.3426532745361328,
-0.8446192145347595,
-0.5770776867866516,
0.0554596707224845... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zloelias/kinopoisk-reviews | zloelias | 2021-12-06T18:02:51Z | 26 | 0 | null | [
"region:us"
] | 2021-12-06T18:02:51Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SocialGrep/the-antiwork-subreddit-dataset | SocialGrep | 2022-07-01T17:57:34Z | 26 | 1 | null | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T17:57:34Z | 2022-03-08T21:09:51.000Z | 2022-03-08T21:09:51 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-antiwork-subreddit-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-antiwork-subreddit-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
### Dataset Summary
This corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
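Since the `id` and `subreddit.id` fields are base-36 Reddit IDs, a small conversion helper can be handy when joining or sorting them numerically. This sketch is not part of the dataset; the helpers are hypothetical:

```python
# Hypothetical helpers: convert a base-36 Reddit ID (lowercase digits 0-9, a-z)
# to an integer and back.

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def b36_to_int(s):
    # Python's int() accepts bases up to 36 directly.
    return int(s, 36)

def int_to_b36(n):
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

print(b36_to_int("t3"))    # 1047
print(int_to_b36(1047))    # t3
```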
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | [
-0.584564208984375,
-0.6467908024787903,
0.29506516456604004,
0.1780901551246643,
-0.5440872311592102,
0.014668013900518417,
0.005992828402668238,
-0.3781138062477112,
0.9021268486976624,
0.4572889804840088,
-0.9393914341926575,
-0.9806064367294312,
-0.7091047763824463,
0.16488978266716003... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rakkaalhazimi/hotel-review | rakkaalhazimi | 2022-03-12T07:23:47Z | 26 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | 2022-03-12T07:23:47Z | 2022-03-12T05:52:57.000Z | 2022-03-12T05:52:57 | ---
license: gpl-3.0
---
# Hotel Reviews in Indonesia
### Dataset Summary
Data about reviews of hotels in Indonesia
### Languages
Indonesian
## Dataset Structure
### Data Fields
- review_id : unique identification code of each review
- review_text : the main text of the review
- category : label for each review, positive (1) or negative (0)
| [
-0.21426308155059814,
-0.4614986181259155,
-0.10056590288877487,
0.5922696590423584,
-0.5593602657318115,
-0.07543923705816269,
-0.1278788149356842,
-0.25586700439453125,
0.716924786567688,
1.0688737630844116,
-0.4047076106071472,
-1.0038291215896606,
-0.4506915509700775,
0.777728438377380... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IIC/lfqa_spanish | IIC | 2022-10-23T05:17:47Z | 26 | 3 | null | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:vblagoje/lfqa",
"source_datasets:vblagoje/lfqa_support_docs",
"language:es",
"region:us"
] | 2022-10-23T05:17:47Z | 2022-03-20T01:15:30.000Z | 2022-03-20T01:15:30 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- es
multilinguality:
- monolingual
pretty_name: LFQA
size_categories:
- 100K<n<1M
source_datasets:
- vblagoje/lfqa
- vblagoje/lfqa_support_docs
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
This is an automatically translated version of [vblagoje/lfqa](https://huggingface.co/datasets/vblagoje/lfqa), a dataset used for long form question answering training. The model used for translating the dataset is [marianMT english-spanish](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). | [
-0.29326385259628296,
-0.9520074725151062,
0.5031053423881531,
0.425340861082077,
-0.2781105637550354,
0.0919036865234375,
0.02213757112622261,
-0.6203569173812866,
0.4304697513580322,
0.9950453042984009,
-1.2399741411209106,
-0.3361803889274597,
-0.3938298225402832,
0.42798176407814026,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marksverdhei/wordnet-definitions-en-2021 | marksverdhei | 2022-04-04T21:55:03Z | 26 | 1 | null | [
"region:us"
] | 2022-04-04T21:55:03Z | 2022-04-02T19:02:14.000Z | 2022-04-02T19:02:14 | # Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
https://github.com/globalwordnet/english-wordnet
This dataset contains every entry in wordnet that has a definition and an example.
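Note that default CSV parsers treat the literal string "null" as a missing value; a minimal pandas sketch of loading such an entry safely (in-memory CSV, column names illustrative):

```python
import io

import pandas as pd

csv_data = 'word,definition\nnull,"having no legal force"\n'

# With default settings, pandas converts the headword "null" to NaN.
default_df = pd.read_csv(io.StringIO(csv_data))
assert pd.isna(default_df.loc[0, "word"])

# Disabling default NA detection keeps it as the string "null".
safe_df = pd.read_csv(io.StringIO(csv_data), keep_default_na=False)
assert safe_df.loc[0, "word"] == "null"
```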
Be aware that the headword "null" can be misinterpreted as a missing value when loading the data with e.g. pandas | [
-0.007798934355378151,
-0.32871389389038086,
0.09983710199594498,
0.1295006424188614,
-0.2071927785873413,
-0.2989712655544281,
0.047634851187467575,
-0.32303425669670105,
0.6703307628631592,
0.5006309151649475,
-0.6313707232475281,
-0.5390955805778503,
-0.6924099326133728,
0.4378818273544... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Calin/eurosat-demo | Calin | 2022-04-27T09:26:44Z | 26 | 0 | null | [
"region:us"
] | 2022-04-27T09:26:44Z | 2022-04-27T09:26:24.000Z | 2022-04-27T09:26:24 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wza/roc_stories | wza | 2022-05-03T06:19:34Z | 26 | 2 | null | [
"region:us"
] | 2022-05-03T06:19:34Z | 2022-05-03T02:15:53.000Z | 2022-05-03T02:15:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taesiri/GamePhysics_Grand_Theft_Auto_V | taesiri | 2022-05-26T06:00:19Z | 26 | 3 | null | [
"region:us"
] | 2022-05-26T06:00:19Z | 2022-05-26T05:43:59.000Z | 2022-05-26T05:43:59 | ---
annotations_creators:
- no-annotation
languages:
- en
---
# Dataset Card for GamePhysics_Grand_Theft_Auto_V
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://asgaardlab.github.io/CLIPxGamePhysics/
- **Repository:** https://github.com/asgaardlab/CLIPxGamePhysics
- **Paper:** CLIP meets GamePhysics
- **Leaderboard:** [N/A]
- **Point of Contact:** [Mohammad Reza Taesiri](mailto:mtaesiri@gmail.com)
### Dataset Summary
The GamePhysics Grand Theft Auto V dataset is a small video dataset of buggy gameplay videos from the Grand Theft Auto V game, collected from the [GamePhysics](https://www.reddit.com/r/GamePhysics/) subreddit
### Supported Tasks and Leaderboards
[N/A]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | [
-0.5969974994659424,
-0.651648759841919,
0.3847651183605194,
0.3283087909221649,
-0.2696179747581482,
0.27995017170906067,
-0.3049554228782654,
-0.45803478360176086,
0.5466722846031189,
0.5805314779281616,
-1.0586447715759277,
-1.0363757610321045,
-0.6340430974960327,
-0.029375968500971794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
silver/lccc | silver | 2022-11-06T04:51:16Z | 26 | 11 | null | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:zh",
"license:mit",
"dialogue-response-retrieval",
"arxiv:2008.03946",
"... | 2022-11-06T04:51:16Z | 2022-05-29T09:19:28.000Z | 2022-05-29T09:19:28 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
pretty_name: lccc
tags:
- dialogue-response-retrieval
---
# Dataset Card for lccc_large
## Table of Contents
- [Dataset Card for lccc_large](#dataset-card-for-lccc_large)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/thu-coai/CDial-GPT
- **Repository:** https://github.com/thu-coai/CDial-GPT
- **Paper:** https://arxiv.org/abs/2008.03946
### Dataset Summary
lccc: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data-cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noise such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations is filtered out.
lccc是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。
### Supported Tasks and Leaderboards
- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker model that can be used to implement a retrieval-based dialogue model.
### Languages
LCCC is in Chinese
LCCC中的对话是中文的
## Dataset Structure
### Data Instances
["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
### Data Fields
Each line is a list of utterances that constitute a dialogue.
Note that the LCCC dataset provided on our original GitHub page is in JSON format;
here, however, we provide LCCC in JSONL format.
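A minimal sketch of reading this JSONL format, where each line is a JSON-encoded list of utterances (the sample line is taken from the instance above):

```python
import json

# Each line of the JSONL release is one dialogue: a JSON list of utterances.
sample_line = (
    '["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", '
    '"哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", '
    '"不会 的 就是 好 油腻"]'
)

dialogue = json.loads(sample_line)
assert len(dialogue) == 3  # three utterances in this dialogue
```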
### Data Splits
We do not provide an official split for LCCC-large,
but we provide one for LCCC-base:
|train|valid|test|
|:---:|:---:|:---:|
|6,820,506 | 20,000 | 10,000|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Please cite the following paper if you find this dataset useful:
```bibtex
@inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
}
```
| [
-0.46431201696395874,
-0.7047985196113586,
0.08142229169607162,
0.08370145410299301,
-0.24084152281284332,
0.1365218162536621,
-0.512169599533081,
-0.29203280806541443,
0.29148223996162415,
0.7041053175926208,
-0.7247411608695984,
-0.9859583377838135,
-0.35391807556152344,
0.07420817017555... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gcaillaut/frwiki_el | gcaillaut | 2022-09-28T08:52:12Z | 26 | 1 | null | [
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:fr",
"license:wtfpl",
"region:us"
] | 2022-09-28T08:52:12Z | 2022-06-15T09:37:40.000Z | 2022-06-15T09:37:40 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- fr
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: French Wikipedia dataset for Entity Linking
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_el](https://github.com/GaaH/frwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
This dataset contains articles from the French Wikipedia.
It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities.
The dataset `frwiki` contains the sentences of each Wikipedia page.
The dataset `entities` contains a description of each Wikipedia page.
### Languages
- French
## Dataset Structure
### frwiki
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"sentences" : [
{
"text": "text of the current sentence",
"ner": ["list", "of", "ner", "labels"],
"mention_mappings": [
(start_of_first_mention, end_of_first_mention),
(start_of_second_mention, end_of_second_mention)
],
"el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"],
"el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention],
"el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"]
}
]
"words": ["words", "in", "the", "sentence"],
"ner": ["ner", "labels", "of", "each", "words"],
"el": ["el", "labels", "of", "each", "words"]
}
```
### entities
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"description": "Description of the entity"
}
``` | [
-0.7437639832496643,
-0.4461604058742523,
0.23889702558517456,
0.20060642063617706,
-0.2615305483341217,
-0.174396812915802,
-0.2592330574989319,
-0.28145527839660645,
0.59858238697052,
0.4776363968849182,
-0.743661642074585,
-0.8621829748153687,
-0.5148701071739197,
0.273997038602829,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pinecone/core-2020-05-10-deduplication | pinecone | 2022-10-28T03:01:02Z | 26 | 1 | null | [
"task_categories:other",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:unknown",
"language:en",
"lic... | 2022-10-28T03:01:02Z | 2022-06-18T15:43:43.000Z | 2022-06-18T15:43:43 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- unknown
task_categories:
- other
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
pretty_name: CORE Deduplication of Scholarly Documents
tags:
- deduplication
---
# Dataset Card for CORE Deduplication
## Dataset Description
- **Homepage:** [https://core.ac.uk/about/research-outputs](https://core.ac.uk/about/research-outputs)
- **Repository:** [https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip](https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip)
- **Paper:** [Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings](http://oro.open.ac.uk/id/eprint/70519)
- **Point of Contact:** [CORE Team](https://core.ac.uk/about#contact)
- **Size of downloaded dataset files:** 204 MB
### Dataset Summary
CORE 2020 Deduplication dataset (https://core.ac.uk/documentation/dataset) contains 100K scholarly documents labeled as duplicates/non-duplicates.
### Languages
The dataset language is English (BCP-47 `en`)
### Citation Information
```
@inproceedings{dedup2020,
title={Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings},
author={Gyawali, Bikash and Anastasiou, Lucas and Knoth, Petr},
booktitle = {Proceedings of 12th Language Resources and Evaluation Conference},
month = may,
year = 2020,
publisher = {France European Language Resources Association},
pages = {894-903}
}
```
| [
-0.2518557012081146,
-0.3902459144592285,
0.14067783951759338,
-0.019297625869512558,
-0.5832664966583252,
0.07889355719089508,
-0.02730402909219265,
-0.3977832496166229,
0.3444695472717285,
0.45921826362609863,
-0.30783528089523315,
-0.7236388921737671,
-0.7285126447677612,
0.345777302980... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/ca_zh_wikipedia | projecte-aina | 2023-01-09T07:56:07Z | 26 | 3 | null | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ca",
"language:zh",
"language:multilingual",
"license:cc-by-4.0",
"region:us"
] | 2023-01-09T07:56:07Z | 2022-06-27T09:03:00.000Z | 2022-06-27T09:03:00 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- ca
- zh
- multilingual
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: CA-ZH Wikipedia Parallel Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for CA-ZH Wikipedia datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [cescolano3@gmail.com](cescolano3@gmail.com)
### Dataset Summary
The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia; the latter has better overall quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to fine-tune a large-scale multilingual MT system such as m2m-100.
### Languages
The texts in the dataset are in Catalan and Chinese.
## Dataset Structure
### Data Instances
A typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows:
```
{ "ca": "1591è Batalló Separat d'Artilleria autorpopulsada", "zh": "第1591自走砲营" }
```
### Data Fields
- "ca": Text in Catalan.
- "zh": Text in Chinese.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
The Ca-Zh Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by automatic crawling, and a quality filter was applied to improve data quality. The original Chinese data was a mix of Traditional Chinese and Simplified Chinese, so a conversion to Simplified Chinese was applied to ensure uniformity.
#### Who are the source language producers?
All the texts in this dataset come from Wikipedia.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation for low-resource languages such as Catalan.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
Wikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Dataset Curators
Carlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Share Alike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@mastersthesis{MasterThesisChenuyeZhou,
author = "Chenuye Zhou",
title = "Building a Catalan-Chinese parallel corpus for use in MT",
school = "Universitat Pompeu Fabra",
year = 2022,
address = "Barcelona",
url = "https://repositori.upf.edu/handle/10230/54140"
}
@mastersthesis{MasterThesisZixuanLiu,
author = "Zixuan Liu",
title = "Improving Chinese-Catalan Machine Translation with Wikipedia Parallel",
school = "Universitat Pompeu Fabra",
year = 2022,
address = "Barcelona",
url= "https://repositori.upf.edu/handle/10230/54142"
}
```
| [
-0.39664018154144287,
-0.48191455006599426,
0.13826943933963776,
0.5049473643302917,
-0.31149569153785706,
-0.11335320770740509,
-0.5162635445594788,
-0.3480670154094696,
0.6158915162086487,
0.42759570479393005,
-0.5210592150688171,
-0.9249111413955688,
-0.5082552433013916,
0.2635720968246... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonaskoenig/reddit-blogspot-twitter | jonaskoenig | 2022-07-11T09:49:43Z | 26 | 0 | null | [
"region:us"
] | 2022-07-11T09:49:43Z | 2022-07-11T09:47:53.000Z | 2022-07-11T09:47:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alkzar90/croupier-mtg-dataset | alkzar90 | 2022-08-02T01:41:48Z | 26 | 2 | null | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"mgt",
"magic-card-game",
"creature-dataset",
"region:us"
] | 2022-08-02T01:41:48Z | 2022-07-28T21:18:49.000Z | 2022-07-28T21:18:49 | ---
annotations_creators:
- found
language: []
language_creators: []
license:
- apache-2.0
multilinguality: []
pretty_name: 'Croupier: a Magic the Gathering creatures dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- mgt
- magic-card-game
- creature-dataset
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
## Dataset Description
- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset
### Dataset Summary
A card-image dataset of four types of creatures from the Magic: The Gathering card game: elf, goblin, knight, and zombie.
## Dataset Creation
All card information from the Magic: The Gathering card game is publicly available from the
[Gatherer]( https://gatherer.wizards.com/Pages/) website, the official Magic card database. The dataset is
a subset covering four kinds of creatures from the game. | [
-0.5480072498321533,
-0.4184178411960602,
-0.17954987287521362,
-0.18092527985572815,
-0.351296991109848,
0.17061908543109894,
-0.000593166274484247,
-0.5573004484176636,
0.5318848490715027,
0.8592433333396912,
-0.54218989610672,
-0.7494009733200073,
-0.679355263710022,
0.09919458627700806... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sepidmnorozy/Korean_sentiment | sepidmnorozy | 2022-08-16T09:25:48Z | 26 | 1 | null | [
"region:us"
] | 2022-08-16T09:25:48Z | 2022-08-16T09:25:01.000Z | 2022-08-16T09:25:01 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sachinkelenjaguri/Resume_dataset | Sachinkelenjaguri | 2022-10-06T12:04:31Z | 26 | 4 | null | [
"region:us"
] | 2022-10-06T12:04:31Z | 2022-10-06T12:03:49.000Z | 2022-10-06T12:03:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RKoops/celeb-identities | RKoops | 2022-10-07T13:37:54Z | 26 | 0 | null | [
"region:us"
] | 2022-10-07T13:37:54Z | 2022-10-07T13:37:44.000Z | 2022-10-07T13:37:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dougtrajano/olid-br | dougtrajano | 2023-07-13T12:45:43Z | 26 | 2 | null | [
"language:pt",
"license:cc-by-4.0",
"region:us"
] | 2023-07-13T12:45:43Z | 2022-10-08T02:38:32.000Z | 2022-10-08T02:38:32 | ---
language: pt
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: is_offensive
dtype: string
- name: is_targeted
dtype: string
- name: targeted_type
dtype: string
- name: toxic_spans
sequence: int64
- name: health
dtype: bool
- name: ideology
dtype: bool
- name: insult
dtype: bool
- name: lgbtqphobia
dtype: bool
- name: other_lifestyle
dtype: bool
- name: physical_aspects
dtype: bool
- name: profanity_obscene
dtype: bool
- name: racism
dtype: bool
- name: religious_intolerance
dtype: bool
- name: sexism
dtype: bool
- name: xenophobia
dtype: bool
splits:
- name: train
num_bytes: 1763684
num_examples: 5214
- name: test
num_bytes: 590953
num_examples: 1738
download_size: 1011742
dataset_size: 2354637
---
# OLID-BR
Offensive Language Identification Dataset for Brazilian Portuguese (OLID-BR) is a dataset with multi-task annotations for the detection of offensive language.
The current version (v1.0) contains **7,943** (extendable to 13,538) comments from different sources, including social media (YouTube and Twitter) and related datasets.
OLID-BR contains a collection of annotated sentences in Brazilian Portuguese using an annotation model that encompasses the following levels:
- [Offensive content detection](#offensive-content-detection): Detect offensive content in sentences and categorize it.
- [Offense target identification](#offense-target-identification): Detect if an offensive sentence is targeted to a person or group of people.
- [Offensive spans identification](#offensive-spans-identification): Detect curse words in sentences.

## Categorization
### Offensive Content Detection
This level is used to detect offensive content in the sentence.
**Is this text offensive?**
We use the [Perspective API](https://www.perspectiveapi.com/) to detect if the sentence contains offensive content with double-checking by our [qualified annotators](annotation/index.en.md#who-are-qualified-annotators).
- `OFF` Offensive: Inappropriate language, insults, or threats.
- `NOT` Not offensive: No offense or profanity.
**Which kind of offense does it contain?**
The following labels were tagged by our annotators:
`Health`, `Ideology`, `Insult`, `LGBTQphobia`, `Other-Lifestyle`, `Physical Aspects`, `Profanity/Obscene`, `Racism`, `Religious Intolerance`, `Sexism`, and `Xenophobia`.
See the [**Glossary**](glossary.en.md) for further information.
### Offense Target Identification
This level is used to detect if an offensive sentence is targeted to a person or group of people.
**Is the offensive text targeted?**
- `TIN` Targeted Insult: Targeted insult or threat towards an individual, a group or other.
- `UNT` Untargeted: Non-targeted profanity and swearing.
**What is the target of the offense?**
- `IND` The offense targets an individual, often defined as “cyberbullying”.
- `GRP` The offense targets a group of people based on ethnicity, gender, sexual orientation, or another shared characteristic.
- `OTH` The target can belong to other categories, such as an organization, an event, an issue, etc.
### Offensive Spans Identification
As toxic spans, we define a sequence of words that contribute to the text's toxicity.
For example, let's consider the following text:
> "USER `Canalha` URL"
The toxic spans are:
```python
[5, 6, 7, 8, 9, 10, 11, 12, 13]
```
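The character offsets can be mapped back to the text with ordinary indexing; a minimal sketch (the text and offsets below are a simplified illustration, not the exact published record):

```python
# Recover the toxic span from character offsets (illustrative values:
# the offsets below are computed for this simplified string, not taken
# verbatim from the dataset record).
text = "USER Canalha URL"
toxic_spans = [5, 6, 7, 8, 9, 10, 11]  # indices of the characters "Canalha"

span_text = "".join(text[i] for i in toxic_spans)
assert span_text == "Canalha"
```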
## Dataset Structure
### Data Instances
Each instance is a social media comment with a corresponding ID and annotations for all the tasks described below.
### Data Fields
The simplified configuration includes:
- `id` (string): Unique identifier of the instance.
- `text` (string): The text of the instance.
- `is_offensive` (string): Whether the text is offensive (`OFF`) or not (`NOT`).
- `is_targeted` (string): Whether the text is targeted (`TIN`) or untargeted (`UNT`).
- `targeted_type` (string): Type of the target (individual `IND`, group `GRP`, or other `OTH`). Only available if `is_targeted` is `TIN`.
- `toxic_spans` (string): List of toxic spans.
- `health` (boolean): Whether the text contains hate speech based on health conditions such as disability, disease, etc.
- `ideology` (boolean): Indicates if the text contains hate speech based on a person's ideas or beliefs.
- `insult` (boolean): Whether the text contains insult, inflammatory, or provocative content.
- `lgbtqphobia` (boolean): Whether the text contains harmful content related to gender identity or sexual orientation.
- `other_lifestyle` (boolean): Whether the text contains hate speech related to life habits (e.g. veganism, vegetarianism, etc.).
- `physical_aspects` (boolean): Whether the text contains hate speech related to physical appearance.
- `profanity_obscene` (boolean): Whether the text contains profanity or obscene content.
- `racism` (boolean): Whether the text contains prejudiced thoughts or discriminatory actions based on differences in race/ethnicity.
- `religious_intolerance` (boolean): Whether the text contains religious intolerance.
- `sexism` (boolean): Whether the text contains discriminatory content based on differences in sex/gender (e.g. sexism, misogyny, etc.).
- `xenophobia` (boolean): Whether the text contains hate speech against foreigners.
See the [**Get Started**](get-started.en.md) page for more information.
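The boolean category fields can be collapsed into a per-record label list. A minimal sketch (the record below is hypothetical and only illustrates the schema, not actual data):

```python
# Fine-grained offense categories, as listed in the data fields above.
CATEGORY_FIELDS = [
    "health", "ideology", "insult", "lgbtqphobia", "other_lifestyle",
    "physical_aspects", "profanity_obscene", "racism",
    "religious_intolerance", "sexism", "xenophobia",
]

def active_categories(record: dict) -> list[str]:
    """Return the names of the offense categories flagged in a record."""
    return [name for name in CATEGORY_FIELDS if record.get(name)]

# Hypothetical record following the schema above.
record = {"id": "0", "text": "...", "is_offensive": "OFF",
          "is_targeted": "TIN", "targeted_type": "IND",
          "insult": True, "profanity_obscene": True, "racism": False}
print(active_categories(record))  # ['insult', 'profanity_obscene']
```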
## Considerations for Using the Data
### Social Impact of Dataset
Toxicity detection is a worthwhile problem that can help ensure a safer online environment for everyone.
However, toxicity detection algorithms have focused on English and do not consider the specificities of other languages.
This is a problem because the toxicity of a comment can differ across languages.
Additionally, most toxicity detection algorithms focus on the binary classification of a comment as toxic or not toxic.
Therefore, we believe that the OLID-BR dataset can help to improve the performance of toxicity detection algorithms in Brazilian Portuguese.
### Discussion of Biases
We are aware that the dataset contains biases and is not representative of global diversity.
We are also aware that the language used in the dataset may not represent the language used in different contexts.
Potential biases in the data include: inherent biases of social media and its user base, the offensive/vulgar word lists used for data filtering, and inherent or unconscious bias in the assessment of offensive identity labels.
All these likely affect labeling, precision, and recall for a trained model.
## Citation
Pending | [
-0.25924596190452576,
-1.0768555402755737,
0.13067294657230377,
0.42033931612968445,
-0.2123401015996933,
-0.10273344814777374,
-0.22192373871803284,
-0.5771063566207886,
0.1337018460035324,
0.538989782333374,
-0.38699594140052795,
-0.9712585806846619,
-0.7012868523597717,
0.26548442244529... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
capofwesh20/celeb-identities | capofwesh20 | 2022-10-12T21:55:30Z | 26 | 0 | null | [
"region:us"
] | 2022-10-12T21:55:30Z | 2022-10-12T18:59:32.000Z | 2022-10-12T18:59:32 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dennlinger/wiki-paragraphs | dennlinger | 2022-10-13T22:12:37Z | 26 | 0 | null | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
... | 2022-10-13T22:12:37Z | 2022-10-13T15:15:55.000Z | 2022-10-13T15:15:55 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wiki-paragraphs
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- wikipedia
- self-similarity
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for `wiki-paragraphs`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/dennlinger/TopicalChange
- **Paper:** https://arxiv.org/abs/2012.03619
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](aumiller@informatik.uni-heidelberg.de)
### Dataset Summary
The wiki-paragraphs dataset is constructed by automatically sampling two paragraphs from a Wikipedia article. If they come from the same section, they are considered a "semantic match", otherwise "dissimilar". Dissimilar paragraphs can in principle also be sampled from other documents, but this did not yield any improvement in the particular evaluation of the linked work.
The alignment is in no way meant as an accurate depiction of similarity, but it makes it possible to quickly mine large amounts of samples.
### Supported Tasks and Leaderboards
The dataset can be used for "same-section classification", which is a binary classification task (either two sentences/paragraphs belong to the same section or not).
This can be combined with document-level coherency measures, where we can check how many misclassifications appear within a single document.
Please refer to [our paper](https://arxiv.org/abs/2012.03619) for more details.
### Languages
The data was extracted from English Wikipedia, therefore predominantly in English.
## Dataset Structure
### Data Instances
A single instance contains three attributes:
```
{
"sentence1": "<Sentence from the first paragraph>",
"sentence2": "<Sentence from the second paragraph>",
"label": 0/1 # 1 indicates two belong to the same section
}
```
### Data Fields
- sentence1: String containing the first paragraph
- sentence2: String containing the second paragraph
- label: Integer, either 0 or 1. Indicates whether two paragraphs belong to the same section (1) or come from different sections (0)
### Data Splits
We provide train, validation and test splits, which were split as 80/10/10 from a randomly shuffled original data source.
In total, we provide 25375583 training pairs, as well as 3163685 validation and test instances, respectively.
## Dataset Creation
### Curation Rationale
The original idea was applied to self-segmentation of Terms of Service documents. Given that these are of domain-specific nature, we wanted to provide a more generally applicable model trained on Wikipedia data.
It is meant as a cheap-to-acquire pre-training strategy for large-scale experimentation with semantic similarity for long texts (paragraph-level).
Based on our experiments, it is not necessarily sufficient by itself to replace traditional hand-labeled semantic similarity datasets.
### Source Data
#### Initial Data Collection and Normalization
The data was collected based on the articles considered in the Wiki-727k dataset by Koshorek et al. The dump of their dataset can be found through the [respective Github repository](https://github.com/koomri/text-segmentation). Note that we did *not* use the pre-processed data, but rather only information on the considered articles, which were re-acquired from Wikipedia at a more recent state.
This is due to the fact that paragraph information was not retained by the original Wiki-727k authors.
We did not verify the particular focus of considered pages.
#### Who are the source language producers?
We do not have any further information on the contributors; these are volunteers contributing to en.wikipedia.org.
### Annotations
#### Annotation process
No manual annotation was added to the dataset.
We automatically sampled pairs of paragraphs from within the same article; if both paragraphs belong to the same section, the pair was assigned a label indicating "similarity" (1), otherwise a label indicating that the paragraphs do not belong to the same section (0).
We sample three positive and three negative samples per section, per article.
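The sampling strategy above can be sketched as follows (assuming an article is represented as a list of sections, each a list of paragraphs; this is an illustration, not the original mining code):

```python
import random

def sample_pairs(sections: list[list[str]], per_section: int = 3, seed: int = 0):
    """Sample same-section (label 1) and cross-section (label 0) paragraph pairs."""
    rng = random.Random(seed)
    pairs = []
    for i, section in enumerate(sections):
        # All paragraphs from other sections of the same article.
        others = [p for j, s in enumerate(sections) if j != i for p in s]
        for _ in range(per_section):
            if len(section) >= 2:
                s1, s2 = rng.sample(section, 2)
                pairs.append({"sentence1": s1, "sentence2": s2, "label": 1})
            if others:
                pairs.append({"sentence1": rng.choice(section),
                              "sentence2": rng.choice(others), "label": 0})
    return pairs

article = [["a1", "a2", "a3"], ["b1", "b2"]]
print(len(sample_pairs(article)))  # 12 — 3 positive + 3 negative pairs per section
```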
#### Who are the annotators?
No annotators were involved in the process.
### Personal and Sensitive Information
We did not modify the original Wikipedia text in any way. Given that personal information, such as dates of birth (e.g., for a person of interest) may be on Wikipedia, this information is also considered in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of the dataset is to serve as a *pre-training addition* for semantic similarity learning.
Systems building on this dataset should consider additional, manually annotated data, before using a system in production.
### Discussion of Biases
To our knowledge, some works indicate that men are several times more likely to have a Wikipedia page created about them (especially in historical contexts). Therefore, a slight bias towards over-representation might remain in this dataset.
### Other Known Limitations
As previously stated, the automatically extracted semantic similarity is not perfect; it should be treated as such.
## Additional Information
### Dataset Curators
The dataset was originally developed as a practical project by Lucienne-Sophie Marmé under the supervision of Dennis Aumiller.
Contributions to the original sampling strategy were made by Satya Almasian and Michael Gertz.
### Licensing Information
Wikipedia data is available under the CC-BY-SA 3.0 license.
### Citation Information
```
@inproceedings{DBLP:conf/icail/AumillerAL021,
author = {Dennis Aumiller and
Satya Almasian and
Sebastian Lackner and
Michael Gertz},
editor = {Juliano Maranh{\~{a}}o and
Adam Zachary Wyner},
title = {Structural text segmentation of legal documents},
booktitle = {{ICAIL} '21: Eighteenth International Conference for Artificial Intelligence
and Law, S{\~{a}}o Paulo Brazil, June 21 - 25, 2021},
pages = {2--11},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3462757.3466085},
doi = {10.1145/3462757.3466085}
}
``` | [
-0.6054338812828064,
-0.671773374080658,
0.3075040280818939,
0.042794808745384216,
-0.39243021607398987,
-0.2612384557723999,
-0.21857981383800507,
-0.4403746724128723,
0.3951050341129303,
0.3709324300289154,
-0.7558982372283936,
-0.8640363216400146,
-0.5492095351219177,
0.4633474051952362... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
darrow-ai/USClassActions | darrow-ai | 2022-12-09T12:18:13Z | 26 | 0 | null | [
"license:gpl-3.0",
"arxiv:2211.00582",
"region:us"
] | 2022-12-09T12:18:13Z | 2022-10-24T12:00:55.000Z | 2022-10-24T12:00:55 | ---
license: gpl-3.0
---
## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai),[Gil Semo](mailto:gil.semo@darrow.ai)
#### More Details & Collaborations
Feel free to contact us in order to get a larger dataset.
We would be happy to collaborate on future works.
### Dataset Summary
USClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActions')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`target_text`: (**str**) the complaint text \
`verdict`: (**str**) the outcome of the case \
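Since the outcome is binarized as Win/Lose, the string verdict can be mapped onto an integer label for classification. A minimal sketch (the exact verdict strings are assumed here; verify them against the actual data):

```python
# Assumed verdict strings, based on the binarized Win/Lose outcome described above.
VERDICT_TO_LABEL = {"Win": 1, "Lose": 0}

def encode_verdict(example: dict) -> dict:
    """Map the string verdict onto an integer label for classification."""
    example["label"] = VERDICT_TO_LABEL[example["verdict"]]
    return example

# e.g. with a Hugging Face dataset: dataset = dataset.map(encode_verdict)
print(encode_verdict({"verdict": "Win"})["label"])  # 1
```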
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{Darrow-Niklaus-2022,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi, EMNLP2022},
}
```
| [
0.01317200530320406,
-0.4292580485343933,
0.14463789761066437,
0.07444864511489868,
-0.2541236877441406,
-0.03202320635318756,
-0.0001256403629668057,
-0.5121200084686279,
-0.19861741364002228,
0.6259384155273438,
-0.08485545217990875,
-0.7853333353996277,
-0.8562068343162537,
0.0174784474... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_lexical | liuyanchen1015 | 2022-11-28T22:31:19Z | 26 | 0 | null | [
"region:us"
] | 2022-11-28T22:31:19Z | 2022-11-28T22:30:54.000Z | 2022-11-28T22:30:54 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 69129827
num_examples: 331784
- name: dev_matched
num_bytes: 1720780
num_examples: 8340
- name: dev_mismatched
num_bytes: 1845954
num_examples: 8603
- name: test_matched
num_bytes: 1727232
num_examples: 8345
- name: test_mismatched
num_bytes: 1840163
num_examples: 8585
download_size: 51850969
dataset_size: 76263956
---
# Dataset Card for "VALUE2_mnli_lexical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.378539115190506,
-0.18570829927921295,
0.01401813980191946,
0.06912127882242203,
-0.22880221903324127,
-0.18141834437847137,
0.07929562032222748,
-0.23512542247772217,
0.8687112331390381,
0.385120153427124,
-0.6375741362571716,
-0.617594301700592,
-0.7276691198348999,
-0.260981410741806... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nzh324/twinkle | nzh324 | 2022-11-29T08:56:16Z | 26 | 0 | null | [
"license:mit",
"region:us"
] | 2022-11-29T08:56:16Z | 2022-11-29T08:55:30.000Z | 2022-11-29T08:55:30 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-cd279959-d310-4487-bd83-52389ad5ed20-107105 | autoevaluate | 2022-11-29T09:32:38Z | 26 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T09:32:38Z | 2022-11-29T09:32:01.000Z | 2022-11-29T09:32:01 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361600816249847,
-0.33383119106292725,
0.2989136278629303,
0.17618101835250854,
-0.16354264318943024,
0.036154817789793015,
0.02089543454349041,
-0.39217692613601685,
0.12184587121009827,
0.3618120849132538,
-0.9186381101608276,
-0.21669895946979523,
-0.770520806312561,
-0.01348811481... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_itquad | lmqg | 2022-12-18T08:21:31Z | 26 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_itquad",
"language:it",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T08:21:31Z | 2022-12-18T08:05:18.000Z | 2022-12-18T08:05:18 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: it
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_itquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_itquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on ITQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
Italian (it)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""4 Minuti" è uscito come primo singolo dell' album e ha raggiunto il terzo posto sulla Billboard Hot 100. E' stato il 37° top-ten di Madonna che ha spinto Madonna oltre Elvis Presley come l' artista con i più top-ten hit. Nel Regno Unito ha mantenuto il suo record per il più numero uno single per una artista femminile;"4 Minuti" diventando il suo tredicesimo. Al 23° Japan Gold Disc Awards, Madonna ha ricevuto il suo quinto trofeo Artista dell' anno dalla Recording Industry Association of Japan, la più importante per qualsiasi artista. Per promuovere ulteriormente l' album, Madonna ha intrapreso il Sticky & Sweet Tour, la sua prima grande avventura con Live Nation. Con un lordo di 280 milioni di dollari, è diventato il tour più incassato di un artista solista, superando il precedente record di Madonna stabilito con il Confessions Tour; è stato poi superato da The Wall Live di Roger Waters. E' stato esteso al prossimo anno, aggiungendo nuove date europee, e dopo la fine, il totale lordo totale era di 408 milioni di dollari.",
"questions": [ "Qual è il nome del primo tour con Live Nation?", "4 minuti è diventato Madonna's che numero uno nel Regno Unito?", "Quanto ha incassato Stick e Sweet Tour?", "Madonna ha superato l' artista con i più alti dieci colpi?" ],
"answers": [ "Sticky & Sweet Tour", "tredicesimo", "280 milioni di dollari,", "Elvis Presley" ],
"questions_answers": "question: Qual è il nome del primo tour con Live Nation?, answer: Sticky & Sweet Tour | question: 4 minuti è diventato Madonna's che numero uno nel Regno Unito?, answer: tredicesimo | question: Quanto ha incassato Stick e Sweet Tour?, answer: 280 milioni di dollari, | question: Madonna ha superato l' artista con i più alti dieci colpi?, answer: Elvis Presley"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
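The flattened `questions_answers` string can be split back into question–answer pairs with a small helper (a sketch based on the delimiter format shown in the example above; it assumes questions themselves never contain the `", answer: "` delimiter):

```python
def parse_questions_answers(flat: str) -> list[tuple[str, str]]:
    """Split 'question: Q, answer: A | question: ...' into (question, answer) pairs."""
    pairs = []
    for chunk in flat.split(" | "):
        question, _, answer = chunk.partition(", answer: ")
        pairs.append((question.removeprefix("question: "), answer))
    return pairs

flat = "question: Qual è il nome del primo tour con Live Nation?, answer: Sticky & Sweet Tour"
print(parse_questions_answers(flat))
# [('Qual è il nome del primo tour con Live Nation?', 'Sticky & Sweet Tour')]
```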
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|16918 | 6280 | 1988|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.5619582533836365,
-0.9298192262649536,
0.18552306294441223,
0.09785419702529907,
-0.2772495448589325,
-0.06926624476909637,
-0.20128224790096283,
-0.3030059337615967,
0.40273317694664,
0.46237459778785706,
-0.7836589217185974,
-0.6427546739578247,
-0.26607030630111694,
0.116085089743137... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lunibo/autotrain-data-csgo_maps | Lunibo | 2022-12-20T17:47:15Z | 26 | 0 | null | [
"region:us"
] | 2022-12-20T17:47:15Z | 2022-12-20T17:42:34.000Z | 2022-12-20T17:42:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NeelNanda/wiki-10k | NeelNanda | 2022-12-27T00:22:23Z | 26 | 0 | null | [
"region:us"
] | 2022-12-27T00:22:23Z | 2022-12-27T00:22:16.000Z | 2022-12-27T00:22:16 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 222757944
num_examples: 10000
download_size: 129077566
dataset_size: 222757944
---
# Dataset Card for "wiki-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7414036989212036,
-0.14263318479061127,
0.17501643300056458,
0.26754021644592285,
-0.24700801074504852,
-0.15758246183395386,
0.14137932658195496,
-0.2895405888557434,
0.9126275777816772,
0.49353715777397156,
-0.8117547631263733,
-0.6059450507164001,
-0.6725695133209229,
0.0743754431605... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TREC-AToMiC/AToMiC-Images-v0.2 | TREC-AToMiC | 2023-02-14T21:29:39Z | 26 | 1 | null | [
"size_categories:100M<n<1B",
"license:cc-by-sa-4.0",
"arxiv:2103.01913",
"region:us"
] | 2023-02-14T21:29:39Z | 2023-01-14T08:12:44.000Z | 2023-01-14T08:12:44 | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: image_id
dtype: string
- name: language
sequence: string
- name: caption_reference_description
sequence: string
- name: caption_alt_text_description
sequence: string
- name: caption_attribution_description
sequence: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 180043531167.75
num_examples: 11019202
download_size: 174258428914
dataset_size: 180043531167.75
license: cc-by-sa-4.0
size_categories:
- 100M<n<1B
---
# Dataset Card for "AToMiC-All-Images_wi-pixels"
## Dataset Description
- **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/)
- **Source:** [WIT](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
### Languages
The dataset contains 108 languages in Wikipedia.
### Data Instances
Each instance is an image, its representation in bytes, and its associated captions.
### Intended Usage
1. Image collection for Text-to-Image retrieval
2. Image--Caption Retrieval/Generation/Translation
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
TBA
### Acknowledgement
Thanks to:
[img2dataset](https://github.com/rom1504/img2dataset)
[Datasets](https://github.com/huggingface/datasets)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.641778826713562,
-0.614959716796875,
0.3568495213985443,
-0.15540657937526703,
-0.3040218949317932,
-0.15984705090522766,
-0.29240986704826355,
-0.35209351778030396,
0.39752644300460815,
0.12564250826835632,
-0.6989575624465942,
-0.814477264881134,
-0.43261101841926575,
0.19477017223834... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/knowledge_net | DFKI-SLT | 2023-01-19T09:16:32Z | 26 | 2 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:entity-linking-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"knowledgenet",
"region:us"
] | 2023-01-19T09:16:32Z | 2023-01-19T09:15:44.000Z | 2023-01-19T09:15:44 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: KnowledgeNet is a dataset for automatically populating a knowledge base
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- knowledgenet
task_categories:
- text-classification
task_ids:
- multi-class-classification
- entity-linking-classification
dataset_info:
- config_name: knet
features:
- name: fold
dtype: int32
- name: documentId
dtype: string
- name: source
dtype: string
- name: documentText
dtype: string
- name: passages
sequence:
- name: passageId
dtype: string
- name: passageStart
dtype: int32
- name: passageEnd
dtype: int32
- name: passageText
dtype: string
- name: exhaustivelyAnnotatedProperties
sequence:
- name: propertyId
dtype: string
- name: propertyName
dtype: string
- name: propertyDescription
dtype: string
- name: facts
sequence:
- name: factId
dtype: string
- name: propertyId
dtype: string
- name: humanReadable
dtype: string
- name: annotatedPassage
dtype: string
- name: subjectStart
dtype: int32
- name: subjectEnd
dtype: int32
- name: subjectText
dtype: string
- name: subjectUri
dtype: string
- name: objectStart
dtype: int32
- name: objectEnd
dtype: int32
- name: objectText
dtype: string
- name: objectUri
dtype: string
splits:
- name: train
num_bytes: 10161415
num_examples: 3977
download_size: 14119313
dataset_size: 10161415
- config_name: knet_tokenized
features:
- name: doc_id
dtype: string
- name: passage_id
dtype: string
- name: fact_id
dtype: string
- name: tokens
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: subj_uri
dtype: string
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: obj_uri
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NO_RELATION
'1': DATE_OF_BIRTH
'2': DATE_OF_DEATH
'3': PLACE_OF_RESIDENCE
'4': PLACE_OF_BIRTH
'5': NATIONALITY
'6': EMPLOYEE_OR_MEMBER_OF
'7': EDUCATED_AT
'8': POLITICAL_AFFILIATION
'9': CHILD_OF
'10': SPOUSE
'11': DATE_FOUNDED
'12': HEADQUARTERS
'13': SUBSIDIARY_OF
'14': FOUNDED_BY
'15': CEO
splits:
- name: train
num_bytes: 4511963
num_examples: 10895
download_size: 14119313
dataset_size: 4511963
- config_name: knet_re
features:
- name: documentId
dtype: string
- name: passageId
dtype: string
- name: factId
dtype: string
- name: passageText
dtype: string
- name: humanReadable
dtype: string
- name: annotatedPassage
dtype: string
- name: subjectStart
dtype: int32
- name: subjectEnd
dtype: int32
- name: subjectText
dtype: string
- name: subjectType
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: subjectUri
dtype: string
- name: objectStart
dtype: int32
- name: objectEnd
dtype: int32
- name: objectText
dtype: string
- name: objectType
dtype:
class_label:
names:
'0': O
'1': PER
'2': ORG
'3': LOC
'4': DATE
- name: objectUri
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NO_RELATION
'1': DATE_OF_BIRTH
'2': DATE_OF_DEATH
'3': PLACE_OF_RESIDENCE
'4': PLACE_OF_BIRTH
'5': NATIONALITY
'6': EMPLOYEE_OR_MEMBER_OF
'7': EDUCATED_AT
'8': POLITICAL_AFFILIATION
'9': CHILD_OF
'10': SPOUSE
'11': DATE_FOUNDED
'12': HEADQUARTERS
'13': SUBSIDIARY_OF
'14': FOUNDED_BY
'15': CEO
splits:
- name: train
num_bytes: 6098219
num_examples: 10895
download_size: 14119313
dataset_size: 6098219
---
# Dataset Card for "KnowledgeNet"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [knowledge-net](https://github.com/diffbot/knowledge-net)
- **Paper:** [KnowledgeNet: A Benchmark Dataset for Knowledge Base Population](https://aclanthology.org/D19-1069/)
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 6.1 MB
### Dataset Summary
KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts
expressed in natural language text on the web. KnowledgeNet provides text exhaustively annotated with facts, thus
enabling the holistic end-to-end evaluation of knowledge base population systems as a whole, unlike previous benchmarks
that are more suitable for the evaluation of individual subcomponents (e.g., entity linking, relation extraction).
For instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage:
"Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn,
in Moravia, and lived about 1756..."
For a description of the dataset and baseline systems, please refer to their
[EMNLP paper](https://github.com/diffbot/knowledge-net/blob/master/knowledgenet-emnlp-cameraready.pdf).
Note: This dataset reader currently only supports the `train` split and does not contain negative examples.
In addition to the original format, this repository also provides two versions (`knet_re`, `knet_tokenized`) that are
easier to use for simple relation extraction. You can load them with
`datasets.load_dataset("DFKI-SLT/knowledge_net", name="<config>")`.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### knet
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 10.16 MB
An example of 'train' looks as follows:
```json
{
"fold": 2,
"documentId": "8313",
"source": "DBpedia Abstract",
"documentText": "Gennaro Basile\n\nGennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn, in Moravia, and lived about 1756. His best picture is the altar-piece in the chapel of the chateau at Seeberg, in Salzburg. Most of his works remained in Moravia.",
"passages": [
{
"passageId": "8313:16:114",
"passageStart": 16,
"passageEnd": 114,
"passageText": "Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries.",
"exhaustivelyAnnotatedProperties": [
{
"propertyId": "12",
"propertyName": "PLACE_OF_BIRTH",
"propertyDescription": "Describes the relationship between a person and the location where she/he was born."
}
],
"facts": [
{
"factId": "8313:16:30:63:69:12",
"propertyId": "12",
"humanReadable": "<Gennaro Basile> <PLACE_OF_BIRTH> <Naples>",
"annotatedPassage": "<Gennaro Basile> was an Italian painter, born in <Naples> but active in the German-speaking countries.",
"subjectStart": 16,
"subjectEnd": 30,
"subjectText": "Gennaro Basile",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 63,
"objectEnd": 69,
"objectText": "Naples",
"objectUri": "http://www.wikidata.org/entity/Q2634"
}
]
},
{
"passageId": "8313:115:169",
"passageStart": 115,
"passageEnd": 169,
"passageText": "He settled at Brünn, in Moravia, and lived about 1756.",
"exhaustivelyAnnotatedProperties": [
{
"propertyId": "11",
"propertyName": "PLACE_OF_RESIDENCE",
"propertyDescription": "Describes the relationship between a person and the location where she/he lives/lived."
},
{
"propertyId": "12",
"propertyName": "PLACE_OF_BIRTH",
"propertyDescription": "Describes the relationship between a person and the location where she/he was born."
}
],
"facts": [
{
"factId": "8313:115:117:129:134:11",
"propertyId": "11",
"humanReadable": "<He> <PLACE_OF_RESIDENCE> <Brünn>",
"annotatedPassage": "<He> settled at <Brünn>, in Moravia, and lived about 1756.",
"subjectStart": 115,
"subjectEnd": 117,
"subjectText": "He",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 129,
"objectEnd": 134,
"objectText": "Brünn",
"objectUri": "http://www.wikidata.org/entity/Q14960"
},
{
"factId": "8313:115:117:139:146:11",
"propertyId": "11",
"humanReadable": "<He> <PLACE_OF_RESIDENCE> <Moravia>",
"annotatedPassage": "<He> settled at Brünn, in <Moravia>, and lived about 1756.",
"subjectStart": 115,
"subjectEnd": 117,
"subjectText": "He",
"subjectUri": "http://www.wikidata.org/entity/Q19517888",
"objectStart": 139,
"objectEnd": 146,
"objectText": "Moravia",
"objectUri": "http://www.wikidata.org/entity/Q43266"
}
]
}
]
}
```
#### knet_re
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 6.1 MB
An example of 'train' looks as follows:
```json
{
"documentId": "7",
"passageId": "7:23:206",
"factId": "7:23:44:138:160:1",
"passageText": "Tata Chemicals Europe (formerly Brunner Mond (UK) Limited) is a UK-based chemicals company that is a subsidiary of Tata Chemicals Limited, itself a part of the India-based Tata Group.",
"humanReadable": "<Tata Chemicals Europe> <SUBSIDIARY_OF> <Tata Chemicals Limited>",
"annotatedPassage": "<Tata Chemicals Europe> (formerly Brunner Mond (UK) Limited) is a UK-based chemicals company that is a subsidiary of <Tata Chemicals Limited>, itself a part of the India-based Tata Group.",
"subjectStart": 0,
"subjectEnd": 21,
"subjectText": "Tata Chemicals Europe",
"subjectType": 2,
"subjectUri": "",
"objectStart": 115,
"objectEnd": 137,
"objectText": "Tata Chemicals Limited",
"objectType": 2,
"objectUri": "http://www.wikidata.org/entity/Q2331365",
"relation": 13
}
```
#### knet_tokenized
- **Size of downloaded dataset files:** 12.59 MB
- **Size of the generated dataset:** 4.5 MB
An example of 'train' looks as follows:
```json
{
"doc_id": "7",
"passage_id": "7:23:206",
"fact_id": "7:162:168:183:205:1",
"tokens": ["Tata", "Chemicals", "Europe", "(", "formerly", "Brunner", "Mond", "(", "UK", ")", "Limited", ")", "is", "a", "UK", "-", "based", "chemicals", "company", "that", "is", "a", "subsidiary", "of", "Tata", "Chemicals", "Limited", ",", "itself", "a", "part", "of", "the", "India", "-", "based", "Tata", "Group", "."],
"subj_start": 28,
"subj_end": 29,
"subj_type": 2,
"subj_uri": "http://www.wikidata.org/entity/Q2331365",
"obj_start": 33,
"obj_end": 38,
"obj_type": 2,
"obj_uri": "http://www.wikidata.org/entity/Q331715",
"relation": 13
}
```
### Data Fields
#### knet
- `fold`: the fold, an `int` feature.
- `documentId`: the document id, a `string` feature.
- `source`: the source, a `string` feature.
- `documentText`: the document text, a `string` feature.
- `passages`: the list of passages, a `list` of `dict`.
  - `passageId`: the passage id, a `string` feature.
  - `passageStart`: the passage start, an `int` feature.
  - `passageEnd`: the passage end, an `int` feature.
  - `passageText`: the passage text, a `string` feature.
  - `exhaustivelyAnnotatedProperties`: the list of exhaustively annotated properties, a `list` of `dict`.
    - `propertyId`: the property id, a `string` feature.
    - `propertyName`: the property name, a `string` feature.
    - `propertyDescription`: the property description, a `string` feature.
  - `facts`: the list of facts, a `list` of `dict`.
    - `factId`: the fact id, a `string` feature.
    - `propertyId`: the property id, a `string` feature.
    - `humanReadable`: the human-readable annotation, a `string` feature.
    - `annotatedPassage`: the annotated passage, a `string` feature.
    - `subjectStart`: the subject start, an `int` feature.
    - `subjectEnd`: the subject end, an `int` feature.
    - `subjectText`: the subject text, a `string` feature.
    - `subjectUri`: the subject URI, a `string` feature.
    - `objectStart`: the object start, an `int` feature.
    - `objectEnd`: the object end, an `int` feature.
    - `objectText`: the object text, a `string` feature.
    - `objectUri`: the object URI, a `string` feature.
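The `*Start`/`*End` values are character offsets into `documentText`, with the end offset exclusive. A quick illustration in plain Python, using the offsets from the example instance above:

```python
# documentText and offsets copied from the example instance above; spans are
# character offsets into the full document text, with an exclusive end.
document_text = (
    "Gennaro Basile\n\nGennaro Basile was an Italian painter, born in "
    "Naples but active in the German-speaking countries. He settled at "
    "Brünn, in Moravia, and lived about 1756. His best picture is the "
    "altar-piece in the chapel of the chateau at Seeberg, in Salzburg. "
    "Most of his works remained in Moravia."
)

subject = document_text[16:30]  # subjectStart:subjectEnd of the first fact
obj = document_text[63:69]      # objectStart:objectEnd of the first fact
print(subject, "|", obj)  # → Gennaro Basile | Naples
```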
#### knet_re
- `documentId`: the document id, a `string` feature.
- `passageId`: the passage id, a `string` feature.
- `passageText`: the passage text, a `string` feature.
- `factId`: the fact id, a `string` feature.
- `humanReadable`: the human-readable annotation, a `string` feature.
- `annotatedPassage`: the annotated passage, a `string` feature.
- `subjectStart`: the index of the start character of the relation subject mention, an `int` feature.
- `subjectEnd`: the index of the end character of the relation subject mention, exclusive, an `int` feature.
- `subjectText`: the text of the subject mention, a `string` feature.
- `subjectType`: the NER type of the subject mention, a classification label with the following mapping:
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `subjectUri`: the Wikidata URI of the subject mention, a `string` feature.
- `objectStart`: the index of the start character of the relation object mention, an `int` feature.
- `objectEnd`: the index of the end character of the relation object mention, exclusive, an `int` feature.
- `objectText`: the text of the object mention, a `string` feature.
- `objectType`: the NER type of the object mention, a classification label with the following mapping:
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `objectUri`: the Wikidata URI of the object mention, a `string` feature.
- `relation`: the relation label of this instance, a classification label with the following mapping:
```json
{"NO_RELATION": 0, "DATE_OF_BIRTH": 1, "DATE_OF_DEATH": 2, "PLACE_OF_RESIDENCE": 3, "PLACE_OF_BIRTH": 4, "NATIONALITY": 5, "EMPLOYEE_OR_MEMBER_OF": 6, "EDUCATED_AT": 7, "POLITICAL_AFFILIATION": 8, "CHILD_OF": 9, "SPOUSE": 10, "DATE_FOUNDED": 11, "HEADQUARTERS": 12, "SUBSIDIARY_OF": 13, "FOUNDED_BY": 14, "CEO": 15}
```
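Since the relation (and type) labels are stored as integer ids, mapping them back to names is a one-line dict inversion. A minimal sketch (the mapping is copied from above; the helper names are illustrative):

```python
# Label mapping as documented above (string name -> integer id).
RELATIONS = {
    "NO_RELATION": 0, "DATE_OF_BIRTH": 1, "DATE_OF_DEATH": 2,
    "PLACE_OF_RESIDENCE": 3, "PLACE_OF_BIRTH": 4, "NATIONALITY": 5,
    "EMPLOYEE_OR_MEMBER_OF": 6, "EDUCATED_AT": 7, "POLITICAL_AFFILIATION": 8,
    "CHILD_OF": 9, "SPOUSE": 10, "DATE_FOUNDED": 11, "HEADQUARTERS": 12,
    "SUBSIDIARY_OF": 13, "FOUNDED_BY": 14, "CEO": 15,
}

# Inverted mapping: integer id -> string name.
ID_TO_RELATION = {i: name for name, i in RELATIONS.items()}

# The knet_re example above has "relation": 13.
print(ID_TO_RELATION[13])  # → SUBSIDIARY_OF
```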
#### knet_tokenized
- `doc_id`: the document id, a `string` feature.
- `passage_id`: the passage id, a `string` feature.
- `fact_id`: the fact id, a `string` feature.
- `tokens`: the list of tokens of this passage, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, a classification label with the following mapping:
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `subj_uri`: the Wikidata URI of the subject mention, a `string` feature.
- `obj_start`: the index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, a classification label with the following mapping:
```json
{"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "DATE": 4}
```
- `obj_uri`: the Wikidata URI of the object mention, a `string` feature.
- `relation`: the relation label of this instance, a classification label with the following mapping:
```json
{"NO_RELATION": 0, "DATE_OF_BIRTH": 1, "DATE_OF_DEATH": 2, "PLACE_OF_RESIDENCE": 3, "PLACE_OF_BIRTH": 4, "NATIONALITY": 5, "EMPLOYEE_OR_MEMBER_OF": 6, "EDUCATED_AT": 7, "POLITICAL_AFFILIATION": 8, "CHILD_OF": 9, "SPOUSE": 10, "DATE_FOUNDED": 11, "HEADQUARTERS": 12, "SUBSIDIARY_OF": 13, "FOUNDED_BY": 14, "CEO": 15}
```
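In `knet_tokenized`, `subj_start`/`subj_end` and `obj_start`/`obj_end` index into `tokens` with an exclusive end, so a mention can be recovered by slicing. A small illustrative helper (the token list here is made up for demonstration):

```python
def mention_tokens(tokens, start, end):
    """Return the mention span; `end` is exclusive, as in the dataset."""
    return tokens[start:end]

# Hypothetical token list, only to demonstrate the slicing convention.
tokens = ["Tata", "Chemicals", "Limited", "is", "based", "in", "India", "."]
print(" ".join(mention_tokens(tokens, 0, 3)))  # → Tata Chemicals Limited
```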
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{mesquita-etal-2019-knowledgenet,
title = "{K}nowledge{N}et: A Benchmark Dataset for Knowledge Base Population",
author = "Mesquita, Filipe and
Cannaviccio, Matteo and
Schmidek, Jordan and
Mirza, Paramita and
Barbosa, Denilson",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1069",
doi = "10.18653/v1/D19-1069",
pages = "749--758",}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MultiLegalPileWikipediaFiltered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles."
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MultiLegalPileWikipediaFiltered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
MultiLegalPileWikipediaFiltered is a filtered version of the large-scale multilingual MultiLegalPile corpus, combined with Wikipedia articles, and is suited for pretraining language models.
It spans 24 languages and four legal text types (caselaw, contracts, legislation, other), plus Wikipedia articles.
### Supported Tasks and Leaderboards
The dataset supports the fill-mask task (masked language modeling).
### Languages
The following languages are supported:
bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
It is structured in the following format: `{language}_{text_type}_{split}.{shard}.jsonl.xz`
text_type is one of the following:
- caselaw
- contracts
- legislation
- other
- wikipedia
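The hosted shards in the data size listing below follow the pattern `{language}_{text_type}_{split}.{shard}.jsonl.xz`. A small parser for such names (an illustrative sketch, not part of the dataset loader):

```python
import re

# Shard names as they appear in the data size listing,
# e.g. "de_caselaw_train.0.jsonl.xz".
SHARD_RE = re.compile(
    r"(?P<language>[a-z]{2})_(?P<text_type>[a-z]+)_(?P<split>train|validation)"
    r"\.(?P<shard>\d+)\.jsonl\.xz"
)

def parse_shard_name(filename: str) -> dict:
    """Split a shard file name into language, text type, split and shard index."""
    match = SHARD_RE.fullmatch(filename)
    if match is None:
        raise ValueError(f"unexpected shard name: {filename}")
    parts = match.groupdict()
    parts["shard"] = int(parts["shard"])
    return parts

print(parse_shard_name("de_caselaw_train.0.jsonl.xz"))
```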
Use the dataset like this:
```python
from datasets import load_dataset
config = 'en_contracts' # {language}_{text_type}
dataset = load_dataset('joelniklaus/MultiLegalPileWikipediaFiltered', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
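Accordingly, the configuration names form the cross product of the languages and text types listed above (a quick sketch; note that not every combination is actually populated):

```python
# Languages and text types as listed in this card.
languages = [
    "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr",
    "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv",
]
text_types = ["caselaw", "contracts", "legislation", "other", "wikipedia"]

# Candidate config names of the form {language}_{text_type}.
configs = [f"{lang}_{text_type}" for lang in languages for text_type in text_types]

print(len(configs))  # 24 languages x 5 text types = 120 combinations
```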
### Data Instances
The file format is jsonl.xz, and `train` and `validation` splits are available.
Since some configurations are very small or non-existent, they might lack a train split or be absent entirely.
The complete dataset consists of five large subsets:
- [Native Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile)
- [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources)
- [MC4 Legal](https://huggingface.co/datasets/joelito/mc4_legal)
- [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law)
- [EU Wikipedias](https://huggingface.co/datasets/joelito/EU_Wikipedias)
| Language | Source | Size (MB) | Words | Documents | Words/Document |
|:-----------|:------------|-----------------:|------------:|------------:|-----------------:|
| all        | all         |          1297610 | 81214262514 |    57305071 |             1417 |
| all | caselaw | 695837 | 44372248995 | 30085886 | 1474 |
| all | contracts | 122599 | 7964531030 | 1785686 | 4460 |
| all | legislation | 189135 | 10879386581 | 3601518 | 3020 |
| all | other | 126570 | 8780080882 | 3358073 | 2614 |
| all | wikipedia | 163468 | 9218015026 | 18473908 | 498 |
| bg | all | 14028 | 535256525 | 355650 | 1505 |
| bg | caselaw | 2897 | 109634090 | 52648 | 2082 |
| bg | contracts | 748 | 31292877 | 7107 | 4403 |
| bg | legislation | 8015 | 308946116 | 82777 | 3732 |
| bg | other | 0 | 0 | 0 | 0 |
| bg | wikipedia | 2368 | 85383442 | 213118 | 400 |
| cs | all | 21818 | 1123000335 | 839914 | 1337 |
| cs | caselaw | 11151 | 574336489 | 296652 | 1936 |
| cs | contracts | 492 | 28106428 | 7383 | 3806 |
| cs | legislation | 6288 | 333850509 | 88731 | 3762 |
| cs | other | 0 | 0 | 0 | 0 |
| cs | wikipedia | 3887 | 186706909 | 447148 | 417 |
| da | all | 16024 | 970954498 | 576256 | 1684 |
| da | caselaw | 3469 | 210730560 | 89702 | 2349 |
| da | contracts | 559 | 35592407 | 10827 | 3287 |
| da | legislation | 10736 | 653153146 | 265868 | 2456 |
| da | other | 0 | 0 | 0 | 0 |
| da | wikipedia | 1259 | 71478385 | 209859 | 340 |
| de | all | 63887 | 3512253170 | 3216030 | 1092 |
| de | caselaw | 31527 | 1785439383 | 596800 | 2991 |
| de | contracts | 614 | 36786772 | 11041 | 3331 |
| de | legislation | 8934 | 512840663 | 276034 | 1857 |
| de | other | 0 | 0 | 0 | 0 |
| de | wikipedia | 22812 | 1177186352 | 2332155 | 504 |
| el | all | 23167 | 800722723 | 457553 | 1750 |
| el | caselaw | 6007 | 203770918 | 85496 | 2383 |
| el | contracts | 1050 | 38963772 | 10266 | 3795 |
| el | legislation | 12906 | 455240770 | 171356 | 2656 |
| el | other | 0 | 0 | 0 | 0 |
| el | wikipedia | 3204 | 102747263 | 190435 | 539 |
| en | all | 712173 | 47279626514 | 21112650 | 2239 |
| en | caselaw | 380976 | 25561971376 | 10240724 | 2496 |
| en | contracts | 71360 | 7260323438 | 1594942 | 4552 |
| en | legislation | 36587 | 2537696894 | 657805 | 3857 |
| en | other | 126570 | 8780080882 | 3358073 | 2614 |
| en | wikipedia | 51053 | 3139553924 | 5261106 | 596 |
| es | all | 23657 | 1515689548 | 1567527 | 966 |
| es | caselaw | 3299 | 220506573 | 83872 | 2629 |
| es | contracts | 594 | 41840328 | 10048 | 4164 |
| es | legislation | 6837 | 462661276 | 149368 | 3097 |
| es | other | 0 | 0 | 0 | 0 |
| es | wikipedia | 12928 | 790681371 | 1324239 | 597 |
| et | all | 7446 | 372896353 | 261641 | 1425 |
| et | caselaw | 1835 | 92951578 | 58736 | 1582 |
| et | contracts | 433 | 24017402 | 7371 | 3258 |
| et | legislation | 4200 | 210952455 | 63922 | 3300 |
| et | other | 0 | 0 | 0 | 0 |
| et | wikipedia | 978 | 44974918 | 131612 | 341 |
| fi | all | 11501 | 513990484 | 592986 | 866 |
| fi | caselaw | 2854 | 126368889 | 77882 | 1622 |
| fi | contracts | 504 | 25386705 | 8894 | 2854 |
| fi | legislation | 5532 | 252344531 | 103907 | 2428 |
| fi | other | 0 | 0 | 0 | 0 |
| fi | wikipedia | 2610 | 109890359 | 402303 | 273 |
| fr | all | 47186 | 2936056985 | 2734954 | 1073 |
| fr | caselaw | 18313 | 1170335690 | 435569 | 2686 |
| fr | contracts | 633 | 41983091 | 11071 | 3792 |
| fr | legislation | 9297 | 600170792 | 243313 | 2466 |
| fr | other | 0 | 0 | 0 | 0 |
| fr | wikipedia | 18942 | 1123567412 | 2045001 | 549 |
| ga | all | 1209 | 72041312 | 30064 | 2396 |
| ga | caselaw | 11 | 676795 | 835 | 810 |
| ga | contracts | 29 | 1820765 | 365 | 4988 |
| ga | legislation | 1048 | 62513018 | 5983 | 10448 |
| ga | other | 0 | 0 | 0 | 0 |
| ga | wikipedia | 122 | 7030734 | 22881 | 307 |
| hr | all | 5377 | 315295665 | 211151 | 1493 |
| hr | caselaw | 1026 | 62358456 | 31322 | 1990 |
| hr | contracts | 395 | 24957774 | 6552 | 3809 |
| hr | legislation | 2906 | 171415656 | 36365 | 4713 |
| hr | other | 0 | 0 | 0 | 0 |
| hr | wikipedia | 1050 | 56563779 | 136912 | 413 |
| hu | all | 12351 | 564082537 | 495822 | 1137 |
| hu | caselaw | 2376 | 110034426 | 59074 | 1862 |
| hu | contracts | 534 | 27258352 | 7385 | 3691 |
| hu | legislation | 5744 | 264572303 | 86862 | 3045 |
| hu | other | 0 | 0 | 0 | 0 |
| hu | wikipedia | 3697 | 162217456 | 342501 | 473 |
| it | all | 26744 | 1658638775 | 1615301 | 1026 |
| it | caselaw | 6483 | 406520336 | 156630 | 2595 |
| it | contracts | 597 | 40131223 | 10985 | 3653 |
| it | legislation | 8332 | 542579039 | 227968 | 2380 |
| it | other | 0 | 0 | 0 | 0 |
| it | wikipedia | 11332 | 669408177 | 1219718 | 548 |
| lt | all | 7772 | 399310081 | 264537 | 1509 |
| lt | caselaw | 1992 | 101672069 | 59485 | 1709 |
| lt | contracts | 475 | 27009922 | 7473 | 3614 |
| lt | legislation | 4550 | 235543873 | 64106 | 3674 |
| lt | other | 0 | 0 | 0 | 0 |
| lt | wikipedia | 755 | 35084217 | 133473 | 262 |
| lv | all | 7701 | 386833125 | 211244 | 1831 |
| lv | caselaw | 2082 | 103311512 | 58992 | 1751 |
| lv | contracts | 481 | 26692972 | 7429 | 3593 |
| lv | legislation | 4621 | 233088284 | 64087 | 3637 |
| lv | other | 0 | 0 | 0 | 0 |
| lv | wikipedia | 518 | 23740357 | 80736 | 294 |
| mt | all | 7180 | 370558634 | 122056 | 3035 |
| mt | caselaw | 2016 | 100309542 | 52942 | 1894 |
| mt | contracts | 486 | 27701852 | 6937 | 3993 |
| mt | legislation | 4620 | 239708644 | 57979 | 4134 |
| mt | other | 0 | 0 | 0 | 0 |
| mt | wikipedia | 58 | 2838596 | 4198 | 676 |
| nl | all | 17674 | 1112460059 | 1200534 | 926 |
| nl | caselaw | 3227 | 206147113 | 87170 | 2364 |
| nl | contracts | 604 | 40245662 | 11027 | 3649 |
| nl | legislation | 8484 | 550788527 | 232204 | 2372 |
| nl | other | 0 | 0 | 0 | 0 |
| nl | wikipedia | 5360 | 315278757 | 870133 | 362 |
| pl | all | 14762 | 773692198 | 1160849 | 666 |
| pl | caselaw | 2141 | 115695709 | 59649 | 1939 |
| pl | contracts | 489 | 28543526 | 7478 | 3817 |
| pl | legislation | 5459 | 299334705 | 89264 | 3353 |
| pl | other | 0 | 0 | 0 | 0 |
| pl | wikipedia | 6672 | 330118258 | 1004458 | 328 |
| pt | all | 210656 | 13466463586 | 18173061 | 741 |
| pt | caselaw | 196919 | 12611760973 | 17251236 | 731 |
| pt | contracts | 571 | 37997495 | 9897 | 3839 |
| pt | legislation | 6853 | 439066783 | 148176 | 2963 |
| pt | other | 0 | 0 | 0 | 0 |
| pt | wikipedia | 6313 | 377638335 | 763752 | 494 |
| ro | all | 14794 | 808799454 | 481763 | 1678 |
| ro | caselaw | 1960 | 114665535 | 53092 | 2159 |
| ro | contracts | 495 | 31496978 | 7202 | 4373 |
| ro | legislation | 10464 | 559092153 | 215694 | 2592 |
| ro | other | 0 | 0 | 0 | 0 |
| ro | wikipedia | 1874 | 103544788 | 205775 | 503 |
| sk | all | 8700 | 463447112 | 262638 | 1764 |
| sk | caselaw | 2072 | 109996398 | 59383 | 1852 |
| sk | contracts | 489 | 28298113 | 7470 | 3788 |
| sk | legislation | 5208 | 280182047 | 76760 | 3650 |
| sk | other | 0 | 0 | 0 | 0 |
| sk | wikipedia | 931 | 44970554 | 119025 | 377 |
| sl | all | 9345 | 561775614 | 277497 | 2024 |
| sl | caselaw | 1816 | 111097741 | 59193 | 1876 |
| sl | contracts | 432 | 28238938 | 7475 | 3777 |
| sl | legislation | 6057 | 365513763 | 88651 | 4123 |
| sl | other | 0 | 0 | 0 | 0 |
| sl | wikipedia | 1041 | 56925172 | 122178 | 465 |
| sv | all | 12457 | 700417227 | 1083393 | 646 |
| sv | caselaw | 2806 | 161956844 | 78802 | 2055 |
| sv | contracts | 491 | 29844238 | 9061 | 3293 |
| sv | legislation | 5456 | 308130634 | 104338 | 2953 |
| sv | other | 0 | 0 | 0 | 0 |
| sv | wikipedia | 3704 | 200485511 | 891192 | 224 |
### Data Fields
[More Information Needed]
### Data Splits
There are two splits: train and validation. The validation split contains 1000 examples and the training split contains the rest of the data.
#### Data Size
```bash
$ xz --list data/*.xz
Strms Blocks Compressed Uncompressed Ratio Check Filename
1 1 167.6 MiB 3’276.3 MiB 0.051 CRC64 data/bg_caselaw_train.0.jsonl.xz
1 1 502.3 KiB 9’398.0 KiB 0.053 CRC64 data/bg_caselaw_validation.0.jsonl.xz
1 1 33.4 MiB 700.3 MiB 0.048 CRC64 data/bg_contracts_train.0.jsonl.xz
1 1 5’989.6 KiB 123.0 MiB 0.048 CRC64 data/bg_contracts_validation.0.jsonl.xz
1 1 418.5 MiB 8’931.0 MiB 0.047 CRC64 data/bg_legislation_train.0.jsonl.xz
1 1 5’029.4 KiB 103.1 MiB 0.048 CRC64 data/bg_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/bg_other_validation.0.jsonl.xz
1 1 192.2 MiB 2’488.6 MiB 0.077 CRC64 data/bg_wikipedia_train.0.jsonl.xz
1 1 1’757.8 KiB 22.9 MiB 0.075 CRC64 data/bg_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 4’126.1 MiB 0.116 CRC64 data/cs_caselaw_train.0.jsonl.xz
1 1 259.8 MiB 2’556.9 MiB 0.102 CRC64 data/cs_caselaw_train.1.jsonl.xz
1 1 420.1 KiB 3’370.3 KiB 0.125 CRC64 data/cs_caselaw_validation.0.jsonl.xz
1 1 24.9 MiB 237.9 MiB 0.105 CRC64 data/cs_contracts_train.0.jsonl.xz
1 1 4’412.1 KiB 41.7 MiB 0.103 CRC64 data/cs_contracts_validation.0.jsonl.xz
1 1 361.2 MiB 3’488.9 MiB 0.104 CRC64 data/cs_legislation_train.0.jsonl.xz
1 1 10.3 MiB 91.6 MiB 0.112 CRC64 data/cs_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/cs_other_validation.0.jsonl.xz
1 1 390.6 MiB 1’939.4 MiB 0.201 CRC64 data/cs_wikipedia_train.0.jsonl.xz
1 1 2’604.7 KiB 12.2 MiB 0.209 CRC64 data/cs_wikipedia_validation.0.jsonl.xz
1 1 252.5 MiB 1’529.7 MiB 0.165 CRC64 data/da_caselaw_train.0.jsonl.xz
1 1 555.9 KiB 3’227.1 KiB 0.172 CRC64 data/da_caselaw_validation.0.jsonl.xz
1 1 30.1 MiB 233.9 MiB 0.129 CRC64 data/da_contracts_train.0.jsonl.xz
1 1 2’897.6 KiB 23.6 MiB 0.120 CRC64 data/da_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’325.8 MiB 0.143 CRC64 data/da_legislation_train.0.jsonl.xz
1 1 237.3 MiB 1’444.5 MiB 0.164 CRC64 data/da_legislation_train.1.jsonl.xz
1 1 3’232.5 KiB 60.6 MiB 0.052 CRC64 data/da_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/da_other_validation.0.jsonl.xz
1 1 128.8 MiB 512.1 MiB 0.252 CRC64 data/da_wikipedia_train.0.jsonl.xz
1 1 1’514.1 KiB 5’476.3 KiB 0.276 CRC64 data/da_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 2’803.8 MiB 0.170 CRC64 data/de_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 2’821.4 MiB 0.169 CRC64 data/de_caselaw_train.1.jsonl.xz
1 1 476.9 MiB 2’720.2 MiB 0.175 CRC64 data/de_caselaw_train.2.jsonl.xz
1 1 476.9 MiB 2’704.1 MiB 0.176 CRC64 data/de_caselaw_train.3.jsonl.xz
1 1 460.5 MiB 2’504.5 MiB 0.184 CRC64 data/de_caselaw_train.4.jsonl.xz
1 1 594.0 KiB 3’416.4 KiB 0.174 CRC64 data/de_caselaw_validation.0.jsonl.xz
1 1 32.0 MiB 255.8 MiB 0.125 CRC64 data/de_contracts_train.0.jsonl.xz
1 1 3’037.7 KiB 24.7 MiB 0.120 CRC64 data/de_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’386.0 MiB 0.141 CRC64 data/de_legislation_train.0.jsonl.xz
1 1 93.3 MiB 592.3 MiB 0.158 CRC64 data/de_legislation_train.1.jsonl.xz
1 1 3’265.9 KiB 20.5 MiB 0.156 CRC64 data/de_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/de_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’883.7 MiB 0.253 CRC64 data/de_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 1’891.6 MiB 0.252 CRC64 data/de_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 1’893.7 MiB 0.252 CRC64 data/de_wikipedia_train.2.jsonl.xz
1 1 476.9 MiB 1’894.1 MiB 0.252 CRC64 data/de_wikipedia_train.3.jsonl.xz
1 1 407.9 MiB 1’622.0 MiB 0.251 CRC64 data/de_wikipedia_train.4.jsonl.xz
1 1 1’172.5 KiB 4’210.2 KiB 0.278 CRC64 data/de_wikipedia_validation.0.jsonl.xz
1 1 344.7 MiB 6’908.3 MiB 0.050 CRC64 data/el_caselaw_train.0.jsonl.xz
1 1 870.4 KiB 14.3 MiB 0.060 CRC64 data/el_caselaw_validation.0.jsonl.xz
1 1 49.7 MiB 1’083.8 MiB 0.046 CRC64 data/el_contracts_train.0.jsonl.xz
1 1 4’701.3 KiB 101.6 MiB 0.045 CRC64 data/el_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 10.2 GiB 0.046 CRC64 data/el_legislation_train.0.jsonl.xz
1 1 203.0 MiB 3’994.0 MiB 0.051 CRC64 data/el_legislation_train.1.jsonl.xz
1 1 9’744.3 KiB 186.6 MiB 0.051 CRC64 data/el_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/el_other_validation.0.jsonl.xz
1 1 246.4 MiB 3’465.7 MiB 0.071 CRC64 data/el_wikipedia_train.0.jsonl.xz
1 1 2’591.7 KiB 35.6 MiB 0.071 CRC64 data/el_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 2’188.6 MiB 0.218 CRC64 data/en_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 2’416.1 MiB 0.197 CRC64 data/en_caselaw_train.10.jsonl.xz
1 1 477.2 MiB 2’688.1 MiB 0.178 CRC64 data/en_caselaw_train.11.jsonl.xz
1 1 476.9 MiB 2’865.9 MiB 0.166 CRC64 data/en_caselaw_train.12.jsonl.xz
1 1 476.9 MiB 2’494.1 MiB 0.191 CRC64 data/en_caselaw_train.13.jsonl.xz
1 1 476.9 MiB 2’126.6 MiB 0.224 CRC64 data/en_caselaw_train.14.jsonl.xz
1 1 476.9 MiB 2’440.9 MiB 0.195 CRC64 data/en_caselaw_train.15.jsonl.xz
1 1 476.9 MiB 3’822.2 MiB 0.125 CRC64 data/en_caselaw_train.16.jsonl.xz
1 1 476.9 MiB 3’831.4 MiB 0.124 CRC64 data/en_caselaw_train.17.jsonl.xz
1 1 476.9 MiB 3’812.2 MiB 0.125 CRC64 data/en_caselaw_train.18.jsonl.xz
1 1 476.9 MiB 2’233.5 MiB 0.214 CRC64 data/en_caselaw_train.19.jsonl.xz
1 1 476.9 MiB 2’195.9 MiB 0.217 CRC64 data/en_caselaw_train.1.jsonl.xz
1 1 476.9 MiB 2’185.8 MiB 0.218 CRC64 data/en_caselaw_train.20.jsonl.xz
1 1 476.9 MiB 2’634.9 MiB 0.181 CRC64 data/en_caselaw_train.21.jsonl.xz
1 1 476.9 MiB 2’670.8 MiB 0.179 CRC64 data/en_caselaw_train.22.jsonl.xz
1 1 476.9 MiB 2’762.0 MiB 0.173 CRC64 data/en_caselaw_train.23.jsonl.xz
1 1 476.9 MiB 2’153.6 MiB 0.221 CRC64 data/en_caselaw_train.24.jsonl.xz
1 1 476.9 MiB 2’152.0 MiB 0.222 CRC64 data/en_caselaw_train.25.jsonl.xz
1 1 476.9 MiB 2’205.0 MiB 0.216 CRC64 data/en_caselaw_train.26.jsonl.xz
1 1 476.9 MiB 2’141.0 MiB 0.223 CRC64 data/en_caselaw_train.27.jsonl.xz
1 1 476.9 MiB 2’145.1 MiB 0.222 CRC64 data/en_caselaw_train.28.jsonl.xz
1 1 476.9 MiB 2’137.9 MiB 0.223 CRC64 data/en_caselaw_train.29.jsonl.xz
1 1 476.9 MiB 2’189.0 MiB 0.218 CRC64 data/en_caselaw_train.2.jsonl.xz
1 1 476.9 MiB 2’150.9 MiB 0.222 CRC64 data/en_caselaw_train.30.jsonl.xz
1 1 476.9 MiB 2’142.7 MiB 0.223 CRC64 data/en_caselaw_train.31.jsonl.xz
1 1 476.9 MiB 2’203.4 MiB 0.216 CRC64 data/en_caselaw_train.32.jsonl.xz
1 1 476.9 MiB 2’205.4 MiB 0.216 CRC64 data/en_caselaw_train.33.jsonl.xz
1 1 476.9 MiB 2’206.0 MiB 0.216 CRC64 data/en_caselaw_train.34.jsonl.xz
1 1 476.9 MiB 2’164.9 MiB 0.220 CRC64 data/en_caselaw_train.35.jsonl.xz
1 1 476.9 MiB 2’810.3 MiB 0.170 CRC64 data/en_caselaw_train.36.jsonl.xz
1 1 476.9 MiB 2’854.1 MiB 0.167 CRC64 data/en_caselaw_train.37.jsonl.xz
1 1 476.9 MiB 3’109.2 MiB 0.153 CRC64 data/en_caselaw_train.38.jsonl.xz
1 1 476.9 MiB 3’323.6 MiB 0.143 CRC64 data/en_caselaw_train.39.jsonl.xz
1 1 476.9 MiB 2’155.3 MiB 0.221 CRC64 data/en_caselaw_train.3.jsonl.xz
1 1 476.9 MiB 2’881.5 MiB 0.165 CRC64 data/en_caselaw_train.40.jsonl.xz
1 1 476.9 MiB 2’157.1 MiB 0.221 CRC64 data/en_caselaw_train.41.jsonl.xz
1 1 477.0 MiB 2’530.2 MiB 0.189 CRC64 data/en_caselaw_train.42.jsonl.xz
1 1 476.8 MiB 2’540.1 MiB 0.188 CRC64 data/en_caselaw_train.43.jsonl.xz
1 1 476.9 MiB 2’182.2 MiB 0.219 CRC64 data/en_caselaw_train.44.jsonl.xz
1 1 476.9 MiB 2’163.2 MiB 0.220 CRC64 data/en_caselaw_train.45.jsonl.xz
1 1 476.9 MiB 2’213.3 MiB 0.215 CRC64 data/en_caselaw_train.46.jsonl.xz
1 1 476.9 MiB 2’241.5 MiB 0.213 CRC64 data/en_caselaw_train.47.jsonl.xz
1 1 476.9 MiB 2’203.6 MiB 0.216 CRC64 data/en_caselaw_train.48.jsonl.xz
1 1 476.9 MiB 2’480.6 MiB 0.192 CRC64 data/en_caselaw_train.49.jsonl.xz
1 1 476.9 MiB 2’176.7 MiB 0.219 CRC64 data/en_caselaw_train.4.jsonl.xz
1 1 476.9 MiB 2’214.7 MiB 0.215 CRC64 data/en_caselaw_train.50.jsonl.xz
1 1 476.9 MiB 2’128.0 MiB 0.224 CRC64 data/en_caselaw_train.51.jsonl.xz
1 1 476.9 MiB 2’151.0 MiB 0.222 CRC64 data/en_caselaw_train.52.jsonl.xz
1 1 476.9 MiB 2’173.6 MiB 0.219 CRC64 data/en_caselaw_train.53.jsonl.xz
1 1 476.9 MiB 2’773.8 MiB 0.172 CRC64 data/en_caselaw_train.54.jsonl.xz
1 1 476.9 MiB 2’806.2 MiB 0.170 CRC64 data/en_caselaw_train.55.jsonl.xz
1 1 476.9 MiB 3’920.9 MiB 0.122 CRC64 data/en_caselaw_train.56.jsonl.xz
1 1 476.9 MiB 2’517.2 MiB 0.189 CRC64 data/en_caselaw_train.57.jsonl.xz
1 1 477.5 MiB 2’844.0 MiB 0.168 CRC64 data/en_caselaw_train.58.jsonl.xz
1 1 476.9 MiB 2’810.7 MiB 0.170 CRC64 data/en_caselaw_train.59.jsonl.xz
1 1 476.9 MiB 2’160.4 MiB 0.221 CRC64 data/en_caselaw_train.5.jsonl.xz
1 1 476.9 MiB 3’033.0 MiB 0.157 CRC64 data/en_caselaw_train.60.jsonl.xz
1 1 476.9 MiB 2’255.1 MiB 0.211 CRC64 data/en_caselaw_train.61.jsonl.xz
1 1 476.9 MiB 2’110.1 MiB 0.226 CRC64 data/en_caselaw_train.62.jsonl.xz
1 1 476.9 MiB 2’130.3 MiB 0.224 CRC64 data/en_caselaw_train.63.jsonl.xz
1 1 476.9 MiB 2’133.2 MiB 0.224 CRC64 data/en_caselaw_train.64.jsonl.xz
1 1 44.8 MiB 199.6 MiB 0.225 CRC64 data/en_caselaw_train.65.jsonl.xz
1 1 476.9 MiB 2’153.3 MiB 0.221 CRC64 data/en_caselaw_train.6.jsonl.xz
1 1 476.9 MiB 2’130.8 MiB 0.224 CRC64 data/en_caselaw_train.7.jsonl.xz
1 1 476.9 MiB 2’152.2 MiB 0.222 CRC64 data/en_caselaw_train.8.jsonl.xz
1 1 476.9 MiB 2’173.3 MiB 0.219 CRC64 data/en_caselaw_train.9.jsonl.xz
1 1 2’977.4 KiB 12.9 MiB 0.226 CRC64 data/en_caselaw_validation.0.jsonl.xz
1 1 476.9 MiB 3’016.6 MiB 0.158 CRC64 data/en_contracts_train.0.jsonl.xz
1 1 476.9 MiB 3’015.3 MiB 0.158 CRC64 data/en_contracts_train.10.jsonl.xz
1 1 476.9 MiB 3’012.5 MiB 0.158 CRC64 data/en_contracts_train.11.jsonl.xz
1 1 477.0 MiB 3’002.5 MiB 0.159 CRC64 data/en_contracts_train.12.jsonl.xz
1 1 476.9 MiB 2’962.4 MiB 0.161 CRC64 data/en_contracts_train.13.jsonl.xz
1 1 476.9 MiB 3’019.4 MiB 0.158 CRC64 data/en_contracts_train.14.jsonl.xz
1 1 124.1 MiB 781.2 MiB 0.159 CRC64 data/en_contracts_train.15.jsonl.xz
1 1 476.9 MiB 2’994.0 MiB 0.159 CRC64 data/en_contracts_train.1.jsonl.xz
1 1 476.8 MiB 3’084.9 MiB 0.155 CRC64 data/en_contracts_train.2.jsonl.xz
1 1 476.9 MiB 3’123.4 MiB 0.153 CRC64 data/en_contracts_train.3.jsonl.xz
1 1 476.9 MiB 3’120.7 MiB 0.153 CRC64 data/en_contracts_train.4.jsonl.xz
1 1 477.0 MiB 3’094.2 MiB 0.154 CRC64 data/en_contracts_train.5.jsonl.xz
1 1 476.9 MiB 3’010.9 MiB 0.158 CRC64 data/en_contracts_train.6.jsonl.xz
1 1 476.9 MiB 3’015.0 MiB 0.158 CRC64 data/en_contracts_train.7.jsonl.xz
1 1 476.9 MiB 2’995.7 MiB 0.159 CRC64 data/en_contracts_train.8.jsonl.xz
1 1 476.9 MiB 3’017.9 MiB 0.158 CRC64 data/en_contracts_train.9.jsonl.xz
1 1 9’980.4 KiB 63.7 MiB 0.153 CRC64 data/en_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’040.8 MiB 0.157 CRC64 data/en_legislation_train.0.jsonl.xz
1 1 476.9 MiB 3’047.3 MiB 0.156 CRC64 data/en_legislation_train.1.jsonl.xz
1 1 476.9 MiB 3’351.5 MiB 0.142 CRC64 data/en_legislation_train.2.jsonl.xz
1 1 478.7 MiB 3’408.4 MiB 0.140 CRC64 data/en_legislation_train.3.jsonl.xz
1 1 372.5 MiB 2’620.0 MiB 0.142 CRC64 data/en_legislation_train.4.jsonl.xz
1 1 2’733.5 KiB 13.8 MiB 0.193 CRC64 data/en_legislation_validation.0.jsonl.xz
1 1 476.9 MiB 4’782.4 MiB 0.100 CRC64 data/en_other_train.0.jsonl.xz
1 1 476.9 MiB 4’347.1 MiB 0.110 CRC64 data/en_other_train.10.jsonl.xz
1 1 477.1 MiB 3’044.6 MiB 0.157 CRC64 data/en_other_train.11.jsonl.xz
1 1 477.1 MiB 2’147.8 MiB 0.222 CRC64 data/en_other_train.12.jsonl.xz
1 1 477.0 MiB 2’182.8 MiB 0.219 CRC64 data/en_other_train.13.jsonl.xz
1 1 33.3 MiB 151.7 MiB 0.219 CRC64 data/en_other_train.14.jsonl.xz
1 1 476.9 MiB 4’883.8 MiB 0.098 CRC64 data/en_other_train.1.jsonl.xz
1 1 476.9 MiB 4’646.7 MiB 0.103 CRC64 data/en_other_train.2.jsonl.xz
1 1 476.9 MiB 4’542.8 MiB 0.105 CRC64 data/en_other_train.3.jsonl.xz
1 1 476.9 MiB 4’574.8 MiB 0.104 CRC64 data/en_other_train.4.jsonl.xz
1 1 476.9 MiB 4’622.5 MiB 0.103 CRC64 data/en_other_train.5.jsonl.xz
1 1 476.9 MiB 4’520.7 MiB 0.105 CRC64 data/en_other_train.6.jsonl.xz
1 1 476.9 MiB 2’942.4 MiB 0.162 CRC64 data/en_other_train.7.jsonl.xz
1 1 476.9 MiB 2’544.0 MiB 0.187 CRC64 data/en_other_train.8.jsonl.xz
1 1 476.9 MiB 4’515.4 MiB 0.106 CRC64 data/en_other_train.9.jsonl.xz
1 1 2’165.8 KiB 19.6 MiB 0.108 CRC64 data/en_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’803.2 MiB 0.264 CRC64 data/en_wikipedia_train.0.jsonl.xz
1 1 441.1 MiB 1’670.5 MiB 0.264 CRC64 data/en_wikipedia_train.10.jsonl.xz
1 1 476.9 MiB 1’803.6 MiB 0.264 CRC64 data/en_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 1’802.5 MiB 0.265 CRC64 data/en_wikipedia_train.2.jsonl.xz
1 1 476.9 MiB 1’805.0 MiB 0.264 CRC64 data/en_wikipedia_train.3.jsonl.xz
1 1 476.9 MiB 1’804.3 MiB 0.264 CRC64 data/en_wikipedia_train.4.jsonl.xz
1 1 476.9 MiB 1’804.0 MiB 0.264 CRC64 data/en_wikipedia_train.5.jsonl.xz
1 1 476.9 MiB 1’804.1 MiB 0.264 CRC64 data/en_wikipedia_train.6.jsonl.xz
1 1 476.9 MiB 1’803.6 MiB 0.264 CRC64 data/en_wikipedia_train.7.jsonl.xz
1 1 476.9 MiB 1’805.2 MiB 0.264 CRC64 data/en_wikipedia_train.8.jsonl.xz
1 1 476.9 MiB 1’804.3 MiB 0.264 CRC64 data/en_wikipedia_train.9.jsonl.xz
1 1 1’004.9 KiB 3’492.7 KiB 0.288 CRC64 data/en_wikipedia_validation.0.jsonl.xz
1 1 216.4 MiB 1’458.0 MiB 0.148 CRC64 data/es_caselaw_train.0.jsonl.xz
1 1 586.4 KiB 3’537.8 KiB 0.166 CRC64 data/es_caselaw_validation.0.jsonl.xz
1 1 29.0 MiB 244.0 MiB 0.119 CRC64 data/es_contracts_train.0.jsonl.xz
1 1 3’826.2 KiB 31.2 MiB 0.120 CRC64 data/es_contracts_validation.0.jsonl.xz
1 1 401.8 MiB 3’054.9 MiB 0.132 CRC64 data/es_legislation_train.0.jsonl.xz
1 1 8’217.6 KiB 56.6 MiB 0.142 CRC64 data/es_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/es_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’017.9 MiB 0.236 CRC64 data/es_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 2’025.0 MiB 0.235 CRC64 data/es_wikipedia_train.1.jsonl.xz
1 1 308.8 MiB 1’305.6 MiB 0.237 CRC64 data/es_wikipedia_train.2.jsonl.xz
1 1 1’339.7 KiB 5’265.5 KiB 0.254 CRC64 data/es_wikipedia_validation.0.jsonl.xz
1 1 132.5 MiB 831.3 MiB 0.159 CRC64 data/et_caselaw_train.0.jsonl.xz
1 1 387.2 KiB 2’310.9 KiB 0.168 CRC64 data/et_caselaw_validation.0.jsonl.xz
1 1 22.9 MiB 179.6 MiB 0.128 CRC64 data/et_contracts_train.0.jsonl.xz
1 1 3’164.3 KiB 26.8 MiB 0.115 CRC64 data/et_contracts_validation.0.jsonl.xz
1 1 255.2 MiB 1’908.2 MiB 0.134 CRC64 data/et_legislation_train.0.jsonl.xz
1 1 9’239.2 KiB 64.7 MiB 0.140 CRC64 data/et_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/et_other_validation.0.jsonl.xz
1 1 100.5 MiB 408.8 MiB 0.246 CRC64 data/et_wikipedia_train.0.jsonl.xz
1 1 1’352.2 KiB 4’921.0 KiB 0.275 CRC64 data/et_wikipedia_validation.0.jsonl.xz
1 1 194.5 MiB 1’359.0 MiB 0.143 CRC64 data/fi_caselaw_train.0.jsonl.xz
1 1 604.1 KiB 3’656.1 KiB 0.165 CRC64 data/fi_caselaw_validation.0.jsonl.xz
1 1 26.0 MiB 219.8 MiB 0.118 CRC64 data/fi_contracts_train.0.jsonl.xz
1 1 2’971.2 KiB 27.4 MiB 0.106 CRC64 data/fi_contracts_validation.0.jsonl.xz
1 1 334.7 MiB 2’599.3 MiB 0.129 CRC64 data/fi_legislation_train.0.jsonl.xz
1 1 7’476.3 KiB 53.9 MiB 0.136 CRC64 data/fi_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/fi_other_validation.0.jsonl.xz
1 1 255.6 MiB 1’118.0 MiB 0.229 CRC64 data/fi_wikipedia_train.0.jsonl.xz
1 1 2’464.2 KiB 9.9 MiB 0.242 CRC64 data/fi_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 3’128.1 MiB 0.152 CRC64 data/fr_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 3’104.4 MiB 0.154 CRC64 data/fr_caselaw_train.1.jsonl.xz
1 1 350.2 MiB 2’194.9 MiB 0.160 CRC64 data/fr_caselaw_train.2.jsonl.xz
1 1 603.0 KiB 3’778.7 KiB 0.160 CRC64 data/fr_caselaw_validation.0.jsonl.xz
1 1 31.9 MiB 278.3 MiB 0.115 CRC64 data/fr_contracts_train.0.jsonl.xz
1 1 3’034.4 KiB 26.6 MiB 0.111 CRC64 data/fr_contracts_validation.0.jsonl.xz
1 1 477.0 MiB 3’721.8 MiB 0.128 CRC64 data/fr_legislation_train.0.jsonl.xz
1 1 89.3 MiB 670.9 MiB 0.133 CRC64 data/fr_legislation_train.1.jsonl.xz
1 1 3’185.5 KiB 22.6 MiB 0.138 CRC64 data/fr_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/fr_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’150.5 MiB 0.222 CRC64 data/fr_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 2’151.4 MiB 0.222 CRC64 data/fr_wikipedia_train.1.jsonl.xz
1 1 476.9 MiB 2’151.2 MiB 0.222 CRC64 data/fr_wikipedia_train.2.jsonl.xz
1 1 384.8 MiB 1’736.1 MiB 0.222 CRC64 data/fr_wikipedia_train.3.jsonl.xz
1 1 937.8 KiB 3’777.6 KiB 0.248 CRC64 data/fr_wikipedia_validation.0.jsonl.xz
1 1 721.9 KiB 5’663.9 KiB 0.127 CRC64 data/ga_caselaw_validation.0.jsonl.xz
1 1 1’246.1 KiB 15.6 MiB 0.078 CRC64 data/ga_contracts_validation.0.jsonl.xz
1 1 41.2 MiB 419.0 MiB 0.098 CRC64 data/ga_legislation_train.0.jsonl.xz
1 1 14.9 MiB 123.2 MiB 0.121 CRC64 data/ga_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/ga_other_validation.0.jsonl.xz
1 1 11.0 MiB 52.9 MiB 0.207 CRC64 data/ga_wikipedia_train.0.jsonl.xz
1 1 782.4 KiB 3’438.9 KiB 0.228 CRC64 data/ga_wikipedia_validation.0.jsonl.xz
1 1 72.7 MiB 460.3 MiB 0.158 CRC64 data/hr_caselaw_train.0.jsonl.xz
1 1 359.9 KiB 2’214.8 KiB 0.162 CRC64 data/hr_caselaw_validation.0.jsonl.xz
1 1 21.2 MiB 158.3 MiB 0.134 CRC64 data/hr_contracts_train.0.jsonl.xz
1 1 3’785.9 KiB 26.6 MiB 0.139 CRC64 data/hr_contracts_validation.0.jsonl.xz
1 1 160.6 MiB 1’258.7 MiB 0.128 CRC64 data/hr_legislation_train.0.jsonl.xz
1 1 11.2 MiB 86.1 MiB 0.130 CRC64 data/hr_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/hr_other_validation.0.jsonl.xz
1 1 110.3 MiB 425.5 MiB 0.259 CRC64 data/hr_wikipedia_train.0.jsonl.xz
1 1 1’743.8 KiB 6’170.1 KiB 0.283 CRC64 data/hr_wikipedia_validation.0.jsonl.xz
1 1 150.6 MiB 1’320.5 MiB 0.114 CRC64 data/hu_caselaw_train.0.jsonl.xz
1 1 423.8 KiB 3’496.6 KiB 0.121 CRC64 data/hu_caselaw_validation.0.jsonl.xz
1 1 26.9 MiB 266.0 MiB 0.101 CRC64 data/hu_contracts_train.0.jsonl.xz
1 1 3’532.6 KiB 36.1 MiB 0.096 CRC64 data/hu_contracts_validation.0.jsonl.xz
1 1 337.6 MiB 3’129.4 MiB 0.108 CRC64 data/hu_legislation_train.0.jsonl.xz
1 1 3’913.7 KiB 94.8 MiB 0.040 CRC64 data/hu_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/hu_other_validation.0.jsonl.xz
1 1 364.2 MiB 1’835.0 MiB 0.198 CRC64 data/hu_wikipedia_train.0.jsonl.xz
1 1 1’719.5 KiB 8’000.8 KiB 0.215 CRC64 data/hu_wikipedia_validation.0.jsonl.xz
1 1 459.8 MiB 2’742.8 MiB 0.168 CRC64 data/it_caselaw_train.0.jsonl.xz
1 1 577.8 KiB 3’194.2 KiB 0.181 CRC64 data/it_caselaw_validation.0.jsonl.xz
1 1 31.2 MiB 240.4 MiB 0.130 CRC64 data/it_contracts_train.0.jsonl.xz
1 1 3’068.9 KiB 24.0 MiB 0.125 CRC64 data/it_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’362.3 MiB 0.142 CRC64 data/it_legislation_train.0.jsonl.xz
1 1 38.9 MiB 238.7 MiB 0.163 CRC64 data/it_legislation_train.1.jsonl.xz
1 1 3’211.3 KiB 25.3 MiB 0.124 CRC64 data/it_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/it_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’864.5 MiB 0.256 CRC64 data/it_wikipedia_train.0.jsonl.xz
1 1 476.9 MiB 1’864.8 MiB 0.256 CRC64 data/it_wikipedia_train.1.jsonl.xz
1 1 184.6 MiB 726.2 MiB 0.254 CRC64 data/it_wikipedia_train.2.jsonl.xz
1 1 1’334.0 KiB 4’843.5 KiB 0.275 CRC64 data/it_wikipedia_validation.0.jsonl.xz
1 1 136.6 MiB 975.7 MiB 0.140 CRC64 data/lt_caselaw_train.0.jsonl.xz
1 1 397.0 KiB 2’660.9 KiB 0.149 CRC64 data/lt_caselaw_validation.0.jsonl.xz
1 1 24.9 MiB 211.8 MiB 0.118 CRC64 data/lt_contracts_train.0.jsonl.xz
1 1 3’275.5 KiB 26.1 MiB 0.123 CRC64 data/lt_contracts_validation.0.jsonl.xz
1 1 274.0 MiB 2’174.1 MiB 0.126 CRC64 data/lt_legislation_train.0.jsonl.xz
1 1 9’780.7 KiB 73.4 MiB 0.130 CRC64 data/lt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/lt_other_validation.0.jsonl.xz
1 1 72.6 MiB 349.5 MiB 0.208 CRC64 data/lt_wikipedia_train.0.jsonl.xz
1 1 1’251.2 KiB 5’369.5 KiB 0.233 CRC64 data/lt_wikipedia_validation.0.jsonl.xz
1 1 141.0 MiB 1’106.7 MiB 0.127 CRC64 data/lv_caselaw_train.0.jsonl.xz
1 1 410.3 KiB 3’004.0 KiB 0.137 CRC64 data/lv_caselaw_validation.0.jsonl.xz
1 1 24.9 MiB 224.5 MiB 0.111 CRC64 data/lv_contracts_train.0.jsonl.xz
1 1 3’629.0 KiB 33.6 MiB 0.106 CRC64 data/lv_contracts_validation.0.jsonl.xz
1 1 271.5 MiB 2’377.4 MiB 0.114 CRC64 data/lv_legislation_train.0.jsonl.xz
1 1 10.5 MiB 87.5 MiB 0.120 CRC64 data/lv_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/lv_other_validation.0.jsonl.xz
1 1 47.5 MiB 254.7 MiB 0.186 CRC64 data/lv_wikipedia_train.0.jsonl.xz
1 1 984.1 KiB 4’559.4 KiB 0.216 CRC64 data/lv_wikipedia_validation.0.jsonl.xz
1 1 132.2 MiB 956.6 MiB 0.138 CRC64 data/mt_caselaw_train.0.jsonl.xz
1 1 396.1 KiB 2’680.0 KiB 0.148 CRC64 data/mt_caselaw_validation.0.jsonl.xz
1 1 25.6 MiB 201.0 MiB 0.127 CRC64 data/mt_contracts_train.0.jsonl.xz
1 1 4’178.4 KiB 34.0 MiB 0.120 CRC64 data/mt_contracts_validation.0.jsonl.xz
1 1 270.7 MiB 2’121.7 MiB 0.128 CRC64 data/mt_legislation_train.0.jsonl.xz
1 1 11.4 MiB 84.2 MiB 0.135 CRC64 data/mt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/mt_other_validation.0.jsonl.xz
1 1 4’608.3 KiB 19.5 MiB 0.231 CRC64 data/mt_wikipedia_train.0.jsonl.xz
1 1 1’405.0 KiB 5’754.4 KiB 0.244 CRC64 data/mt_wikipedia_validation.0.jsonl.xz
1 1 223.1 MiB 1’338.9 MiB 0.167 CRC64 data/nl_caselaw_train.0.jsonl.xz
1 1 566.0 KiB 3’152.2 KiB 0.180 CRC64 data/nl_caselaw_validation.0.jsonl.xz
1 1 31.6 MiB 242.3 MiB 0.130 CRC64 data/nl_contracts_train.0.jsonl.xz
1 1 2’663.9 KiB 22.4 MiB 0.116 CRC64 data/nl_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 3’311.9 MiB 0.144 CRC64 data/nl_legislation_train.0.jsonl.xz
1 1 41.1 MiB 268.7 MiB 0.153 CRC64 data/nl_legislation_train.1.jsonl.xz
1 1 3’678.8 KiB 72.9 MiB 0.049 CRC64 data/nl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/nl_other_validation.0.jsonl.xz
1 1 476.9 MiB 1’856.9 MiB 0.257 CRC64 data/nl_wikipedia_train.0.jsonl.xz
1 1 59.9 MiB 236.4 MiB 0.253 CRC64 data/nl_wikipedia_train.1.jsonl.xz
1 1 979.4 KiB 3’414.8 KiB 0.287 CRC64 data/nl_wikipedia_validation.0.jsonl.xz
1 1 147.9 MiB 1’034.1 MiB 0.143 CRC64 data/pl_caselaw_train.0.jsonl.xz
1 1 416.2 KiB 2’737.2 KiB 0.152 CRC64 data/pl_caselaw_validation.0.jsonl.xz
1 1 24.8 MiB 208.9 MiB 0.119 CRC64 data/pl_contracts_train.0.jsonl.xz
1 1 4’241.9 KiB 34.6 MiB 0.120 CRC64 data/pl_contracts_validation.0.jsonl.xz
1 1 325.0 MiB 2’646.2 MiB 0.123 CRC64 data/pl_legislation_train.0.jsonl.xz
1 1 3’593.0 KiB 29.0 MiB 0.121 CRC64 data/pl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/pl_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’144.7 MiB 0.222 CRC64 data/pl_wikipedia_train.0.jsonl.xz
1 1 189.5 MiB 864.0 MiB 0.219 CRC64 data/pl_wikipedia_train.1.jsonl.xz
1 1 1’233.2 KiB 4’965.9 KiB 0.248 CRC64 data/pl_wikipedia_validation.0.jsonl.xz
1 1 476.9 MiB 3’494.2 MiB 0.136 CRC64 data/pt_caselaw_train.0.jsonl.xz
1 1 476.9 MiB 3’392.1 MiB 0.141 CRC64 data/pt_caselaw_train.10.jsonl.xz
1 1 476.9 MiB 3’505.3 MiB 0.136 CRC64 data/pt_caselaw_train.11.jsonl.xz
1 1 476.9 MiB 3’524.1 MiB 0.135 CRC64 data/pt_caselaw_train.12.jsonl.xz
1 1 476.9 MiB 3’458.4 MiB 0.138 CRC64 data/pt_caselaw_train.13.jsonl.xz
1 1 476.9 MiB 3’602.9 MiB 0.132 CRC64 data/pt_caselaw_train.14.jsonl.xz
1 1 476.9 MiB 4’923.4 MiB 0.097 CRC64 data/pt_caselaw_train.15.jsonl.xz
1 1 476.9 MiB 6’648.8 MiB 0.072 CRC64 data/pt_caselaw_train.16.jsonl.xz
1 1 476.9 MiB 7’461.0 MiB 0.064 CRC64 data/pt_caselaw_train.17.jsonl.xz
1 1 476.9 MiB 6’866.4 MiB 0.069 CRC64 data/pt_caselaw_train.18.jsonl.xz
1 1 476.9 MiB 3’455.7 MiB 0.138 CRC64 data/pt_caselaw_train.19.jsonl.xz
1 1 476.9 MiB 3’513.7 MiB 0.136 CRC64 data/pt_caselaw_train.1.jsonl.xz
1 1 476.9 MiB 3’477.3 MiB 0.137 CRC64 data/pt_caselaw_train.20.jsonl.xz
1 1 476.9 MiB 3’492.8 MiB 0.137 CRC64 data/pt_caselaw_train.21.jsonl.xz
1 1 476.9 MiB 3’528.6 MiB 0.135 CRC64 data/pt_caselaw_train.22.jsonl.xz
1 1 94.1 MiB 694.3 MiB 0.135 CRC64 data/pt_caselaw_train.23.jsonl.xz
1 1 476.9 MiB 3’436.5 MiB 0.139 CRC64 data/pt_caselaw_train.2.jsonl.xz
1 1 476.9 MiB 3’527.9 MiB 0.135 CRC64 data/pt_caselaw_train.3.jsonl.xz
1 1 476.9 MiB 3’492.2 MiB 0.137 CRC64 data/pt_caselaw_train.4.jsonl.xz
1 1 476.9 MiB 3’554.8 MiB 0.134 CRC64 data/pt_caselaw_train.5.jsonl.xz
1 1 476.9 MiB 3’494.7 MiB 0.136 CRC64 data/pt_caselaw_train.6.jsonl.xz
1 1 476.9 MiB 3’439.1 MiB 0.139 CRC64 data/pt_caselaw_train.7.jsonl.xz
1 1 476.9 MiB 3’625.6 MiB 0.132 CRC64 data/pt_caselaw_train.8.jsonl.xz
1 1 476.9 MiB 3’726.4 MiB 0.128 CRC64 data/pt_caselaw_train.9.jsonl.xz
1 1 798.9 KiB 4’820.6 KiB 0.166 CRC64 data/pt_caselaw_validation.0.jsonl.xz
1 1 28.4 MiB 243.2 MiB 0.117 CRC64 data/pt_contracts_train.0.jsonl.xz
1 1 3’899.7 KiB 32.6 MiB 0.117 CRC64 data/pt_contracts_validation.0.jsonl.xz
1 1 406.2 MiB 3’217.5 MiB 0.126 CRC64 data/pt_legislation_train.0.jsonl.xz
1 1 8’350.4 KiB 58.4 MiB 0.140 CRC64 data/pt_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/pt_other_validation.0.jsonl.xz
1 1 476.9 MiB 2’050.4 MiB 0.233 CRC64 data/pt_wikipedia_train.0.jsonl.xz
1 1 140.6 MiB 617.4 MiB 0.228 CRC64 data/pt_wikipedia_train.1.jsonl.xz
1 1 1’480.0 KiB 6’344.8 KiB 0.233 CRC64 data/pt_wikipedia_validation.0.jsonl.xz
1 1 124.9 MiB 956.9 MiB 0.131 CRC64 data/ro_caselaw_train.0.jsonl.xz
1 1 400.4 KiB 2’785.0 KiB 0.144 CRC64 data/ro_caselaw_validation.0.jsonl.xz
1 1 24.6 MiB 210.5 MiB 0.117 CRC64 data/ro_contracts_train.0.jsonl.xz
1 1 3’886.3 KiB 34.3 MiB 0.111 CRC64 data/ro_contracts_validation.0.jsonl.xz
1 1 476.9 MiB 4’496.4 MiB 0.106 CRC64 data/ro_legislation_train.0.jsonl.xz
1 1 97.6 MiB 1’053.6 MiB 0.093 CRC64 data/ro_legislation_train.1.jsonl.xz
1 1 3’691.3 KiB 33.4 MiB 0.108 CRC64 data/ro_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/ro_other_validation.0.jsonl.xz
1 1 179.7 MiB 833.0 MiB 0.216 CRC64 data/ro_wikipedia_train.0.jsonl.xz
1 1 2’089.4 KiB 9’053.5 KiB 0.231 CRC64 data/ro_wikipedia_validation.0.jsonl.xz
1 1 143.6 MiB 1’094.2 MiB 0.131 CRC64 data/sk_caselaw_train.0.jsonl.xz
1 1 415.8 KiB 3’012.4 KiB 0.138 CRC64 data/sk_caselaw_validation.0.jsonl.xz
1 1 25.9 MiB 226.7 MiB 0.114 CRC64 data/sk_contracts_train.0.jsonl.xz
1 1 3’933.6 KiB 35.2 MiB 0.109 CRC64 data/sk_contracts_validation.0.jsonl.xz
1 1 322.4 MiB 2’745.5 MiB 0.117 CRC64 data/sk_legislation_train.0.jsonl.xz
1 1 3’735.8 KiB 31.7 MiB 0.115 CRC64 data/sk_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sk_other_validation.0.jsonl.xz
1 1 91.2 MiB 435.3 MiB 0.210 CRC64 data/sk_wikipedia_train.0.jsonl.xz
1 1 1’724.4 KiB 7’568.3 KiB 0.228 CRC64 data/sk_wikipedia_validation.0.jsonl.xz
1 1 131.9 MiB 815.8 MiB 0.162 CRC64 data/sl_caselaw_train.0.jsonl.xz
1 1 392.8 KiB 2’328.2 KiB 0.169 CRC64 data/sl_caselaw_validation.0.jsonl.xz
1 1 22.9 MiB 172.4 MiB 0.133 CRC64 data/sl_contracts_train.0.jsonl.xz
1 1 3’493.7 KiB 27.2 MiB 0.125 CRC64 data/sl_contracts_validation.0.jsonl.xz
1 1 388.1 MiB 2’732.3 MiB 0.142 CRC64 data/sl_legislation_train.0.jsonl.xz
1 1 3’429.8 KiB 24.3 MiB 0.138 CRC64 data/sl_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sl_other_validation.0.jsonl.xz
1 1 104.6 MiB 425.6 MiB 0.246 CRC64 data/sl_wikipedia_train.0.jsonl.xz
1 1 1’392.8 KiB 5’004.9 KiB 0.278 CRC64 data/sl_wikipedia_validation.0.jsonl.xz
1 1 189.5 MiB 1’325.4 MiB 0.143 CRC64 data/sv_caselaw_train.0.jsonl.xz
1 1 581.2 KiB 3’566.7 KiB 0.163 CRC64 data/sv_caselaw_validation.0.jsonl.xz
1 1 25.3 MiB 211.7 MiB 0.119 CRC64 data/sv_contracts_train.0.jsonl.xz
1 1 2’890.6 KiB 26.0 MiB 0.108 CRC64 data/sv_contracts_validation.0.jsonl.xz
1 1 324.5 MiB 2’570.4 MiB 0.126 CRC64 data/sv_legislation_train.0.jsonl.xz
1 1 6’984.8 KiB 50.1 MiB 0.136 CRC64 data/sv_legislation_validation.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/sv_other_validation.0.jsonl.xz
1 1 333.4 MiB 1’668.1 MiB 0.200 CRC64 data/sv_wikipedia_train.0.jsonl.xz
1 1 1’088.6 KiB 4’372.9 KiB 0.249 CRC64 data/sv_wikipedia_validation.0.jsonl.xz
-------------------------------------------------------------------------------
374 351 90.1 GiB 579.9 GiB 0.155 CRC64 374 files
```
## Dataset Creation
This dataset has been created by combining the following datasets:
Native Multi Legal Pile, Eurlex Resources, MC4 Legal, Pile of Law, EU Wikipedias.
It has been filtered to remove short documents (fewer than 64 whitespace-separated tokens) and
documents with more than 30% punctuation or numbers (see prepare_legal_data.py for details).
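A minimal sketch of such a filter (hypothetical helper name; the exact logic lives in prepare_legal_data.py and may differ):

```python
def keep_document(text: str, min_tokens: int = 64, max_special_ratio: float = 0.30) -> bool:
    """Return True if a document passes the length and character-ratio filters."""
    tokens = text.split()
    if len(tokens) < min_tokens:
        return False  # too short
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    # fraction of punctuation and digit characters among non-whitespace characters
    special = sum(1 for c in chars if c.isdigit() or not c.isalnum())
    return special / len(chars) <= max_special_ratio
```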
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| [
-0.8353033661842346,
-0.3468284010887146,
0.2072868049144745,
0.22319424152374268,
-0.24256718158721924,
0.06932395696640015,
-0.11628355830907822,
-0.1257791966199875,
0.7333142161369324,
0.7398194670677185,
-0.36014047265052795,
-0.6959396004676819,
-0.6047418713569641,
-0.11195028573274... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LFBMS/class_dataset_donut2 | LFBMS | 2023-02-12T16:25:06Z | 26 | 0 | null | [
"region:us"
] | 2023-02-12T16:25:06Z | 2023-02-12T16:22:15.000Z | 2023-02-12T16:22:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_datev
'1': bilanz_lexware
'2': guv
'3': other
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 559064953.0
num_examples: 500
- name: train
num_bytes: 4343890380.0
num_examples: 4000
- name: validation
num_bytes: 548645901.0
num_examples: 500
download_size: 5424719748
dataset_size: 5451601234.0
---
# Dataset Card for "class_dataset_donut2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2374121993780136,
-0.2484021633863449,
0.13905096054077148,
-0.014742565341293812,
0.033590346574783325,
0.21307151019573212,
0.19073614478111267,
-0.00902680866420269,
0.6557952165603638,
0.4086958169937134,
-0.5873467326164246,
-0.5534753799438477,
-0.6851805448532104,
-0.363216698169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TobiTob/CityLearn | TobiTob | 2023-06-27T11:14:53Z | 26 | 1 | null | [
"region:us"
] | 2023-06-27T11:14:53Z | 2023-02-16T12:16:52.000Z | 2023-02-16T12:16:52 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset CityLearn
This dataset is used to train a Decision Transformer for the CityLearn 2022 environment: https://www.aicrowd.com/challenges/neurips-2022-citylearn-challenge.
You can load data from this dataset via:
`datasets.load_dataset('TobiTob/CityLearn', 'data_name')`
A short description of all data sets can be found in the file CityLearn.py.
-0.4948835074901581,
-0.029667776077985764,
-0.02921891212463379,
0.08820661157369614,
-0.04339459910988808,
0.11271954327821732,
0.3488984704017639,
-0.09543729573488235,
-0.14382438361644745,
0.8236443996429443,
-0.7916699647903442,
-0.3767281472682953,
-0.2720078229904175,
0.08887002617... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thewall/jolma_unique | thewall | 2023-03-23T09:44:58Z | 26 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-03-23T09:44:58Z | 2023-03-11T07:07:53.000Z | 2023-03-11T07:07:53 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/bank | mstz | 2023-04-15T11:16:43Z | 26 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"compas",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-15T11:16:43Z | 2023-03-23T00:56:08.000Z | 2023-03-23T00:56:08 | ---
language:
- en
tags:
- compas
- tabular_classification
- binary_classification
- UCI
pretty_name: Bank
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- encoding
- subscription
---
# Bank
The [Bank dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Potential clients are contacted by a bank during a second advertisement campaign.
This dataset records the customer, their interaction with the ad campaign, and whether they subscribed to the proposed bank plan.
# Configurations and tasks
| **Configuration** | **Task**                  | **Description**                                                 |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| subscription | Binary classification | Has the customer subscribed to a bank plan? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/bank", "subscription")["train"]
```
# Features
| **Name** |**Type** |
|-----------------------------------------------|-----------|
|`age` |`int64` |
|`job` |`string` |
|`marital_status` |`string` |
|`education` |`int8` |
|`has_defaulted` |`int8` |
|`account_balance` |`int64` |
|`has_housing_loan` |`int8` |
|`has_personal_loan` |`int8` |
|`month_of_last_contact` |`string` |
|`number_of_calls_in_ad_campaign` |`string` |
|`days_since_last_contact_of_previous_campaign` |`int16` |
|`number_of_calls_before_this_campaign` |`int16` |
|`successfull_subscription` |`int8` | | [
-0.36679959297180176,
-0.4459039866924286,
0.0869087353348732,
0.12364338338375092,
0.06957896798849106,
-0.21256844699382782,
0.016630543395876884,
-0.18233363330364227,
0.07836629450321198,
0.9845700860023499,
-0.7541525959968567,
-0.8130478858947754,
-0.4929313659667969,
-0.120456770062... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/alpaca_data_cleaned | argilla | 2023-03-30T22:09:39Z | 26 | 1 | null | [
"region:us"
] | 2023-03-30T22:09:39Z | 2023-03-30T22:08:20.000Z | 2023-03-30T22:08:20 | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: _instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 975104502
num_examples: 51713
download_size: 679574648
dataset_size: 975104502
---
# Dataset Card for "alpaca_data_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7024192810058594,
-0.4956094026565552,
0.11191605031490326,
0.07095365226268768,
-0.4370412528514862,
-0.14205819368362427,
0.33158549666404724,
-0.3588125705718994,
1.1097828149795532,
0.7474632859230042,
-0.8751496076583862,
-0.7927899360656738,
-0.5702518224716187,
-0.234298259019851... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sklearn-docs/digits | sklearn-docs | 2023-04-06T19:05:28Z | 26 | 0 | null | [
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
] | 2023-04-06T19:05:28Z | 2023-04-01T14:09:07.000Z | 2023-04-01T14:09:07 | ---
license: cc0-1.0
size_categories:
- 1K<n<10K
---
# Dataset Card for digits dataset
Optical recognition of handwritten digits dataset
## Dataset Description
- **Homepage:** https://scikit-learn.org/stable/datasets/toy_dataset.html#digits-dataset
## Note - How to load this dataset directly with the datasets library
```
from datasets import load_dataset
dataset = load_dataset("sklearn-docs/digits",header=None)
```
### Dataset Summary
This is a copy of the test set of the UCI ML hand-written digits datasets https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
The data set contains images of hand-written digits: 10 classes where each class refers to a digit.
Preprocessing programs made available by NIST were used to extract normalized bitmaps of handwritten digits from a preprinted form. From a total of 43 people, 30 contributed to the training set and a different 13 to the test set. The 32x32 bitmaps are divided into nonoverlapping blocks of 4x4, and the number of on pixels is counted in each block. This generates an input matrix of 8x8 where each element is an integer in the range 0..16. This reduces dimensionality and gives invariance to small distortions.
For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G. T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C. L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469, 1994.
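The block-counting step above can be sketched as follows (a hypothetical helper, assuming the 32x32 bitmap is given as 32 rows of 0/1 values; the actual NIST routines are more involved):

```python
def downsample_bitmap(bitmap):
    """Reduce a 32x32 binary bitmap to an 8x8 matrix of on-pixel counts (0..16)."""
    out = []
    for bi in range(8):
        row = []
        for bj in range(8):
            # count the on pixels in the 4x4 block starting at (4*bi, 4*bj)
            row.append(sum(bitmap[4 * bi + i][4 * bj + j]
                           for i in range(4) for j in range(4)))
        out.append(row)
    return out
```

Each of the 64 resulting values then becomes one attribute of the dataset.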
### Data Instances
- **Number of Instances:** 1797
- **Number of Attributes:** 64
- **Attribute Information:** 8x8 image of integer pixels in the range 0..16.
- **Missing Attribute Values:** None
- **Creator:** E. Alpaydin (alpaydin ‘@’ boun.edu.tr)
- **Date:** July 1998
### Citation Information
References
C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their Applications to Handwritten Digit Recognition, MSc Thesis, Institute of Graduate Studies in Science and Engineering, Bogazici University.
E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin. Linear dimensionality reduction using relevance weighted LDA. School of Electrical and Electronic Engineering, Nanyang Technological University. 2005.
Claudio Gentile. A New Approximate Maximal Margin Classification Algorithm. NIPS. 2000.
| [
-0.3384701907634735,
-0.037248436361551285,
0.35353848338127136,
-0.033372242003679276,
-0.42550915479660034,
0.0915779396891594,
0.06012193113565445,
-0.4596155881881714,
0.14207038283348083,
0.4322032630443573,
-0.4572645127773285,
-0.4024772047996521,
-0.4752788245677948,
0.218687966465... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/mfag_vi | nlplabtdtu | 2023-04-05T16:19:13Z | 26 | 0 | null | [
"region:us"
] | 2023-04-05T16:19:13Z | 2023-04-05T16:18:27.000Z | 2023-04-05T16:18:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nanakonoda/xnli_cm_sample | nanakonoda | 2023-05-01T22:13:21Z | 26 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|xnli",
"language:en",
"language:de",
"language:fr",
"mode classification",
"aligned",
"code-mixed",
... | 2023-05-01T22:13:21Z | 2023-04-14T05:49:35.000Z | 2023-04-14T05:49:35 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Code-Mixed Corpus (Sampled)
size_categories:
- 1M<n<10M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
- code-mixed
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: monolingual
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 641496
num_examples: 5007
download_size: 891209
dataset_size: 958660
- config_name: de_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1136549
num_examples: 14543
download_size: 1298619
dataset_size: 1453713
- config_name: de_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1068937
num_examples: 12750
download_size: 1248962
dataset_size: 1386101
- config_name: fr_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1520429
num_examples: 18653
download_size: 1644995
dataset_size: 1837593
- config_name: fr_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1544539
num_examples: 17381
download_size: 1682885
dataset_size: 1861703
download_size: 891209
dataset_size: 958660
---
# Dataset Card for XNLI Code-Mixed Corpus (Sampled)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- French-English code-mixed by Matrix Language Theory
## Dataset Structure
### Data Instances
{
'text': "And he said , Mama , I 'm home",
'label': 0
}
### Data Fields
- text: sentence
- label: binary label of text (0: spoken, 1: written)
### Data Splits
- monolingual
- train (English, German, French monolingual): 2490
- test (English, German, French monolingual): 5007
- de_ec
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Equivalence Constraint Theory): 14543
- de_ml
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Matrix Language Theory): 12750
- fr_ec
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Equivalence Constraint Theory): 18653
- fr_ml
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Matrix Language Theory): 17381
### Other Statistics
#### Average Sentence Length
- monolingual
- train: 19.18714859437751
- test: 19.321150389454765
- de_ec
- train: 19.18714859437751
- test: 11.24314103004882
- de_ml
- train: 19.18714859437751
- test: 12.159450980392156
- fr_ec
- train: 19.18714859437751
- test: 12.26526564091567
- fr_ml
- train: 19.18714859437751
- test: 13.486968528853346
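The averages above can be reproduced with simple whitespace tokenization (an assumption on my part; the card does not state which tokenizer was used to compute them):

```python
def avg_sentence_length(texts):
    """Mean number of whitespace-separated tokens per sentence."""
    return sum(len(t.split()) for t in texts) / len(texts)

# The first sentence is the data instance shown above (9 tokens);
# the second is a made-up 3-token sentence.
sample = ["And he said , Mama , I 'm home", "the cat sat"]
assert avg_sentence_length(sample) == 6.0
```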
#### Label Split
- monolingual
- train
- 0: 498
- 1: 1992
- test
- 0: 1002
- 1: 4005
- de_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 2777
- 1: 11766
- de_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2329
- 1: 10421
- fr_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 3322
- 1: 15331
- fr_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2788
- 1: 14593
## Dataset Creation
### Curation Rationale
Using the XNLI Parallel Corpus, we generated a code-mixed corpus using CodeMixed Text Generator, and sampled a maximum of 30 sentences per original English sentence.
The XNLI Parallel Corpus is available here:
https://huggingface.co/datasets/nanakonoda/xnli_parallel
It was created from the XNLI corpus.
More information is available in the datacard for the XNLI Parallel Corpus.
Here is the link and citation for the original CodeMixed Text Generator paper.
https://github.com/microsoft/CodeMixed-Text-Generator
```
@inproceedings{rizvi-etal-2021-gcm,
title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
author = "Rizvi, Mohd Sanad Zaki and
Srinivasan, Anirudh and
Ganu, Tanuja and
Choudhury, Monojit and
Sitaram, Sunayana",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.24",
pages = "205--211",
abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```
### Source Data
XNLI Code-Mixed Corpus
https://huggingface.co/datasets/nanakonoda/xnli_cm
XNLI Parallel Corpus
https://huggingface.co/datasets/nanakonoda/xnli_parallel
#### Original Source Data
XNLI Parallel Corpus was created using the XNLI Corpus.
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
We removed all punctuation from the XNLI Parallel Corpus except apostrophes.
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A | [
-0.4217049777507782,
-0.4496506154537201,
0.005214304197579622,
0.43975383043289185,
-0.13757357001304626,
0.3357814848423004,
-0.6446220874786377,
-0.46683427691459656,
0.6075382828712463,
0.25115668773651123,
-0.4918833374977112,
-0.759037971496582,
-0.34393230080604553,
0.35630634427070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/segment | mstz | 2023-04-14T10:25:43Z | 26 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-14T10:25:43Z | 2023-04-14T10:21:45.000Z | 2023-04-14T10:21:45 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zetavg/coct-en-zh-tw-translations-twp-300k | zetavg | 2023-05-07T05:05:22Z | 26 | 11 | null | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"language:en",
"region:us"
] | 2023-05-07T05:05:22Z | 2023-05-07T04:09:52.000Z | 2023-05-07T04:09:52 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ch
dtype: string
splits:
- name: train
num_bytes: 103139635
num_examples: 310916
download_size: 75689895
dataset_size: 103139635
task_categories:
- translation
- text-generation
language:
- zh
- en
pretty_name: ~300K English ↔ Traditional Chinese Sentences from the COCT Database
size_categories:
- 100K<n<1M
---
# ~300K English ↔ Traditional Chinese Sentences from the COCT Database
The data in this dataset are collected from the Corpus of Contemporary Taiwanese Mandarin (COCT), mostly contributed by the [Taiwan Panorama](https://www.taiwan-panorama.com/) magazine. | [
-0.25276705622673035,
-0.9581892490386963,
0.11423276364803314,
0.0663149505853653,
-0.17812202870845795,
0.033164605498313904,
-0.3106919527053833,
-0.5071777105331421,
0.3496274948120117,
0.7247048616409302,
-0.7836677432060242,
-0.28010985255241394,
0.25035083293914795,
0.47438514232635... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/perimeter-sp500 | edarchimbaud | 2023-11-21T15:00:04Z | 26 | 2 | null | [
"task_categories:tabular-classification",
"language:en",
"license:mit",
"region:us"
] | 2023-11-21T15:00:04Z | 2023-05-14T21:03:49.000Z | 2023-05-14T21:03:49 | ---
language:
- en
license: mit
task_categories:
- tabular-classification
dataset_info:
features:
- name: symbol
dtype: string
- name: security
dtype: string
- name: gics_sector
dtype: string
- name: gics_sub_industry
dtype: string
splits:
- name: train
num_bytes: 35469
num_examples: 503
download_size: 0
dataset_size: 35469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "index-constituents-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The index-constituents-sp500 dataset provides information about the constituents of the S&P 500 index. It contains several features that describe each constituent company.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- security (string): A string specifying the name or title of the security.
- gics_sector (string): A string indicating the Global Industry Classification Standard (GICS) sector to which the company belongs. GICS is a widely used classification system for categorizing companies based on their primary business activities.
- gics_sub_industry (string): A string specifying the GICS sub-industry of the company, which provides further granularity within the sector classification.
- headquarters_location (string): A string representing the location of the company's headquarters.
- date_added (string): A string indicating the date when the company was added to the S&P 500 index.
- cik (string): A string representing the Central Index Key (CIK) assigned to the company by the United States Securities and Exchange Commission (SEC). The CIK is a unique identifier used for regulatory filings.
- founded (string): A string indicating the year or date of the company's founding.
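As a sketch of how the fields fit together, here is a hypothetical in-memory filter by GICS sector (the rows below are illustrative examples, not records drawn from the dataset):

```python
rows = [
    {"symbol": "AAPL", "security": "Apple Inc.",
     "gics_sector": "Information Technology",
     "gics_sub_industry": "Technology Hardware, Storage & Peripherals"},
    {"symbol": "JNJ", "security": "Johnson & Johnson",
     "gics_sector": "Health Care",
     "gics_sub_industry": "Pharmaceuticals"},
]

# Select the ticker symbols of all constituents in a given GICS sector.
tech = [r["symbol"] for r in rows if r["gics_sector"] == "Information Technology"]
assert tech == ["AAPL"]
```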
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The index-constituents-sp500 dataset was developed to support the development of low-frequency trading algorithms.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from the web, and aggregated.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The index-constituents-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The index-constituents-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, index-constituents-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.5817974209785461,
-0.370148241519928,
0.03552142530679703,
0.15535037219524384,
-0.10969775170087814,
0.2985178530216217,
-0.10010498017072678,
-0.17217589914798737,
0.8243984580039978,
0.4331996440887451,
-1.0506592988967896,
-0.9094070792198181,
-0.4060954451560974,
0.0986673980951309... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adsabs/FOCAL | adsabs | 2023-10-18T19:15:03Z | 26 | 1 | null | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"astronomy",
"region:us"
] | 2023-10-18T19:15:03Z | 2023-05-17T19:09:34.000Z | 2023-05-17T19:09:34 | ---
annotations_creators:
- expert-generated
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
tags:
- astronomy
dataset_info:
features:
- name: Identifier
dtype: string
- name: Paragraph
dtype: string
- name: Citation Text
sequence: string
- name: Functions Text
sequence: string
- name: Functions Label
sequence: string
- name: Citation Start End
sequence:
sequence: int64
- name: Functions Start End
sequence:
sequence: int64
splits:
- name: train
num_bytes: 7096500
num_examples: 2421
- name: validation
num_bytes: 1761751
num_examples: 606
- name: test
num_bytes: 2512022
num_examples: 821
download_size: 5649484
dataset_size: 11370273
---
# Function Of Citation in Astrophysics Literature (FOCAL): Dataset and Task
*Can you explain why the authors made a given citation?*
This dataset was created as a [shared task](https://ui.adsabs.harvard.edu/WIESP/2023/shared_task_1) for [WIESP @ AACL-IJCNLP 2023](https://ui.adsabs.harvard.edu/WIESP/2023/).
## Dataset Description
Datasets are in JSON Lines format (each line is a json dictionary).
Each entry consists of a dictionary with the following keys:
- `"Identifier"`: unique string to identify the entry
- `"Paragraph"`: text string from an astrophysics paper
- `"Citation Text"`: list of strings forming the citation (most often a single string, but sometimes the citation text is split up)
- `"Citation Start End"`: list of integer pairs denoting where the citation starts and end in `"Paragraph"` (most often a single pair, sometimes the citation text is split up, if so follows the order in `"Citation Text"`)
- `"Functions Text"`: list of strings highlighting parts of the paragraph that explain the function of the citation
- `"Functions Label"`: list of strings with the label for each text element in `"Functions Text"` (in same order)
- `"Functions Start End"`: list of integer pairs denoting where the elements in `"Functions Text"` start and end in `"Paragraph"`(in same order)
Start and end are defined by character positions in the `"Paragraph"` string.
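The offset fields can be checked directly against the paragraph text. Below is a minimal, hand-made entry following the schema (not taken from the dataset; the label string is illustrative) with the round-trip check:

```python
entry = {
    "Identifier": "example-0001",
    "Paragraph": "We adopt the model of Smith et al. (2020) because it fits the data.",
    "Citation Text": ["Smith et al. (2020)"],
    "Citation Start End": [[22, 41]],
    "Functions Text": ["because it fits the data"],
    "Functions Label": ["Motivation"],
    "Functions Start End": [[42, 66]],
}

# Every highlighted span must match the slice of the paragraph it points to.
for text, (start, end) in zip(entry["Citation Text"], entry["Citation Start End"]):
    assert entry["Paragraph"][start:end] == text
for text, (start, end) in zip(entry["Functions Text"], entry["Functions Start End"]):
    assert entry["Paragraph"][start:end] == text
```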
## Instructions for Workshop Participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/FOCAL")
```
How to load the data if you cloned the repository locally:
(assuming `./FOCAL-TRAINING.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./FOCAL-TRAINING.jsonl", 'r') as f:
focal_training_from_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
focal_training_from_json = Dataset.from_json(path_or_paths="./FOCAL-TRAINING.jsonl")
```
## File List
```
├── FOCAL-TRAINING.jsonl (2421 samples for training)
├── FOCAL-VALIDATION.jsonl (606 samples for validating your training methods)
├── FOCAL-TESTING.jsonl (821 samples for testing)
├── FOCAL-VALIDATION-NO-LABELS.jsonl (606 samples for validation without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── FOCAL-TESTING-NO-LABELS.jsonl (821 samples for testing without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_seqeval.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_labels_only.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /data/*.parquet (files used when loading the dataset through Huggingface's API)
├── README.MD (this file)
└──
```
Maintainer: Felix Grezes (ORCID: 0000-0001-8714-7774)
Data annotator: Tom Allen (ORCID: 0000-0002-5532-4809) | [
-0.6691468954086304,
-0.576462984085083,
0.3435211479663849,
0.4006268084049225,
0.026348499581217766,
-0.45531147718429565,
-0.12234445661306381,
-0.5755860805511475,
0.3057323396205902,
0.3167668581008911,
-0.5377025008201599,
-0.452134370803833,
-0.5235774517059326,
0.2699032723903656,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShuaKang/calvin_abc_d | ShuaKang | 2023-05-26T15:16:56Z | 26 | 0 | null | [
"region:us"
] | 2023-05-26T15:16:56Z | 2023-05-26T14:49:47.000Z | 2023-05-26T14:49:47 | ---
dataset_info:
features:
- name: goal_image
dtype: image
- name: obs_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1548380473.5
num_examples: 17870
download_size: 1547702724
dataset_size: 1548380473.5
---
# Dataset Card for "calvin_abc_d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47992342710494995,
-0.3499959409236908,
0.2012760043144226,
0.3548780381679535,
-0.20062603056430817,
0.09099503606557846,
0.1817903220653534,
-0.3358728885650635,
0.9327250719070435,
0.42480432987213135,
-0.9154248237609863,
-0.8991711139678955,
-0.6333520412445068,
-0.1000368595123291... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kamtera/ParsiGoo | Kamtera | 2023-06-11T09:21:29Z | 26 | 1 | null | [
"task_categories:text-to-speech",
"task_categories:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:fa",
"license:cc0-1.0",
"region:us"
] | 2023-06-11T09:21:29Z | 2023-06-03T18:05:09.000Z | 2023-06-03T18:05:09 | ---
license:
- cc0-1.0
description: A Persian multispeaker dataset for text-to-speech purposes.
homepage: https://example.com/parsigoo
keywords:
- text-to-speech
- Persian
- multispeaker
language: fa
multilinguality: monolingual
name: parsi_goo
pretty_name: ParsiGoo
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-to-speech
- other
task_ids: []
---
# ParsiGoo Dataset Card
This is a Persian multispeaker dataset for text-to-speech purposes. The dataset includes the following speakers:
- ariana_Male2
- moujeze_Female1
- ariana_Male1
- ariana_Female1
## Technical details
#### Non-speech parts at the beginning and end are trimmed
#### Sample rate: 22050
#### Durations:
```
|> ariana_Male2 0:46:36.908685
|> edge_Dilara 0:54:31.448820
|> moujeze_Female1 0:29:24.339590
|> ariana_Male1 0:55:41.996847
|> ariana_Female1 0:53:38.396217
|> edge_Farid 0:53:11.961018
```
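Summing the per-speaker durations above (parsed as `H:MM:SS.ffffff` strings) gives the total audio length, a quick sanity check one might run on the corpus:

```python
from datetime import timedelta

def parse_duration(s):
    """Parse an 'H:MM:SS.ffffff' duration string into a timedelta."""
    hours, minutes, seconds = s.split(":")
    return timedelta(hours=int(hours), minutes=int(minutes), seconds=float(seconds))

durations = [
    "0:46:36.908685",  # ariana_Male2
    "0:54:31.448820",  # edge_Dilara
    "0:29:24.339590",  # moujeze_Female1
    "0:55:41.996847",  # ariana_Male1
    "0:53:38.396217",  # ariana_Female1
    "0:53:11.961018",  # edge_Farid
]
total = sum((parse_duration(d) for d in durations), timedelta())
# Roughly 4 hours 53 minutes of audio in total.
assert timedelta(hours=4, minutes=50) < total < timedelta(hours=5)
```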
## Dataset Information
- **Name:** ParsiGoo
- **Description:** A Persian multispeaker dataset for text-to-speech purposes.
- **Homepage:** https://github.com/karim23657/ParsGoo
- **License:** CC BY-SA 4.0
## Speaker info
- ariana_Male2
- moujeze_Female1
- ariana_Male1
- ariana_Female1
| [
-0.5372516512870789,
-0.5633143782615662,
0.4226202368736267,
0.362968385219574,
-0.4030478596687317,
-0.028878429904580116,
-0.4862578511238098,
0.04439734295010567,
0.47790443897247314,
0.7086853384971619,
-0.8874719142913818,
-0.7493371367454529,
-0.6285843849182129,
-0.0910211279988288... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/nq-pl-qrels | clarin-knext | 2023-06-07T08:23:58Z | 26 | 0 | null | [
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:23:58Z | 2023-06-06T17:45:32.000Z | 2023-06-06T17:45:32 | ---
license: cc-by-sa-4.0
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209915816783905,
-0.9029768109321594,
0.5094643235206604,
0.2354193478822708,
-0.3185211718082428,
-0.1491904854774475,
-0.16673950850963593,
-0.4962919354438782,
-0.018960798159241676,
0.4112257659435272,
-0.5503100752830505,
-0.691356897354126,
-0.4166182279586792,
-0.048304602503776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eastwind/semeval-2016-absa-reviews-english-translated-stanford-alpaca | eastwind | 2023-06-09T11:08:27Z | 26 | 1 | null | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2023-06-09T11:08:27Z | 2023-06-09T11:05:14.000Z | 2023-06-09T11:05:14 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
- question-answering
- text2text-generation
language:
- en
pretty_name: >-
SemEval 2016 Hotel Aspect Based Sentiment Analysis translated and alpaca
format for LLM training
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
Derived from eastwind/semeval-2016-absa-reviews-arabic using Helsinki-NLP/opus-mt-tc-big-ar-en | [
-0.5427840948104858,
-0.7312263250350952,
-0.16740046441555023,
0.06980495899915695,
-1.0208226442337036,
-0.05593222752213478,
-0.026685403659939766,
-0.3522794544696808,
0.7136684656143188,
0.48774510622024536,
-0.8754356503486633,
-1.1864628791809082,
-0.545477569103241,
0.3050197362899... | null | null | null | null | null | null | null | null | null | null | null | null | null |