| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
nympshee/ZatsuRVC | 2023-09-27T00:11:45.000Z | [
"region:us"
] | nympshee | null | null | null | 0 | 0 | Entry not found |
LIKirin/klonetai-prompts | 2023-10-03T13:11:26.000Z | [
"license:mit",
"region:us"
] | LIKirin | null | null | null | 0 | 0 | ---
license: mit
---
|
Monkaro/Woman-Regularisation | 2023-09-29T13:32:48.000Z | [
"license:unknown",
"region:us"
] | Monkaro | null | null | null | 0 | 0 | ---
license: unknown
---
|
shengs/LLaVA-SFT-122K | 2023-09-27T01:17:31.000Z | [
"region:us"
] | shengs | null | null | null | 1 | 0 | Entry not found |
alagaesia/auto-sql-create-context | 2023-09-27T16:40:17.000Z | [
"license:agpl-3.0",
"region:us"
] | alagaesia | null | null | null | 0 | 0 | ---
license: agpl-3.0
---
|
Monkaro/Man-Regularisation | 2023-09-27T02:24:03.000Z | [
"license:unknown",
"region:us"
] | Monkaro | null | null | null | 0 | 0 | ---
license: unknown
---
|
p208p2002/csl-1.8G | 2023-09-27T04:28:27.000Z | [
"language:zh",
"region:us"
] | p208p2002 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: csl.jsonl
language:
- zh
---
# CSL Chinese Scientific Paper Abstract Dataset
Data source: https://github.com/ydli-ai/CSL |
mesolitica/semisupervised-malaysian-youtube-whisper-large-v2 | 2023-09-27T08:59:14.000Z | [
"region:us"
] | mesolitica | null | null | null | 0 | 0 | Entry not found |
DebasishDhal99/exonyms-for-lithuanian-places | 2023-09-27T03:04:03.000Z | [
"region:us"
] | DebasishDhal99 | null | null | null | 0 | 0 | Entry not found |
yzhuang/autotree_automl_MiniBooNE_gosdt_l256_d3_sd0 | 2023-09-27T03:13:59.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 11580000000
num_examples: 100000
- name: validation
num_bytes: 1158000000
num_examples: 10000
download_size: 11596980285
dataset_size: 12738000000
---
# Dataset Card for "autotree_automl_MiniBooNE_gosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adeaven/dn_dataset | 2023-09-27T03:34:01.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_ids:image-captioning",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:COYO-700M",
"language:en",
"license:ms-pl",
... | adeaven | null | null | null | 0 | 0 | ---
license: ms-pl
language:
- en
multilinguality:
- monolingual
pretty_name: GRIT
size_categories:
- 100M<n<1B
source_datasets:
- COYO-700M
tags:
- image-text-bounding-box pairs
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- object-detection
- zero-shot-classification
task_ids:
- image-captioning
--- |
orlando1021/juhe_v1 | 2023-09-27T03:37:43.000Z | [
"license:bigscience-openrail-m",
"region:us"
] | orlando1021 | null | null | null | 0 | 0 | ---
license: bigscience-openrail-m
---
|
sxandie/data-full-df-sep23-xlmrobbase | 2023-10-05T19:33:46.000Z | [
"task_categories:token-classification",
"region:us"
] | sxandie | null | null | null | 0 | 0 | ---
task_categories:
- token-classification
---
# AutoTrain Dataset for project: full-dfsep23-xlmrobbase
## Dataset Description
This dataset has been automatically processed by AutoTrain for project full-dfsep23-xlmrobbase.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0.1": 0,
"feat_Unnamed: 0": 0,
"tokens": [
"terms",
"fm",
"door",
"Quinto",
"Di",
"Treviso",
"to",
"HKG",
"/",
"Tablo/",
"2",
"Plts",
"/",
"348",
"Kgs/",
"3.84",
"Cbm",
"/",
"Cargo",
"ready:",
"6",
"Jun",
"Ciao",
"Ale",
";",
"120*80*200",
"-",
"348",
"kgs.",
"Totali",
";",
"pick",
"up",
"address:",
";",
"Viale",
"dell'Industria,",
"26",
";",
"310",
"55",
"QUINTO",
"DI",
"TREVISO",
";",
"And",
"kindly",
"quote",
"upto",
"HKG",
"under",
"CPT",
"terms",
";",
"Grazie",
";",
"alessio",
";",
"Alessio",
"Rovetta",
";",
"Italy",
"Seafreight",
"Product",
"Manager",
";",
"[New",
"Logo",
"Mail]",
";",
"S.P.",
"14",
"Rivoltana",
"Km",
"9,500",
";",
"20060",
"-",
"Vignate",
"(MI)",
";",
"*si",
"accede",
"al",
"sito",
"da",
"via",
"Bruno",
"Buozzi",
"snc,",
"Liscate",
"(MI)",
";",
"Telefono:",
"+39",
"236766530",
";",
"Cellulare:",
"+39",
"3427670429",
";",
"E-mail:",
"a.rovetta@erixmar.com<mailto:a.rovetta@erixmar.com>",
";",
"In",
"relazione",
"all'entrata",
"in",
"vigore",
"del",
"cos\u00ec",
"detto",
"GDPR,",
"General",
"Data",
"Protection",
"Regulation,",
"anche",
"noi",
"in",
"ERIXMAR",
"SRL"
],
"tags": [
0,
0,
0,
12,
12,
12,
0,
5,
0,
0,
15,
10,
21,
21,
21,
20,
20,
0,
0,
0,
0,
0,
0,
0,
0,
8,
8,
19,
19,
0,
0,
0,
0,
0,
0,
12,
12,
12,
0,
11,
11,
12,
12,
12,
0,
0,
0,
0,
0,
5,
0,
7,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
},
{
"feat_Unnamed: 0.1": 412,
"feat_Unnamed: 0": 417,
"tokens": [
"Buongiorno",
";",
"Prego",
"quotare",
";",
"n.",
"1",
"CASSA",
"160",
"X",
"210",
"X",
"150",
"KG",
"1.50",
";",
";",
";",
"da",
"10127",
"Torino",
";",
"CIF",
"DAMMAM",
"PORT",
"-",
"SAUDI",
"ARABIA"
],
"tags": [
0,
0,
0,
0,
0,
0,
15,
10,
8,
8,
8,
8,
8,
21,
21,
0,
0,
0,
0,
11,
12,
0,
7,
5,
5,
5,
6,
6
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0.1": "Value(dtype='int64', id=None)",
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['O', 'commodity', 'company', 'delivery_cap', 'delivery_location', 'delivery_port', 'delivery_state', 'incoterms', 'measures', 'nan', 'package_type', 'pickup_cap', 'pickup_location', 'pickup_port', 'pickup_state', 'quantity', 'stackable', 'total_quantity', 'total_volume', 'total_weight', 'volume', 'weight'], id=None), length=-1, id=None)"
}
```
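The `tags` field stores integer indices into the `ClassLabel` names listed above. As a minimal sketch (not part of the AutoTrain output itself), the indices can be decoded back to label names like this; the token/tag values are taken from the sample instances above:

```python
# Label names copied from the ClassLabel definition in "Dataset Fields".
TAG_NAMES = [
    "O", "commodity", "company", "delivery_cap", "delivery_location",
    "delivery_port", "delivery_state", "incoterms", "measures", "nan",
    "package_type", "pickup_cap", "pickup_location", "pickup_port",
    "pickup_state", "quantity", "stackable", "total_quantity",
    "total_volume", "total_weight", "volume", "weight",
]

def decode_tags(tokens, tags):
    """Pair each token with its human-readable tag name."""
    return [(token, TAG_NAMES[tag]) for token, tag in zip(tokens, tags)]

pairs = decode_tags(["Quinto", "HKG", "348"], [12, 5, 21])
# pairs == [("Quinto", "pickup_location"), ("HKG", "delivery_port"), ("348", "weight")]
```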
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 613 |
| valid | 269 |
|
chunpingvi/dataset_format4 | 2023-10-01T08:50:56.000Z | [
"region:us"
] | chunpingvi | null | null | null | 0 | 0 | Entry not found |
vllg/lichess_classic_2000 | 2023-09-27T05:44:38.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:cc",
"chess",
"region:us"
] | vllg | null | null | null | 0 | 0 | ---
license: cc
task_categories:
- text-generation
language:
- en
tags:
- chess
size_categories:
- 1M<n<10M
---
6,643,902 chess games from the Lichess Open Database (https://database.lichess.org/#standard_games) that meet the following criteria:
1. At least one player with ELO>=2,000
2. Rated Classical game mode
3. Normal termination
4. Result of 0-1 or 1-0 (no ties) |
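As an illustrative sketch (not the author's actual filtering script), the four criteria above can be checked against the headers of a PGN game as exported by Lichess. The header names (`WhiteElo`, `BlackElo`, `Event`, `Termination`, `Result`) follow standard Lichess PGN exports, but the exact matching logic here is an assumption:

```python
def keep_game(headers):
    """Return True if a game's PGN headers satisfy the four criteria above."""
    try:
        white = int(headers.get("WhiteElo", 0))
        black = int(headers.get("BlackElo", 0))
    except ValueError:  # Lichess uses "?" for unknown ratings
        return False
    return (
        max(white, black) >= 2000                                   # 1. ELO >= 2000
        and headers.get("Event", "").startswith("Rated Classical")  # 2. rated classical mode
        and headers.get("Termination") == "Normal"                  # 3. normal termination
        and headers.get("Result") in ("1-0", "0-1")                 # 4. decisive result
    )
```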
mabryCodes/tiny-cot-alpaca | 2023-09-27T05:19:49.000Z | [
"region:us"
] | mabryCodes | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 999663185
num_examples: 599093
download_size: 609742524
dataset_size: 999663185
---
# Dataset Card for "tiny-cot-hermes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Raghavan/beit3_vqa_answer2label.txt | 2023-09-27T06:21:31.000Z | [
"region:us"
] | Raghavan | null | null | null | 0 | 0 | Entry not found |
AngoHF/ANGO-S1 | 2023-09-27T06:43:02.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:llama2",
"region:us"
] | AngoHF | null | null | null | 1 | 0 | ---
license: llama2
task_categories:
- question-answering
- text2text-generation
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
pretty_name: ANGO
---
ANGO is A Novel Generation-Oriented Chinese LLM evaluation benchmark.
We introduce a single-question multiple-keypoints dataset format for the first time, which includes 171 keypoints accumulated across 4 hierarchical levels and 9 difficulty categories.
The data was obtained exclusively from the Administrative Proficiency Test, a significant component of the Chinese civil service examination.
We will apply a seasonal system to the leaderboard, updating it every two months. The corresponding test dataset will be announced at the beginning of each season, and some questions will be retired at the end of the season.
More details are at our [space](https://huggingface.co/spaces/AngoHF/ANGO-Leaderboard) |
UrbanJoe/Test_Dataset | 2023-09-27T06:49:50.000Z | [
"region:us"
] | UrbanJoe | null | null | null | 0 | 0 | Entry not found |
Omnibus/chat-at | 2023-10-01T00:24:46.000Z | [
"region:us"
] | Omnibus | null | null | null | 0 | 0 | Entry not found |
Yip/test | 2023-09-27T07:17:08.000Z | [
"region:us"
] | Yip | null | null | null | 0 | 0 | Entry not found |
Sabarivenkatesh3/pn_diode | 2023-09-27T07:21:03.000Z | [
"region:us"
] | Sabarivenkatesh3 | null | null | null | 0 | 0 | Entry not found |
lunarflu/HuggingCast-v1-AI-News-and-Demos | 2023-09-27T07:31:06.000Z | [
"region:us"
] | lunarflu | null | null | null | 0 | 0 | https://youtu.be/nfQ8vB3cn2Q?list=PLo2EIpI_JMQtpPdB3QSGW8bkZostiB7Y2

|
rohdimp24/localKnow07 | 2023-09-27T07:35:49.000Z | [
"region:us"
] | rohdimp24 | null | null | null | 0 | 0 | Entry not found |
cantabile-kwok/libritts-all-kaldi-data | 2023-09-27T08:52:03.000Z | [
"region:us"
] | cantabile-kwok | null | null | null | 0 | 0 | Entry not found |
HumanCompatibleAI/ppo-seals-Humanoid-v1 | 2023-09-27T07:53:48.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 447344692
num_examples: 104
download_size: 244295905
dataset_size: 447344692
---
# Dataset Card for "ppo-seals-Humanoid-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chiyuxing/cyx-dataset | 2023-09-27T07:57:19.000Z | [
"license:bsd",
"region:us"
] | chiyuxing | null | null | null | 0 | 0 | ---
license: bsd
---
|
Mohamedhussein736/Kitti-Ros2bag | 2023-09-27T08:12:09.000Z | [
"region:us"
] | Mohamedhussein736 | null | null | null | 0 | 0 | |
DSSGxMunich/nrw-bplan-pdfs | 2023-10-05T09:57:02.000Z | [
"license:mit",
"region:us"
] | DSSGxMunich | null | null | null | 1 | 0 | ---
license: mit
---
This dataset contains zips of all pdf files which were downloaded from the [NRW Geoportal](https://www.geoportal.nrw/?activetab=portal). The pdf filenames and document ids can be linked back to the [document_text](https://huggingface.co/datasets/DSSGxMunich/document_text) table. |
JeswinMS4/code_text_classifier | 2023-09-27T08:32:08.000Z | [
"region:us"
] | JeswinMS4 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': code
'1': text
splits:
- name: train
num_bytes: 58725
num_examples: 823
- name: validation
num_bytes: 3311
num_examples: 46
- name: test
num_bytes: 3320
num_examples: 46
download_size: 35195
dataset_size: 65356
---
# Dataset Card for "code_text_classifier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DSSGxMunich/nrw-bplan-images | 2023-10-05T10:29:24.000Z | [
"license:mit",
"region:us"
] | DSSGxMunich | null | null | null | 1 | 0 | ---
license: mit
---
This dataset contains the images extracted from all building plans. The images were extracted with a CNN fine-tuned for object detection. The model places bounding boxes around the image regions of the documents, each of which corresponds to an image of the land parcel. All other areas, typically corresponding to text, are considered background.
## Dataset Structure
The zip file contains the following:
- images: a folder containing all images
- img_pdf_mapping: a table containing two columns;
- pdf_filename: the filename of the pdf from which the image was extracted. Corresponds to the filename that can be found in the [document_text](https://huggingface.co/datasets/DSSGxMunich/document_text) table
- img_filename: the filename of each image as can be found in the images folder |
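A minimal sketch of joining the `img_pdf_mapping` table back to the document texts with pandas; the filenames and text values below are hypothetical placeholders, not rows from the actual dataset:

```python
import pandas as pd

# Hypothetical rows mirroring the img_pdf_mapping table described above.
mapping = pd.DataFrame({
    "pdf_filename": ["plan_001.pdf", "plan_002.pdf"],
    "img_filename": ["plan_001_img0.png", "plan_002_img0.png"],
})
# Hypothetical stand-in for the document_text table.
document_text = pd.DataFrame({
    "pdf_filename": ["plan_001.pdf", "plan_002.pdf"],
    "text": ["Bebauungsplan 001 ...", "Bebauungsplan 002 ..."],
})

# One row per extracted image, with the text of its source pdf attached.
joined = mapping.merge(document_text, on="pdf_filename", how="left")
```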
neatcreater/social_profiles | 2023-09-27T08:50:04.000Z | [
"region:us"
] | neatcreater | null | null | null | 0 | 0 | Entry not found |
Abdelwahab/SMS | 2023-09-27T08:57:18.000Z | [
"license:apache-2.0",
"region:us"
] | Abdelwahab | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Abdelwahab/gg | 2023-09-27T09:01:44.000Z | [
"license:apache-2.0",
"region:us"
] | Abdelwahab | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
QEU/databricks-dolly-16k-line_ja-1_of_4 | 2023-09-27T09:29:59.000Z | [
"license:apache-2.0",
"region:us"
] | QEU | null | null | null | 0 | 0 | ---
license: apache-2.0
---
# This dataset is a Japanese version of the databricks-dolly-15k dataset that became well known in 2023.
## Note that the data is split into four parts.
## The content has been substantially modified. (About half of it no longer resembles the original.)
- English equivalents were added in parentheses after katakana loanwords.
- Records that were anomalous as Q&A pairs were corrected.
- Entries of low informational value, such as trivia about "Game of Thrones", were removed.
- Other information was added as part of various experiments.
See [this blog post](https://jpnqeur23lmqsw.blogspot.com/2023/09/qeur23llmdss9llm.html) for details.
|
QEU/databricks-dolly-16k-line_ja-2_of_4 | 2023-09-27T09:29:24.000Z | [
"license:apache-2.0",
"region:us"
] | QEU | null | null | null | 0 | 0 | ---
license: apache-2.0
---
# This dataset is a Japanese version of the databricks-dolly-15k dataset that became well known in 2023.
## Note that the data is split into four parts.
## The content has been substantially modified. (About half of it no longer resembles the original.)
- English equivalents were added in parentheses after katakana loanwords.
- Records that were anomalous as Q&A pairs were corrected.
- Entries of low informational value, such as trivia about "Game of Thrones", were removed.
- Other information was added as part of various experiments.
See [this blog post](https://jpnqeur23lmqsw.blogspot.com/2023/09/qeur23llmdss9llm.html) for details.
|
QEU/databricks-dolly-16k-line_ja-3_of_4 | 2023-10-09T03:14:40.000Z | [
"license:apache-2.0",
"region:us"
] | QEU | null | null | null | 0 | 0 | ---
license: apache-2.0
---
# This dataset is a Japanese version of the databricks-dolly-15k dataset that became well known in 2023.
## Note that the data is split into four parts.
## The content has been substantially modified. (About half of it no longer resembles the original.)
- English equivalents were added in parentheses after katakana loanwords.
- Records that were anomalous as Q&A pairs were corrected.
- Entries of low informational value, such as trivia about "Game of Thrones", were removed.
- Other information was added as part of various experiments.
See [this blog post](https://jpnqeur23lmqsw.blogspot.com/2023/09/qeur23llmdss9llm.html) for details.
|
oserikov/arabic_billion_words_old | 2023-09-27T10:15:43.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1M<... | oserikov | null | null | null | 0 | 0 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Arabic Billion Words
dataset_info:
- config_name: Alittihad
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1601790302
num_examples: 349342
download_size: 348259999
dataset_size: 1601790302
- config_name: Almasryalyoum
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1056197870
num_examples: 291723
download_size: 242604438
dataset_size: 1056197870
- config_name: Almustaqbal
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1545659336
num_examples: 446873
download_size: 350826797
dataset_size: 1545659336
- config_name: Alqabas
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631729746
num_examples: 817274
download_size: 595274646
dataset_size: 2631729746
- config_name: Echoroukonline
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 464386206
num_examples: 139732
download_size: 108184378
dataset_size: 464386206
- config_name: Ryiadh
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3101294859
num_examples: 858188
download_size: 691264971
dataset_size: 3101294859
- config_name: Sabanews
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 198019614
num_examples: 92149
download_size: 38214558
dataset_size: 198019614
- config_name: SaudiYoum
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2723291416
num_examples: 888068
download_size: 605537923
dataset_size: 2723291416
- config_name: Techreen
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1103458209
num_examples: 314597
download_size: 252976781
dataset_size: 1103458209
- config_name: Youm7
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3004689464
num_examples: 1172136
download_size: 617708074
dataset_size: 3004689464
config_names:
- Alittihad
- Almasryalyoum
- Almustaqbal
- Alqabas
- Echoroukonline
- Ryiadh
- Sabanews
- SaudiYoum
- Techreen
- Youm7
---
# Dataset Card for Arabic Billion Words Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus
- **Repository:**
- **Paper:** https://arxiv.org/pdf/1611.04033
- **Leaderboard:**
- **Point of Contact:**[Ibrahim Abu El-Khair](iabuelkhair@gmail.com)
### Dataset Summary
Abu El-Khair Corpus is an Arabic text corpus that includes more than five million newspaper articles.
It contains over a billion and a half words in total, of which about three million are unique.
The corpus is provided in two encodings, namely UTF-8 and Windows CP-1256, and marked up in two markup languages, namely SGML and XML.
**NB:** this dataset is based on the [unofficial copy](https://drive.google.com/drive/folders/1F2wCEfFHzJqX7eTuWhh-pGtrsaHPvTT8?usp=drive_link) ([discussion](https://huggingface.co/datasets/arabic_billion_words/discussions/3)) of the data, and assumes it was downloaded properly. Put the `new_data_*` files into the `./dataset` folder like this:
```
[user@machine /path/to/dataset]$ tree
.
├── arabic_billion_words.py
├── dataset
│ ├── new_data_Alittihad_XML_utf_8.rar
│ ├── new_data_Almasryalyoum_XML_utf_8.rar
│ ├── new_data_Almustaqbal_XML_utf_8.rar
│ ├── new_data_Alqabas_XML_utf_8.rar
│ ├── new_data_Echoroukonline_XML_utf_8.rar
│ ├── new_data_Ryiadh_XML_utf_8.rar
│ ├── new_data_Sabanews_XML_utf_8.rar
│ ├── new_data_SaudiYoum_XML_utf_8.rar
│ ├── new_data_Techreen_XML_utf_8.rar
│ └── new_data_Youm7_XML_utf_8.rar
├── dataset_infos.json
├── README.md
└── usage_example.py
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
This is an example of the "Almasryalyoum" configuration subset:
```python
{
"url": "http://today.almasryalyoum.com/printerfriendly.aspx?ArticleID=61300",
"head_line": "رئيس وزراء المجر: عنصرية جماهير أوجبيست جلبت العار للبلاد",
"date": "19/5/2007",
"text": """قال متحدث باسم الحكومة المجرية: إن رئيس الوزراء فيرنك جيوركساني رحب بقرار اتحاد كرة القدم المجري بخصم ثلاث نقاط من نادي أوجبيست بسبب السلوك العنصري الذي صدر من جماهيره.
وعاقب الاتحاد المجري فريق أوجبيست بعد أن سخرت جماهيره من إبراهيم سيديبي مهاجم فريق ديبرينسين الأسود أثناء مباراة الفريقين أوائل مايو الجاري.
يذكر أن الاتحاد فرض أيضا غرامة مالية قدرها 20 ألف دولار علي أوجبيست في عام 2005 بعد أن رددت جماهيره شعارات معادية للسامية خلال مباراة بالدوري المجري.
وأوضح جيوركساني في خطاب إلي إيستفان كيستليكي رئيس الاتحاد المجري لكرة القدم، أن هذا السلوك العنصري من الجماهير «جلب العار لكرة القدم وللمجر». يذكر أن المجر بها مجموعة من مشجعي كرة القدم المشاغبين «الهوليجانز»، وشارك الكثير منهم في أعمال شغب معادية للحكومة في العام الماضي.""",
}
```
### Data Fields
The data fields are:
- "url": string, original url of the article,
- "head_line": string, headline of the article,
- "date": string, date of the article,
- "text": string, text content of the article,
### Data Splits
There is only one "training" split for all configuration subsets, containing the following number of examples:
| | Number of examples |
|:---------------|-------------------:|
| Alittihad | 349342 |
| Almasryalyoum | 291723 |
| Almustaqbal | 446873 |
| Alqabas | 817274 |
| Echoroukonline | 139732 |
| Ryiadh | 858188 |
| Sabanews | 92149 |
| SaudiYoum | 888068 |
| Techreen | 314597 |
| Youm7 | 1172136 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{el20161,
title={1.5 billion words arabic corpus},
author={El-Khair, Ibrahim Abu},
journal={arXiv preprint arXiv:1611.04033},
year={2016}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) and [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
Rutson/AOA | 2023-09-27T09:31:21.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/Astro | 2023-09-27T09:33:21.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/Blackswan | 2023-09-27T09:34:49.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/BraveGirls | 2023-09-27T09:36:56.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/Davichi | 2023-09-27T09:38:43.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/DKB | 2023-09-27T09:39:22.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/Enhypen | 2023-09-27T09:45:12.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/EXO | 2023-10-02T16:46:11.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Accede/vecDB | 2023-09-27T10:22:39.000Z | [
"license:cc-by-4.0",
"region:us"
] | Accede | null | null | null | 0 | 0 | ---
license: cc-by-4.0
---
|
Rutson/Fromis_9 | 2023-09-27T10:12:51.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Arrivedercis/QA_Comparison | 2023-09-27T10:27:13.000Z | [
"region:us"
] | Arrivedercis | null | null | null | 0 | 0 | Entry not found |
k-nick/NLVL | 2023-09-27T12:23:37.000Z | [
"region:us"
] | k-nick | null | null | null | 0 | 0 | Entry not found |
nryn21/interior | 2023-09-28T09:43:29.000Z | [
"license:apache-2.0",
"region:us"
] | nryn21 | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
bond005/sberdevices_golos_10h_crowd_noised_0db | 2023-09-27T12:40:34.000Z | [
"region:us"
] | bond005 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 2130213651.24
num_examples: 18201
- name: test
num_bytes: 1158197618.04
num_examples: 9896
- name: validation
num_bytes: 91249570.0
num_examples: 755
download_size: 3291645871
dataset_size: 3379660839.2799997
---
# Dataset Card for "sberdevices_golos_10h_crowd_noised_0db"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Rutson/GWSN | 2023-09-27T12:10:11.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
CyberHarem/ulrich_von_hutten_azurlane | 2023-09-27T12:12:54.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ulrich_von_hutten (Azur Lane)
This is the dataset of ulrich_von_hutten (Azur Lane), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 498 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 539 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 498 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 498 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 422 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 539 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 539 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
Rutson/Lovelyz | 2023-09-27T12:15:05.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/Mamamoo | 2023-09-27T12:19:15.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/NCT | 2023-09-27T12:21:34.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/NMIXX | 2023-09-27T12:26:05.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
Rutson/OhMyGirl | 2023-09-27T12:31:30.000Z | [
"region:us"
] | Rutson | null | null | null | 0 | 0 | Entry not found |
deepghs/anime_halfbody_detection | 2023-09-27T12:57:14.000Z | [
"license:openrail",
"region:us"
] | deepghs | null | null | null | 0 | 0 | ---
license: openrail
---
|
Hana01/Lyney-jp | 2023-09-27T13:00:31.000Z | [
"region:us"
] | Hana01 | null | null | null | 0 | 0 | Entry not found |
1232eee/butters | 2023-09-27T12:44:08.000Z | [
"license:unknown",
"region:us"
] | 1232eee | null | null | null | 0 | 0 | ---
license: unknown
---
|
coelhi/cool | 2023-09-27T18:14:16.000Z | [
"region:us"
] | coelhi | null | null | null | 0 | 0 | Entry not found |
mehta77/guanaco-llama2-200 | 2023-09-27T13:50:02.000Z | [
"region:us"
] | mehta77 | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 0
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maastrichtlawtech/lleqa | 2023-10-03T09:16:13.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:closed-domain-qa",
"task_ids:document-question-answering",
"task_ids:document-retrieval",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creat... | maastrichtlawtech | The Long-form Legal Question Answering (LLeQA) dataset is a French-native expert-annotated dataset for studying legal question answering.
LLeQA builds upon BSARD (Louis and Spanakis, 2022), an information retrieval dataset comprising 1,108 legal questions labeled with relevant
provisions from a corpus of 22,633 Belgian law articles, and enhances it in two ways. First, we introduce 760 new legal questions (+69%) and
5,308 additional statutory articles (+23%). Second, we supplement the data with new types of annotations, including an exhaustive taxonomy
for the question, the jurisdictions concerned, the exact paragraph-level references within the relevant articles, and a comprehensive answer
written by seasoned legal professionals. Owing to the rich variety of its annotations, LLeQA serves as a multifaceted resource that extends
its utility beyond legal question answering and has the potential to catalyze significant progress in various legal tasks, such as legal
inquiry classification, legal topic modeling, and legal information retrieval. | @article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.xxxxx},
year = {2023},
url = {https://doi.org/},
doi = {},
eprinttype = {arXiv},
eprint = {2309.xxxxx},
} | null | 1 | 0 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- fr
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: LLeQA
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- text-classification
task_ids:
- closed-domain-qa
- document-question-answering
- document-retrieval
- topic-classification
paperswithcode_id: lleqa
tags:
- legal
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Job Title: text
Country: text
I agree to use this dataset for non-commerical use ONLY: checkbox
---
# Dataset Card for LLeQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [maastrichtlawtech/lleqa](https://github.com/maastrichtlawtech/lleqa)
- **Paper:** [Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models](https://arxiv.org/abs/2309.17050)
- **Point of Contact:** [Maastricht Law & Tech Lab](law-techlab@maastrichtuniversity.nl)
### Dataset Summary
The Long-form Legal Question Answering (LLeQA) dataset is a French-native expert-annotated dataset for studying legal question answering. LLeQA builds upon [BSARD](https://huggingface.co/datasets/maastrichtlawtech/bsard), an information retrieval dataset comprising 1,108 legal questions labeled with relevant provisions from a corpus of 22,633 Belgian law articles, and enhances it in two ways:
1. We introduce 760 new legal questions (+69\%) and 5,308 additional statutory articles (+23\%).
2. We supplement the data with new types of annotations, including an exhaustive taxonomy for the question, the jurisdictions concerned, the exact paragraph-level references within the relevant articles, and a comprehensive answer written by seasoned legal professionals.
Owing to the rich variety of its annotations, LLeQA serves as a multifaceted resource that extends its utility beyond legal question answering and has the potential to catalyze significant progress in various legal tasks, such as legal inquiry classification, legal topic modeling, and legal information retrieval.
### Supported Tasks and Leaderboards
- `question-answering`: The dataset can be used to train a model for long-form question-answering (LFQA) in the legal domain, which consists in comprehensively answering a short legal question in free form, based on a given context of one or several statutory articles. Success on this task is typically measured by achieving high [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) or [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor) scores, even though these metrics are not always correlated with human judgment.
- `text-retrieval`: The dataset can be used to train a model for information retrieval (IR) in the legal domain, which consists in retrieving relevant statutory articles based on a given legal question. Success on this task is typically measured by achieving high [recall](https://huggingface.co/spaces/evaluate-metric/recall) and [precision](https://huggingface.co/spaces/evaluate-metric/precision) scores at various cut-offs.
- `text-classification`: The dataset can be used to train a model for text classification in the legal domain, which consists in classifying a legal question into a predefined set of topics. Success on this task is typically measured by achieving high [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) scores.
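For the retrieval task, recall at a cut-off *k* can be computed directly from the ranked article IDs. Below is a minimal, self-contained sketch of this metric (not tied to any particular LLeQA evaluation script; the IDs are illustrative):

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the relevant articles found among the top-k ranked results."""
    if not relevant_ids:
        return 0.0
    top_k = set(ranked_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

# A question with two relevant articles; the retriever ranked one of them in the top 5.
ranked = [3604, 120, 87, 9911, 45, 2210]
relevant = [3604, 2210]
print(recall_at_k(ranked, relevant, 5))  # 0.5
```

Averaging this value over all test questions gives the recall@k figures reported for retrieval systems on this kind of benchmark.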
### Languages
The text in the dataset is in French, as spoken in Wallonia and Brussels-Capital region. The associated BCP-47 code is `fr-BE`.
## Dataset Structure
### Data Instances
A `question` sample typically comprises a unique identifier (*int*), the question itself (*str*), the regions concerned (*List[str]*), related topics (*List[str]*), the IDs of the relevant articles from the knowledge corpus (*List[int]*), the exact paragraphs within those articles that are relevant to the question (*List[str]*), and a comprehensive expert-written answer (*str*). Below is an example of such a sample from the LLeQA test set:
```json
{
"id":696,
"question":"Je souhaite divorcer pour cause de désunion irrémédiable. Puis-je fixer une limite dans le temps pour la pension alimentaire ?",
"regions":["Région wallonne", "Région de Bruxelles-Capitale", "Région flamande"],
"topics":["Famille, Obligations alimentaires, Les pensions alimentaires (entre époux/ex-époux), Pensions alimentaires dans le cadre d'une procédure de divorce, Procédure de divorce pour cause de désunion irrémédiable"],
"article_ids":[3604],
"paragraph_ids":["3604§4", "3604§10"],
"answer":"Oui, c'est le juge qui fixe cette limite dans le jugement de divorce. En principe, la durée de la pension alimentaire après divorce est limitée au maximum à la durée du mariage. Mais le juge peut la fixer pour une durée plus courte. Il décide toujours en fonction de la situation concrète des ex-conjoints. A l’expiration de ce délai, le juge peut prolonger le paiement de la pension alimentaire. Celui qui reçoit la pension alimentaire doit prouver qu'à cause de circonstances exceptionnelles et pour des raisons indépendantes de sa volonté, il est toujours dans un état de besoin. L'obligation de payer la pension alimentaire prend également fin si : celui qui reçoit la pension alimentaire se remarie ou fait une déclaration de cohabitation légale. Dans ce cas, il perd automatiquement son droit à la pension alimentaire après divorce, sauf si le jugement de divorce prévoit autre chose ; celui qui reçoit la pension alimentaire vit maritalement avec une autre personne. Dans ce cas, le juge peut décider de mettre fin à la pension alimentaire ; celui qui reçoit la pension alimentaire décède. Dans ce cas, le paiement de la pension alimentaire prend automatiquement fin.",
}
```
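The `paragraph_ids` annotations encode both the article ID and the paragraph number: `"3604§4"` refers to paragraph 4 of article 3604. A small sketch of how such references can be split apart (the helper name is ours, not part of the dataset):

```python
def parse_paragraph_id(pid):
    """Split a paragraph reference like '3604§4' into (article_id, paragraph_no)."""
    article_id, paragraph_no = pid.split("§")
    return int(article_id), paragraph_no

sample_paragraph_ids = ["3604§4", "3604§10"]
print([parse_paragraph_id(p) for p in sample_paragraph_ids])
# [(3604, '4'), (3604, '10')]
```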
An `article` sample typically contains a unique identifier (*int*), a legislative reference (*str*), the authority that issued the article (*str*), a description resulting from the concatenated headings of the sections the article belongs to (*str*), the individual headings of these sections (*str*), the article number in the statute (*str*), the full content of the article (*str*), and the content of its individual paragraphs (*Dict[str]*). Below is an example of such a sample from the knowledge corpus:
```json
{
"id":3604,
"reference":"Art. 301, Code civil (Livre I, Titre VI, Chapitre IV)",
"authority":"federale",
"description":"Des personnes, Du divorce, Des effets du divorce",
"article_no":"301",
"code":"Code civil",
"book":"Des personnes",
"part":null,
"act":"Du divorce",
"chapter":"Des effets du divorce",
"section":null,
"subsection":null,
"article":"§ 1er. Les époux peuvent convenir à tout moment de la pension alimentaire éventuelle, du montant de celle-ci et des modalités selon lesquelles le montant convenu pourrait être revu.§ 2. A défaut de la convention visée au § 1er, le tribunal de la famillepeut, dans le jugement prononçant le divorce ou lors d'une décision ultérieure, accorder, à la demande de l'époux dans le besoin, une pension alimentaire à charge de l'autre époux.Le tribunal peut refuser de faire droit à la demande de pension si le défendeur prouve que le demandeur a commis une faute grave ayant rendu impossible la poursuite de la vie commune.En aucun cas, la pension alimentaire n'est accordée au conjoint reconnu coupable d'un fait visé aux articles 375, 398 à 400, 402, 403 ou 405 du Code pénal, commis contre la personne du défendeur, ou d'une tentative de commettre un fait visé aux articles 375, 393, 394 ou 397 du même Code contre cette même personne.Par dérogation à l'article 4 du titre préliminaire du Code de procédure pénale, le juge peut, en attendant que la décision sur l'action publique soit coulée en force de chose jugée, allouer au demandeur une pension provisionnelle, en tenant compte de toutes les circonstances de la cause. Il peut subordonner l'octroi de cette pension provisionnelle à la constitution d'une garantie qu'il détermine et dont il fixe les modalités.§ 3. Le tribunal fixe le montant de la pension alimentaire qui doit couvrir au moins l'état de besoin du bénéficiaire.Il tient compte des revenus et possibilités des conjoints et de la dégradation significative de la situation économique du bénéficiaire. Pour apprécier cette dégradation, le juge se fonde notamment sur la durée du mariage, l'âge des parties, leur comportement durant le mariage quant à l'organisation de leurs besoins, la charge des enfants pendant la vie commune ou après celle-ci. 
Le juge peut décider le cas échéant que la pension sera dégressive et déterminer dans quelle mesure elle le sera.La pension alimentaire ne peut excéder le tiers des revenus du conjoint débiteur.§ 4. La durée de la pension ne peut être supérieure à celle du mariage.En cas de circonstances exceptionnelles, si le bénéficiaire démontre qu'à l'expiration du délai visé à l'alinéa 1er, il reste, pour des raisons indépendantes de sa volonté, dans un état de besoin, le tribunal peut prolonger le délai. Dans ce cas, le montant de la pension correspond au montant nécessaire pour couvrir l'état de besoin du bénéficiaire.§ 5. Si le défendeur prouve que l'état de besoin du demandeur résulte d'une décision prise unilatéralement par celui-ci, et sans que les besoins de la famille aient justifié ce choix, il peut être dispensé de payer la pension ou n'être tenu que de payer une pension réduite.§ 6. Le tribunal qui accorde la pension constate que celle-ci est adaptée de plein droit aux fluctuations de l'indice des prix à la consommation.Le montant de base de la pension correspond à l'indice des prix à la consommation du mois au cours duquel le jugement ou l'arrêt prononçant le divorce est coulé en force de chose jugée, à moins que le tribunal n'en décide autrement. Tous les douze mois, le montant de la pension est adapté en fonction de la hausse ou de la baisse de l'indice des prix à la consommation du mois correspondant.Ces modifications sont appliquées à la pension dès l'échéance qui suit la publication au Moniteur belge de l'indice nouveau à prendre en considération.Le tribunal peut, dans certains cas, appliquer un autre système d'adaptation de la pension au coût de la vie.§ 7. 
Sauf si les parties ont convenu expressément le contraire, le tribunal peut, ultérieurement, à la demande d'une des parties, augmenter, réduire ou supprimer la pension, si, à la suite de circonstances nouvelles et indépendantes de la volonté des parties, son montant n'est plus adapté.De même, si à la suite de la dissolution du mariage, la liquidation-partage du patrimoine commun ou de l'indivision ayant existé entre les époux entraîne une modification de leur situation financière qui justifie une adaptation de la pension alimentaire ayant fait l'objet d'un jugement ou d'une convention intervenus avant l'établissement de comptes de la liquidation, le tribunal peut adapter la pension, 2.§ 8. La pension peut à tout moment être remplacée, de l'accord des parties, par un capital homologué par le tribunal. A la demande du débiteur de la pension, le tribunal peut également accorder à tout moment la capitalisation.§ 9. Les époux ne peuvent pas renoncer aux droits à la pension alimentaire avant la dissolution du mariage.Ils peuvent néanmoins transiger, en cours de procédure, sur le montant de cette pension 5.§ 10. La pension n'est plus due au décès du débiteur, mais le bénéficiaire peut demander des aliments à charge de la succession aux conditions prévues à l'article 205bis, § 1er et §§ 3 à 6 .La pension prend, en toute hypothèse, définitivement fin en cas de remariage du bénéficiaire de la pension ou au moment où ce dernier fait une déclaration de cohabitation légale, sauf convention contraire des parties.Le juge peut mettre fin à la pension lorsque le bénéficiaire vit maritalement avec une autre personne.§ 11. 
Le tribunal peut décider qu'en cas de défaut d'exécution par le débiteur de son obligation de paiement, le bénéficiaire de la pension sera autorisé à percevoir les revenus de celui-ci ou ceux des biens qu'il administre en vertu de leur régime matrimonial, ainsi que toutes autres sommes qui lui sont dues par des tiers.Cette décision est opposable à tout tiers débiteur, actuel ou futur, sur la notification qui leur en est faite par le greffier à la requête du demandeur.§ 12. 1.",
"paragraphs":{
"1":"§ 1er. Les époux peuvent convenir à tout moment de la pension alimentaire éventuelle, du montant de celle-ci et des modalités selon lesquelles le montant convenu pourrait être revu",
"2":"§ 2. A défaut de la convention visée au § 1er, le tribunal de la famillepeut, dans le jugement prononçant le divorce ou lors d'une décision ultérieure, accorder, à la demande de l'époux dans le besoin, une pension alimentaire à charge de l'autre époux.Le tribunal peut refuser de faire droit à la demande de pension si le défendeur prouve que le demandeur a commis une faute grave ayant rendu impossible la poursuite de la vie commune.En aucun cas, la pension alimentaire n'est accordée au conjoint reconnu coupable d'un fait visé aux articles 375, 398 à 400, 402, 403 ou 405 du Code pénal, commis contre la personne du défendeur, ou d'une tentative de commettre un fait visé aux articles 375, 393, 394 ou 397 du même Code contre cette même personne.Par dérogation à l'article 4 du titre préliminaire du Code de procédure pénale, le juge peut, en attendant que la décision sur l'action publique soit coulée en force de chose jugée, allouer au demandeur une pension provisionnelle, en tenant compte de toutes les circonstances de la cause. Il peut subordonner l'octroi de cette pension provisionnelle à la constitution d'une garantie qu'il détermine et dont il fixe les modalités",
"3":"§ 3. Le tribunal fixe le montant de la pension alimentaire qui doit couvrir au moins l'état de besoin du bénéficiaire.Il tient compte des revenus et possibilités des conjoints et de la dégradation significative de la situation économique du bénéficiaire. Pour apprécier cette dégradation, le juge se fonde notamment sur la durée du mariage, l'âge des parties, leur comportement durant le mariage quant à l'organisation de leurs besoins, la charge des enfants pendant la vie commune ou après celle-ci. Le juge peut décider le cas échéant que la pension sera dégressive et déterminer dans quelle mesure elle le sera.La pension alimentaire ne peut excéder le tiers des revenus du conjoint débiteur",
"4":"§ 4. La durée de la pension ne peut être supérieure à celle du mariage.En cas de circonstances exceptionnelles, si le bénéficiaire démontre qu'à l'expiration du délai visé à l'alinéa 1er, il reste, pour des raisons indépendantes de sa volonté, dans un état de besoin, le tribunal peut prolonger le délai. Dans ce cas, le montant de la pension correspond au montant nécessaire pour couvrir l'état de besoin du bénéficiaire",
"5":"§ 5. Si le défendeur prouve que l'état de besoin du demandeur résulte d'une décision prise unilatéralement par celui-ci, et sans que les besoins de la famille aient justifié ce choix, il peut être dispensé de payer la pension ou n'être tenu que de payer une pension réduite",
"6":"§ 6. Le tribunal qui accorde la pension constate que celle-ci est adaptée de plein droit aux fluctuations de l'indice des prix à la consommation.Le montant de base de la pension correspond à l'indice des prix à la consommation du mois au cours duquel le jugement ou l'arrêt prononçant le divorce est coulé en force de chose jugée, à moins que le tribunal n'en décide autrement. Tous les douze mois, le montant de la pension est adapté en fonction de la hausse ou de la baisse de l'indice des prix à la consommation du mois correspondant.Ces modifications sont appliquées à la pension dès l'échéance qui suit la publication au Moniteur belge de l'indice nouveau à prendre en considération.Le tribunal peut, dans certains cas, appliquer un autre système d'adaptation de la pension au coût de la vie",
"7":"§ 7. Sauf si les parties ont convenu expressément le contraire, le tribunal peut, ultérieurement, à la demande d'une des parties, augmenter, réduire ou supprimer la pension, si, à la suite de circonstances nouvelles et indépendantes de la volonté des parties, son montant n'est plus adapté.De même, si à la suite de la dissolution du mariage, la liquidation-partage du patrimoine commun ou de l'indivision ayant existé entre les époux entraîne une modification de leur situation financière qui justifie une adaptation de la pension alimentaire ayant fait l'objet d'un jugement ou d'une convention intervenus avant l'établissement de comptes de la liquidation, le tribunal peut adapter la pension, 2",
"8":"§ 8. La pension peut à tout moment être remplacée, de l'accord des parties, par un capital homologué par le tribunal. A la demande du débiteur de la pension, le tribunal peut également accorder à tout moment la capitalisation",
"9":"§ 9. Les époux ne peuvent pas renoncer aux droits à la pension alimentaire avant la dissolution du mariage.Ils peuvent néanmoins transiger, en cours de procédure, sur le montant de cette pension 5",
"10":"§ 10. La pension n'est plus due au décès du débiteur, mais le bénéficiaire peut demander des aliments à charge de la succession aux conditions prévues à l'article 205bis, § 1er et §§ 3 à 6 .La pension prend, en toute hypothèse, définitivement fin en cas de remariage du bénéficiaire de la pension ou au moment où ce dernier fait une déclaration de cohabitation légale, sauf convention contraire des parties.Le juge peut mettre fin à la pension lorsque le bénéficiaire vit maritalement avec une autre personne",
"11":"§ 11. Le tribunal peut décider qu'en cas de défaut d'exécution par le débiteur de son obligation de paiement, le bénéficiaire de la pension sera autorisé à percevoir les revenus de celui-ci ou ceux des biens qu'il administre en vertu de leur régime matrimonial, ainsi que toutes autres sommes qui lui sont dues par des tiers.Cette décision est opposable à tout tiers débiteur, actuel ou futur, sur la notification qui leur en est faite par le greffier à la requête du demandeur"
}
}
```
### Data Fields
- The `question` samples have the following fields:
- `id`: an *int32* feature corresponding to a unique ID number for the question.
- `question`: a *string* feature corresponding to the question.
- `regions`: a *list of strings* feature of regions concerned by the question.
- `topics`: a *list of strings* feature of topics related to the question.
- `article_ids`: a *list of ints* feature of article IDs from the knowledge corpus relevant to the question.
- `paragraph_ids`: a *list of strings* feature of the exact paragraph IDs within the articles that are relevant to the question.
- `answer`: a *string* feature corresponding to the comprehensive answer to the question.
- The `article` samples have the following fields:
- `id`: an *int32* feature corresponding to a unique ID number for the article.
- `reference`: a *string* feature corresponding to the legislative reference of the article.
- `authority`: a *string* feature corresponding to the authority that issued the article (either *"regional"* or *"federal"*).
- `description`: a *string* feature corresponding to the concatenated headings of the article.
- `article_no`: a *string* feature corresponding to the article number in the statute.
- `code`: a *string* feature corresponding to the law code to which the article belongs.
- `book`: a *string* feature corresponding to the book to which the article belongs.
- `part`: a *string* feature corresponding to the part to which the article belongs.
- `act`: a *string* feature corresponding to the act to which the article belongs.
- `chapter`: a *string* feature corresponding to the chapter to which the article belongs.
- `section`: a *string* feature corresponding to the section to which the article belongs.
- `subsection`: a *string* feature corresponding to the subsection to which the article belongs.
- `article`: a *string* feature corresponding to the full content of the article.
- `paragraphs`: a *dict of strings* feature corresponding to the content of the individual paragraphs of the article.
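Given a question record and the article records it references, the exact supporting paragraphs can be looked up by joining the question's `paragraph_ids` against each article's `paragraphs` dict. A minimal sketch, using abbreviated versions of the samples shown above (the function name and truncated texts are illustrative):

```python
def gather_relevant_paragraphs(question, articles_by_id):
    """Return the paragraph texts that a question's paragraph_ids point to."""
    texts = []
    for pid in question["paragraph_ids"]:
        article_id, para_no = pid.split("§")
        article = articles_by_id[int(article_id)]
        texts.append(article["paragraphs"][para_no])
    return texts

# Abbreviated versions of the question/article samples above.
articles_by_id = {
    3604: {"paragraphs": {"4": "§ 4. La durée de la pension ...",
                          "10": "§ 10. La pension n'est plus due ..."}}
}
question = {"id": 696, "paragraph_ids": ["3604§4", "3604§10"]}
print(gather_relevant_paragraphs(question, articles_by_id))
```

This kind of join is what makes the paragraph-level annotations usable as fine-grained retrieval targets or as grounding context for answer generation.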
### Data Splits
The LLeQA dataset is split into train, dev, and test sets with an approximate 80/10/10 ratio. The number of `question` samples in each set is given below:
| | Train | Dev | Test |
| ----- | ------ | ---- | ----- |
| LLeQA | 1472 | 201 | 195 |
## Dataset Creation
### Curation Rationale
The dataset is intended to be used by researchers to build and evaluate IR and QA models in the legal domain. It should not be regarded as a reliable source of legal information at this point in time, as both the questions and articles correspond to an outdated version of the Belgian law from May 2023 (time of dataset collection). Instead, the user is advised to consult daily updated official legal resources (e.g., the Belgian Official Gazette).
### Source Data
#### Initial Data Collection and Normalization
The collection process of LLeQA involves three main stages. First, we gather and refine annotated legal questions. Then, we build an expansive corpus of supportive statutory articles drawn from Belgian legislation. Finally, we enrich the question annotations by generating paragraph-level references within relevant articles. We elaborate upon each of these steps below. Please refer to the paper for more details.
#### Who are the source language producers?
Speakers were not directly approached for inclusion in this dataset and thus could not be asked for demographic information. Questions were collected, anonymized, and reformulated by Belgian jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe). Therefore, no direct information about the speakers' age and gender distribution, or socioeconomic status is available. However, it is expected that most, but not all, of the speakers are adults (18+ years), speak French as a native language, and live in Wallonia or the Brussels-Capital region.
### Annotations
#### Annotation process
We partner with [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe), a Belgian non-profit organization that endeavors to make the law comprehensible and accessible to the most vulnerable. To this end, the organization maintains a rich website featuring thousands of legal questions commonly posed by Belgian citizens. Each question comes with its own individual page, encompassing one or more categorizations, references to relevant legislative statutes, and a detailed answer written in layman's terms by experienced jurists. Practically, their legal clarification process consists of four steps. First, they select a common legal issue based on the numerous support requests they receive every day. Then, they define a new anonymized "model" question on that issue, expressed in simple terms, as close as possible to how a layperson would ask it. Finally, the jurists search the Belgian law for articles that help answer the model question, reference them, and write a comprehensive answer in a language that is understandable by the general public.
#### Who are the annotators?
A total of six Belgian jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe) contributed to annotating the questions. All have a law degree from a Belgian university and years of experience in providing legal advice and clarifications of the law. They range in age from 30-60 years, including one man and five women, gave their ethnicity as white European, speak French as a native language, and represent upper middle class based on income levels.
### Personal and Sensitive Information
The questions represent informal, asynchronous, edited, written language that has an average length of 15 words. None of them contained hateful, aggressive, or inappropriate language, as they were all reviewed and reworded by Droits Quotidiens to be neutral, anonymous, and comprehensive. The legal articles represent strong, formal, written language that has a median length of 84 words (yet 1,500+ articles exceed 500 words).
## Considerations for Using the Data
### Social Impact of Dataset
We believe LLeQA can serve as a robust foundation for advancements in interpretable, long-form legal question answering, thereby contributing to the democratization of legal access.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
- It is essential to note that not all legal questions can be answered with statutes alone. For instance, the question “Can I evict my tenants if they make too much noise?” might not have a detailed answer within the statutory law that quantifies a specific noise threshold at which eviction is allowed. Instead, the landlord should probably rely more on case law and find precedents similar to their current situation (e.g., the tenant throws two parties a week until 2 am). Hence, some questions are better suited than others to the statutory article retrieval task, and the domain of the less suitable ones remains to be determined.
## Additional Information
### Dataset Curators
The dataset was created by Antoine Louis during work done at the Law & Tech lab of Maastricht University, with the help of jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe).
### Licensing Information
LLeQA is distributed under a gated access for research purposes only and is licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```latex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
### Contributions
Thanks to [@antoiloui](https://github.com/antoiloui) for adding this dataset.
|
fredfang/RH20T | 2023-09-27T13:55:26.000Z | [
"license:other",
"region:us"
] | fredfang | null | null | null | 0 | 0 | ---
license: other
---
|
elizathornton/gaskell-bp | 2023-09-27T14:15:17.000Z | [
"region:us"
] | elizathornton | null | null | null | 0 | 0 | Entry not found |
Maxstan/trager_coef_by_date_sarov | 2023-09-27T14:23:07.000Z | [
"license:cc-by-4.0",
"doi:10.57967/hf/1156",
"region:us"
] | Maxstan | null | null | null | 0 | 0 | ---
license: cc-by-4.0
---
|
usholanb/relevancy-dataset | 2023-09-27T14:25:10.000Z | [
"region:us"
] | usholanb | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: relevancy_classes
sequence: string
splits:
- name: train
num_bytes: 101
num_examples: 1
download_size: 1959
dataset_size: 101
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "relevancy-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Napitz/Anime_Regularization | 2023-09-27T15:39:57.000Z | [
"region:us"
] | Napitz | null | null | null | 0 | 0 | Entry not found |
Arsik/bodakosotshavel | 2023-09-27T15:38:35.000Z | [
"license:apache-2.0",
"region:us"
] | Arsik | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
marasama/nva-odawara | 2023-09-27T16:37:47.000Z | [
"region:us"
] | marasama | null | null | null | 0 | 0 | Entry not found |
AlexWortega/video_data | 2023-09-27T16:39:12.000Z | [
"region:us"
] | AlexWortega | null | null | null | 0 | 0 | Entry not found |
Comparons/DuaLipa | 2023-09-27T16:45:18.000Z | [
"region:us"
] | Comparons | null | null | null | 0 | 0 | Entry not found |
fjsaojago/l0al-4qj7-caob | 2023-09-27T16:52:20.000Z | [
"region:us"
] | fjsaojago | null | null | null | 0 | 0 | Entry not found |
fjsaojago/autotrain-data-l0al-4qj7-caob | 2023-09-27T16:54:27.000Z | [
"region:us"
] | fjsaojago | null | null | null | 0 | 0 | Entry not found |
martinakaduc/hh-rlhf-llama2-7b-embedding | 2023-09-28T01:04:40.000Z | [
"language:en",
"license:mit",
"region:us"
] | martinakaduc | null | null | null | 0 | 0 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: chosen
sequence: float64
- name: rejected
sequence: float64
splits:
- name: train
num_bytes: 10539475200
num_examples: 160800
- name: test
num_bytes: 560532288
num_examples: 8552
download_size: 6413844185
dataset_size: 11100007488
language:
- en
--- |
k-nick/VidSTG | 2023-09-27T17:11:07.000Z | [
"region:us"
] | k-nick | null | null | null | 0 | 0 | Entry not found |
hamidpiya/hamid | 2023-09-27T17:15:33.000Z | [
"region:us"
] | hamidpiya | null | null | null | 0 | 0 | |
DoctorSlimm/mozart-api-demo-pages | 2023-10-04T19:54:23.000Z | [
"region:us"
] | DoctorSlimm | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
DoctorSlimm/mozart-api-demo-documents | 2023-10-05T12:26:14.000Z | [
"region:us"
] | DoctorSlimm | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
BangumiBase/lycorisrecoil | 2023-09-29T12:40:36.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Lycoris Recoil
This is the image base of bangumi Lycoris Recoil, we detected 31 characters, 2149 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 22 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 67 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 117 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 120 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 21 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 79 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 36 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 16 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 24 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 11 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 21 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 11 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 10 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 10 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 118 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 10 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 54 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 50 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 23 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 10 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 9 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 407 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 13 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 102 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 9 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 27 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 510 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 33 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 27 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 165 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
gonzalopolavieja/guanaco-llama2-1k | 2023-09-27T17:50:32.000Z | [
"region:us"
] | gonzalopolavieja | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gachan/Lulav1 | 2023-09-27T18:49:26.000Z | [
"license:openrail",
"region:us"
] | gachan | null | null | null | 0 | 0 | ---
license: openrail
---
|
lilacai/lilac-textbook_quality_programming | 2023-10-05T14:03:01.000Z | [
"region:us"
] | lilacai | null | null | null | 0 | 0 | This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/lilac](https://huggingface.co/spaces/lilacai/lilac).
Original dataset: [https://huggingface.co/datasets/vikp/textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming)
Lilac dataset config:
```yaml
namespace: lilac
name: textbook_quality_programming
source:
dataset_name: vikp/textbook_quality_programming
source_name: huggingface
embeddings:
- path:
- outline
- '*'
embedding: gte-small
- path:
- concepts
- '*'
embedding: gte-small
- path: markdown
embedding: gte-small
signals:
- path:
- outline
- '*'
signal:
signal_name: pii
- path:
- outline
- '*'
signal:
signal_name: text_statistics
- path:
- outline
- '*'
signal:
signal_name: near_dup
- path:
- outline
- '*'
signal:
signal_name: lang_detection
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- concepts
- '*'
signal:
signal_name: pii
- path:
- concepts
- '*'
signal:
signal_name: text_statistics
- path:
- concepts
- '*'
signal:
signal_name: near_dup
- path:
- concepts
- '*'
signal:
signal_name: lang_detection
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- concepts
- '*'
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path: markdown
signal:
signal_name: pii
- path: markdown
signal:
signal_name: text_statistics
- path: markdown
signal:
signal_name: near_dup
- path: markdown
signal:
signal_name: lang_detection
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path: markdown
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- outline
- '*'
signal:
embedding: gte-small
signal_name: cluster_dbscan
- path:
- concepts
- '*'
signal:
embedding: gte-small
signal_name: cluster_dbscan
- path: markdown
signal:
embedding: gte-small
signal_name: cluster_dbscan
settings:
ui:
media_paths:
- - outline
- '*'
- - concepts
- '*'
- markdown
markdown_paths:
- markdown
preferred_embedding: gte-small
```
|
emny/heid_v1 | 2023-10-01T05:33:16.000Z | [
"license:apache-2.0",
"region:us"
] | emny | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
DmitrMakeev/prm_stl | 2023-09-27T19:30:37.000Z | [
"license:openrail",
"region:us"
] | DmitrMakeev | null | null | null | 0 | 0 | ---
license: openrail
---
|
Smooke/test | 2023-09-27T19:43:58.000Z | [
"license:apache-2.0",
"region:us"
] | Smooke | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
tanzuhuggingface/creditcardfraudtraining | 2023-09-27T20:30:31.000Z | [
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"fraud detection",
"anomaly detection",
"upsampling",
"region:us"
] | tanzuhuggingface | null | null | null | 0 | 0 | ---
task_categories:
- feature-extraction
tags:
- fraud detection
- anomaly detection
- upsampling
pretty_name: credit_card_transactions_resampled.csv
size_categories:
- 1M<n<10M
--- |
smarquie/fincrisis | 2023-09-27T20:12:13.000Z | [
"region:us"
] | smarquie | null | null | null | 0 | 0 | Entry not found |
zgcarvalho/oas-test | 2023-09-28T19:34:40.000Z | [
"size_categories:10M<n<100M",
"license:cc-by-4.0",
"biology",
"protein",
"region:us"
] | zgcarvalho | null | null | null | 0 | 0 | ---
license: cc-by-4.0
size_categories: 10M<n<100M
pretty_name: Observed Antibody Space
config_names:
- paired
- unpaired
tags:
- biology
- protein
dataset_info:
- config_name: paired
features:
- name: sequence_heavy
dtype: string
- name: sequence_light
dtype: string
- name: cdr1_heavy
dtype: string
- name: cdr2_heavy
dtype: string
- name: cdr3_heavy
dtype: string
- name: fwr1_heavy
dtype: string
- name: fwr2_heavy
dtype: string
- name: fwr3_heavy
dtype: string
- name: fwr4_heavy
dtype: string
- name: cdr1_light
dtype: string
- name: cdr2_light
dtype: string
- name: cdr3_light
dtype: string
- name: fwr1_light
dtype: string
- name: fwr2_light
dtype: string
- name: fwr3_light
dtype: string
- name: fwr4_light
dtype: string
- name: species
dtype: string
- name: vaccine
dtype: string
- name: disease
dtype: string
splits:
- name: train
num_bytes: 985822519
num_examples: 1777462
download_size: 0
dataset_size: 985822519
- config_name: unpaired
features:
- name: sequence
dtype: string
- name: chain
dtype: string
- name: cdr1
dtype: string
- name: cdr2
dtype: string
- name: cdr3
dtype: string
- name: fwr1
dtype: string
- name: fwr2
dtype: string
- name: fwr3
dtype: string
- name: fwr4
dtype: string
- name: species
dtype: string
- name: vaccine
dtype: string
- name: disease
dtype: string
splits:
- name: train
num_bytes: 4671469078
num_examples: 15925303
download_size: 0
dataset_size: 4671469078
configs:
- config_name: paired
data_files:
- split: train
path: paired/train-*
- config_name: unpaired
data_files:
- split: train
path: unpaired/train-*
---
# Dataset Card for Observed Antibody Space
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
manuel-yao/pokemon-keras-community | 2023-09-27T20:39:06.000Z | [
"region:us"
] | manuel-yao | null | null | null | 0 | 0 | Entry not found |
BangumiBase/yagatekimininaru | 2023-09-29T12:50:36.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Yagate Kimi Ni Naru
This is the image base of bangumi Yagate Kimi ni Naru, we detected 17 characters, 1763 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 597 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 9 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 49 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 46 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 451 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 18 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 76 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 52 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 17 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 16 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 20 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 44 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 82 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 39 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 129 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 7 | [Download](15/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 111 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
okaris/man-woman-reg | 2023-09-27T21:49:17.000Z | [
"region:us"
] | okaris | null | null | null | 0 | 0 | Entry not found |
CyberHarem/miyauchi_hikage_nonnonbiyori | 2023-09-27T21:37:51.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Miyauchi Hikage
This is the dataset of Miyauchi Hikage, containing 192 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 192 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 446 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 497 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 192 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 192 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 192 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 446 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 446 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 393 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 497 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 497 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/fujimiya_konomi_nonnonbiyori | 2023-09-27T21:58:56.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Fujimiya Konomi
This is the dataset of Fujimiya Konomi, containing 160 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 160 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 389 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 427 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 160 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 160 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 160 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 389 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 389 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 331 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 427 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 427 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/shinoda_akane_nonnonbiyori | 2023-09-27T22:10:06.000Z | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | CyberHarem | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Shinoda Akane
This is the dataset of Shinoda Akane, containing 82 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 82 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 205 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 233 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 82 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 82 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 82 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 205 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 205 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 185 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 233 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 233 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
Eu001/Testi | 2023-10-05T12:38:32.000Z | [
"license:openrail",
"region:us"
] | Eu001 | null | null | null | 0 | 0 | ---
license: openrail
---
|
KonstantyM/wtamu_qa | 2023-09-27T22:12:21.000Z | [
"region:us"
] | KonstantyM | null | null | null | 0 | 0 | Entry not found |
neila8/cai | 2023-10-03T15:39:42.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"finance",
"region:us"
] | neila8 | null | null | null | 0 | 0 | ---
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- finance
size_categories:
- n<1K
--- |