| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
ccore/rhetoric-saint-thomas-aquinas | 2023-10-07T19:25:07.000Z | [
"license:mit",
"region:us"
] | ccore | null | null | null | 0 | 0 | ---
license: mit
---
Whether God Is Composed of Matter and Form?
Objection 1: It seems that God is composed of matter and form. For
whatever has a soul is composed of matter and form; since the soul is
the form of the body. But Scripture attributes a soul to God; for it
is mentioned in Hebrews (Heb. 10:38), where God says: "But My just man
liveth by faith; but if he withdraw himself, he shall not please My
soul." Therefore God is composed of matter and form.
Objection 2: Further, anger, joy and the like are passions of the
composite. But these are attributed to God in Scripture: "The Lord was
exceeding angry with His people" (Ps. 105:40). Therefore God is
composed of matter and form.
Objection 3: Further, matter is the principle of individualization.
But God seems to be individual, for He cannot be predicated of many.
Therefore He is composed of matter and form.
Contrary: Whatever is composed of matter and form is a body;
for dimensive quantity is the first property of matter. But God is not
a body as proved in the preceding Article; therefore He is not
composed of matter and form.
Response: It is impossible that matter should exist in God.
First, because matter is in potentiality. But we have shown (Q. 2, A. 3)
that God is pure act, without any potentiality. Hence it is
impossible that God should be composed of matter and form. Secondly,
because everything composed of matter and form owes its perfection and
goodness to its form; therefore its goodness is participated, inasmuch
as matter participates the form. Now the first good and the
best--viz. God--is not a participated good, because the essential
good is prior to the participated good. Hence it is impossible that
God should be composed of matter and form. Thirdly, because every
agent acts by its form; hence the manner in which it has its form is
the manner in which it is an agent. Therefore whatever is primarily
and essentially an agent must be primarily and essentially form. Now
God is the first agent, since He is the first efficient cause. He is
therefore of His essence a form; and not composed of matter and form.
Reply Objection 1: A soul is attributed to God because His acts
resemble the acts of a soul; for, that we will anything, is due to our
soul. Hence what is pleasing to His will is said to be pleasing to His
soul.
Reply Objection 2: Anger and the like are attributed to God on
account of a similitude of effect. Thus, because to punish is properly
the act of an angry man, God's punishment is metaphorically spoken of
as His anger.
Reply Objection 3: Forms which can be received in matter are
individualized by matter, which cannot be in another as in a subject
since it is the first underlying subject; although form of itself,
unless something else prevents it, can be received by many. But that
form which cannot be received in matter, but is self-subsisting, is
individualized precisely because it cannot be received in a subject;
and such a form is God. Hence it does not follow that matter exists in
God.
_______________________ |
BangumiBase/sekaisaikounoansatsushaisekaikizokunitenseisuru | 2023-10-07T20:43:37.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Sekai Saikou No Ansatsusha, Isekai Kizoku Ni Tensei Suru
This is the image base of the bangumi Sekai Saikou no Ansatsusha, Isekai Kizoku ni Tensei Suru. We detected 32 characters and 1510 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be fully cleaned; they may actually contain noisy samples.** If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (the noise rate is approximately 1%).
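As a minimal sketch of the preprocessing suggested above: assuming the archive has been downloaded and extracted locally, with one sub-directory per character cluster and the noise cluster in a `-1` directory (an assumption based on the download links in this card), one way to skip the noise cluster is:

```python
from pathlib import Path

# Sketch only: collect image paths from an extracted copy of the dataset,
# skipping the "-1" noise cluster. The directory layout (one directory per
# character, PNG images inside) is an assumption, not confirmed by the card.
def collect_clean_images(root: str) -> list[Path]:
    images = []
    for char_dir in sorted(Path(root).iterdir()):
        if not char_dir.is_dir() or char_dir.name == "-1":
            continue  # skip stray files and the noise cluster
        images.extend(sorted(char_dir.glob("*.png")))
    return images
```

Remaining noisy samples inside the character directories (the ~1% mentioned above) would still need manual review.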
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 118 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 40 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 27 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 23 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 17 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 20 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 270 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 9 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 98 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 91 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 20 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 27 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 29 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 23 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 16 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 86 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 11 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 13 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 14 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 16 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 10 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 6 | [Download](22/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 23 | 39 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 150 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 38 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 70 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 15 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 10 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 11 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 9 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 169 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Vinisasasasas/gleemercedes | 2023-10-07T19:34:33.000Z | [
"region:us"
] | Vinisasasasas | null | null | null | 0 | 0 | Entry not found |
Hack90/ncbi_genbank_part_75 | 2023-10-07T19:53:04.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 35009212242
num_examples: 74649
download_size: 15493347795
dataset_size: 35009212242
---
# Dataset Card for "ncbi_genbank_part_75"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Misterjo/Jo | 2023-10-07T19:39:01.000Z | [
"region:us"
] | Misterjo | null | null | null | 0 | 0 | Entry not found |
tr416/catholic_model_v2_dataset_20231007_194934 | 2023-10-07T19:49:35.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 760128.0
num_examples: 296
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 52253
dataset_size: 767832.0
---
# Dataset Card for "catholic_model_v2_dataset_20231007_194934"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PocketDoc/Floyd-Text-Adventures | 2023-10-07T23:32:07.000Z | [
"task_categories:conversational",
"language:en",
"not-for-all-audiences",
"region:us"
] | PocketDoc | null | null | null | 0 | 0 | ---
tags:
- not-for-all-audiences
task_categories:
- conversational
language:
- en
pretty_name: Floyd Text Adventures
---
This is the 'Floyd' text adventure dataset converted to a chat format with system messages. The system messages were randomly constructed from a table of phrases and templates. The original data can be found in the .7z archive.
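The random construction described above might look something like the following sketch; the phrase table and templates below are purely illustrative, not the ones actually used for this dataset.

```python
import random

# Hypothetical sketch of assembling system messages from a table of
# phrases and templates, as the card describes. All strings here are
# made-up examples, not the dataset's real phrase table.
TEMPLATES = [
    "You are {role}. {style}",
    "{style} You are {role}.",
]
ROLES = ["the narrator of a text adventure", "an interactive fiction engine"]
STYLES = ["Describe scenes vividly.", "Respond to the player's commands."]

def make_system_message(rng: random.Random) -> str:
    template = rng.choice(TEMPLATES)
    return template.format(role=rng.choice(ROLES), style=rng.choice(STYLES))

rng = random.Random(0)  # seed for reproducibility
msg = make_system_message(rng)
```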
**Credits:**
Thank you to VE Forbryderne from KoboldAI for scraping the dataset. |
Hack90/ncbi_genbank_part_76 | 2023-10-07T20:21:23.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31427190646
num_examples: 832959
download_size: 13887863083
dataset_size: 31427190646
---
# Dataset Card for "ncbi_genbank_part_76"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MikuHH/gghh | 2023-10-07T20:10:01.000Z | [
"region:us"
] | MikuHH | null | null | null | 0 | 0 | Entry not found |
BangumiBase/ishuzokureviewers | 2023-10-07T21:23:49.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Ishuzoku Reviewers
This is the image base of the bangumi Ishuzoku Reviewers. We detected 37 characters and 1196 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be fully cleaned; they may actually contain noisy samples.** If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (the noise rate is approximately 1%).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 148 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 25 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 24 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 11 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 12 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 8 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 201 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 9 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 15 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 6 | [Download](10/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 11 | 14 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 202 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 18 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 7 | [Download](14/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 15 | 19 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 11 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 7 | [Download](17/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 18 | 59 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 11 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 9 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 7 | [Download](21/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 22 | 49 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 14 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 11 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 13 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 9 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 7 | [Download](27/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 28 | 9 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 7 | [Download](29/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 30 | 6 | [Download](30/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 31 | 6 | [Download](31/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 32 | 5 | [Download](32/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 33 | 21 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 7 | [Download](34/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 35 | 5 | [Download](35/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 195 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
rain4242/nva-emma | 2023-10-10T13:14:07.000Z | [
"region:us"
] | rain4242 | null | null | null | 0 | 0 | Entry not found |
Gabizu/toad | 2023-10-07T20:21:35.000Z | [
"license:openrail",
"region:us"
] | Gabizu | null | null | null | 0 | 0 | ---
license: openrail
---
|
nasa-cisto-data-science-group/senegal-lcluc-tutorial | 2023-10-09T11:23:15.000Z | [
"license:apache-2.0",
"region:us"
] | nasa-cisto-data-science-group | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Hack90/ncbi_genbank_part_77 | 2023-10-07T20:46:41.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 29897565069
num_examples: 1177983
download_size: 13158660518
dataset_size: 29897565069
---
# Dataset Card for "ncbi_genbank_part_77"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Drcx989/Erick | 2023-10-07T21:27:55.000Z | [
"region:us"
] | Drcx989 | null | null | null | 0 | 0 | Entry not found |
cestwc/SG-subzone-poi-sentiment_1 | 2023-10-07T21:45:35.000Z | [
"region:us"
] | cestwc | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: local_created_at
dtype: string
- name: id
dtype: int64
- name: text
dtype: string
- name: source
dtype: string
- name: truncated
dtype: bool
- name: in_reply_to_status_id
dtype: float64
- name: in_reply_to_user_id
dtype: float64
- name: user_id
dtype: int64
- name: user_name
dtype: string
- name: user_screen_name
dtype: string
- name: user_location
dtype: string
- name: user_url
dtype: string
- name: user_verified
dtype: bool
- name: user_default_profile
dtype: bool
- name: user_description
dtype: string
- name: user_followers_count
dtype: int64
- name: user_friends_count
dtype: int64
- name: user_listed_count
dtype: int64
- name: user_favourites_count
dtype: int64
- name: user_statuses_count
dtype: int64
- name: local_user_created_at
dtype: string
- name: place_id
dtype: string
- name: place_url
dtype: string
- name: place_place_type
dtype: string
- name: place_name
dtype: string
- name: place_country_code
dtype: string
- name: place_bounding_box_type
dtype: string
- name: place_bounding_box_coordinates
dtype: string
- name: is_quote_status
dtype: bool
- name: retweet_count
dtype: int64
- name: favorite_count
dtype: int64
- name: entities_hashtags
dtype: string
- name: entities_urls
dtype: string
- name: entities_symbols
dtype: string
- name: entities_user_mentions
dtype: string
- name: favorited
dtype: bool
- name: retweeted
dtype: bool
- name: possibly_sensitive
dtype: bool
- name: lang
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: year_created_at
dtype: int64
- name: month_created_at
dtype: int64
- name: day_created_at
dtype: int64
- name: weekday_created_at
dtype: int64
- name: hour_created_at
dtype: int64
- name: minute_created_at
dtype: int64
- name: year_user_created_at
dtype: int64
- name: month_user_created_at
dtype: int64
- name: day_user_created_at
dtype: int64
- name: weekday_user_created_at
dtype: int64
- name: hour_user_created_at
dtype: int64
- name: minute_user_created_at
dtype: int64
- name: subzone
dtype: string
- name: planning_area
dtype: string
- name: poi_flag
dtype: float64
- name: poi_id
dtype: string
- name: poi_dist
dtype: float64
- name: poi_latitude
dtype: float64
- name: poi_longitude
dtype: float64
- name: poi_name
dtype: string
- name: poi_type
dtype: string
- name: poi_cate2
dtype: string
- name: poi_cate3
dtype: string
- name: clean_text
dtype: string
- name: joy_score
dtype: float64
- name: trust_score
dtype: float64
- name: positive_score
dtype: float64
- name: sadness_score
dtype: float64
- name: disgust_score
dtype: float64
- name: anger_score
dtype: float64
- name: anticipation_score
dtype: float64
- name: negative_score
dtype: float64
- name: fear_score
dtype: float64
- name: surprise_score
dtype: float64
- name: words
dtype: string
- name: polarity_score
dtype: float64
- name: labels
dtype: int64
- name: related_0
dtype: string
- name: related_1
dtype: float64
splits:
- name: '0203'
num_bytes: 1540471709
num_examples: 1025135
download_size: 423764259
dataset_size: 1540471709
configs:
- config_name: default
data_files:
- split: '0203'
path: data/0203-*
---
# Dataset Card for "SG-subzone-poi-sentiment_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Safeer143/eli5_dataset_title_text | 2023-10-07T22:16:45.000Z | [
"region:us"
] | Safeer143 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1224245207
num_examples: 1442904
download_size: 717614202
dataset_size: 1224245207
---
# Dataset Card for "eli5_dataset_title_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/ziq-depression-tweet-es | 2023-10-07T22:25:29.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 51261868
num_examples: 51132
download_size: 32137564
dataset_size: 51261868
---
# Dataset Card for "ziq-depression_tweet-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Webti/Jk | 2023-10-07T23:39:58.000Z | [
"region:us"
] | Webti | null | null | null | 0 | 0 | |
lexaizero/magimagidazo | 2023-10-07T23:36:42.000Z | [
"license:mit",
"region:us"
] | lexaizero | null | null | null | 0 | 0 | ---
license: mit
---
|
hacktoberfest-corpus-es/spanish_dish_instruction | 2023-10-07T23:08:06.000Z | [
"license:mit",
"region:us"
] | hacktoberfest-corpus-es | null | null | null | 0 | 0 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 163569776.9644282
num_examples: 4416
- name: test
num_bytes: 8142090.336714364
num_examples: 221
- name: valid
num_bytes: 31971355.346857455
num_examples: 884
download_size: 206512305
dataset_size: 203683222.648
---
|
BounharAbdelaziz/English-to-Moroccan-Darija | 2023-10-07T23:51:00.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ar",
"region:us"
] | BounharAbdelaziz | null | null | null | 1 | 0 | ---
dataset_info:
features:
- name: english
dtype: string
- name: darija
dtype: string
splits:
- name: train
num_bytes: 636610
num_examples: 10062
download_size: 447249
dataset_size: 636610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- translation
language:
- ar
size_categories:
- 10K<n<100K
---
# Dataset Card for "English-to-Moroccan-Darija"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RickBigL/role_play_chat_llama2_format_v27_100k | 2023-10-08T00:05:02.000Z | [
"region:us"
] | RickBigL | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 321835997
num_examples: 74722
download_size: 41878767
dataset_size: 321835997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "role_play_chat_llama2_format_v27_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AI-Tools-Review/Automatic1111-AWS-Setup | 2023-10-08T00:12:44.000Z | [
"region:us"
] | AI-Tools-Review | null | null | null | 0 | 0 | |
Jellywibble/20231007_chai_prize_model_feedback_all | 2023-10-08T00:14:05.000Z | [
"region:us"
] | Jellywibble | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: bot_id
dtype: string
- name: user_id
dtype: string
- name: conversation
dtype: string
- name: thumbs_up
dtype: bool
- name: feedback
dtype: string
- name: model_name
dtype: string
- name: season
dtype: string
splits:
- name: train
num_bytes: 242533107
num_examples: 124233
download_size: 127593487
dataset_size: 242533107
---
# Dataset Card for "20231007_chai_prize_model_feedback_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/christianGPTv2_dataset_20231008_001740 | 2023-10-08T00:17:40.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | Entry not found |
tr416/fullv2_dataset_20231008_001946 | 2023-10-08T00:19:47.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | Entry not found |
tr416/test | 2023-10-08T00:20:28.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | Entry not found |
tr416/v2_dataset_20231008_002216 | 2023-10-08T00:22:19.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12799490
dataset_size: 75964008.0
---
# Dataset Card for "v2_dataset_20231008_002216"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/v2_dataset_20231008_002613 | 2023-10-08T00:26:15.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12818386
dataset_size: 75964008.0
---
# Dataset Card for "v2_dataset_20231008_002613"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabrielcava/GabrielC | 2023-10-09T11:06:16.000Z | [
"license:mit",
"region:us"
] | gabrielcava | null | null | null | 1 | 0 | ---
license: mit
---
|
tr416/v2_dataset_20231008_002916 | 2023-10-08T00:29:27.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12811954
dataset_size: 75964008.0
---
# Dataset Card for "v2_dataset_20231008_002916"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/v2_dataset_20231008_003113 | 2023-10-08T00:31:15.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12796324
dataset_size: 75964008.0
---
# Dataset Card for "v2_dataset_20231008_003113"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nanoshinonomecom/RVC | 2023-10-08T00:48:29.000Z | [
"region:us"
] | nanoshinonomecom | null | null | null | 0 | 0 | Entry not found |
metric-space/test-images | 2023-10-08T00:52:35.000Z | [
"region:us"
] | metric-space | null | null | null | 0 | 0 | Entry not found |
katryo/jeneral-stb | 2023-10-08T01:05:51.000Z | [
"region:us"
] | katryo | null | null | null | 0 | 0 | Entry not found |
ZelaAI/lex_encodec | 2023-10-08T02:08:11.000Z | [
"region:us"
] | ZelaAI | null | null | null | 0 | 0 | Entry not found |
Sanjay19tsh/fastFood | 2023-10-08T01:53:46.000Z | [
"region:us"
] | Sanjay19tsh | null | null | null | 0 | 0 | Entry not found |
Fraol/1ColDedupedRefDatasetWMetricFinal | 2023-10-08T03:23:35.000Z | [
"region:us"
] | Fraol | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
- name: source_after
dtype: string
- name: cbo_after
dtype: float64
- name: wmc_after
dtype: float64
- name: lcom*_after
dtype: float64
- name: loc_after
dtype: float64
- name: issue_name
dtype: string
- name: issue_localize
dtype: string
splits:
- name: train
num_bytes: 476226598
num_examples: 37325
download_size: 0
dataset_size: 476226598
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "1ColDedupedRefDatasetWMetricFinal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabrielcava/GabrielV2 | 2023-10-08T16:21:13.000Z | [
"region:us"
] | gabrielcava | null | null | null | 0 | 0 | Entry not found |
pytc/zebrafinch-j0126 | 2023-10-08T02:22:42.000Z | [
"region:us"
] | pytc | null | null | null | 0 | 0 | Entry not found |
AiForTheChurch/30000_christian_non_denominational_dataset | 2023-10-08T02:28:47.000Z | [
"region:us"
] | AiForTheChurch | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: user
dtype: string
- name: llm
dtype: string
splits:
- name: train
num_bytes: 29503223
num_examples: 29581
download_size: 15020646
dataset_size: 29503223
---
# Dataset Card for "30000_christian_non_denominational_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stevez/test_db | 2023-10-08T02:46:17.000Z | [
"license:mit",
"region:us"
] | stevez | null | null | null | 0 | 0 | ---
license: mit
---
|
AiForTheChurch/catholic_denomination_300 | 2023-10-08T02:46:20.000Z | [
"region:us"
] | AiForTheChurch | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: user
dtype: string
- name: llm
dtype: string
splits:
- name: train
num_bytes: 172156
num_examples: 300
download_size: 91806
dataset_size: 172156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "catholic_denomination_300"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sreya27/titanic | 2023-10-08T03:21:25.000Z | [
"region:us"
] | Sreya27 | null | null | null | 0 | 0 | |
asgaardlab/GamePhysicsDailyDump | 2023-10-10T23:37:59.000Z | [
"task_categories:video-classification",
"language:en",
"license:mit",
"game",
"game-physics",
"game-bug",
"video-understanding",
"region:us"
] | asgaardlab | null | null | null | 0 | 0 | ---
license: mit
task_categories:
- video-classification
language:
- en
tags:
- game
- game-physics
- game-bug
- video-understanding
pretty_name: GamePhysics
---
# GamePhysics Dataset (Daily Dump)
|
BangumiBase/narutoshippuden | 2023-10-08T15:11:06.000Z | [
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Naruto Shippuden
This is the image base of the bangumi Naruto Shippuden. We detected 196 characters and 36722 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be fully cleaned; they may actually contain noisy samples.** If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (the noise rate is approximately 1%).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 2958 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 726 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 1111 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 442 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 132 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 1913 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 80 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 719 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 7149 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 71 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 946 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 159 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 1667 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 109 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 158 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 94 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 1473 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 1392 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 88 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 70 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 333 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 178 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 628 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 139 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 418 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 1193 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 287 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 142 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 45 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 49 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 356 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 172 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 85 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 122 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 292 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 115 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 103 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 96 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 190 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 49 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 22 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 65 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 643 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 59 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 162 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 347 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 55 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 122 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 45 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 179 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 68 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 88 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 32 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 33 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 148 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 228 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 170 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 112 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 234 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 29 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 106 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 247 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 37 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 66 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 43 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 34 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 36 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 36 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 38 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 12 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 65 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 81 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 33 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 16 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 315 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 15 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 56 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 50 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 60 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 48 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 115 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 15 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 163 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 36 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 237 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 20 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 1991 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 36 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 62 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 63 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 28 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 57 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 48 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 54 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 17 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 60 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 69 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 36 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 33 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 67 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 128 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 34 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 11 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 114 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 63 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 22 | [Download](105/dataset.zip) |  |  |  |  |  |  |  |  |
| 106 | 15 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 53 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 88 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 26 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 26 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 50 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 26 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 99 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 29 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 67 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 18 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 8 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 34 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 21 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 15 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 22 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 26 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 32 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 16 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 22 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 45 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 12 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 40 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 28 | [Download](129/dataset.zip) |  |  |  |  |  |  |  |  |
| 130 | 55 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| 131 | 22 | [Download](131/dataset.zip) |  |  |  |  |  |  |  |  |
| 132 | 53 | [Download](132/dataset.zip) |  |  |  |  |  |  |  |  |
| 133 | 30 | [Download](133/dataset.zip) |  |  |  |  |  |  |  |  |
| 134 | 18 | [Download](134/dataset.zip) |  |  |  |  |  |  |  |  |
| 135 | 35 | [Download](135/dataset.zip) |  |  |  |  |  |  |  |  |
| 136 | 31 | [Download](136/dataset.zip) |  |  |  |  |  |  |  |  |
| 137 | 60 | [Download](137/dataset.zip) |  |  |  |  |  |  |  |  |
| 138 | 52 | [Download](138/dataset.zip) |  |  |  |  |  |  |  |  |
| 139 | 16 | [Download](139/dataset.zip) |  |  |  |  |  |  |  |  |
| 140 | 17 | [Download](140/dataset.zip) |  |  |  |  |  |  |  |  |
| 141 | 41 | [Download](141/dataset.zip) |  |  |  |  |  |  |  |  |
| 142 | 49 | [Download](142/dataset.zip) |  |  |  |  |  |  |  |  |
| 143 | 37 | [Download](143/dataset.zip) |  |  |  |  |  |  |  |  |
| 144 | 14 | [Download](144/dataset.zip) |  |  |  |  |  |  |  |  |
| 145 | 26 | [Download](145/dataset.zip) |  |  |  |  |  |  |  |  |
| 146 | 31 | [Download](146/dataset.zip) |  |  |  |  |  |  |  |  |
| 147 | 32 | [Download](147/dataset.zip) |  |  |  |  |  |  |  |  |
| 148 | 21 | [Download](148/dataset.zip) |  |  |  |  |  |  |  |  |
| 149 | 28 | [Download](149/dataset.zip) |  |  |  |  |  |  |  |  |
| 150 | 15 | [Download](150/dataset.zip) |  |  |  |  |  |  |  |  |
| 151 | 21 | [Download](151/dataset.zip) |  |  |  |  |  |  |  |  |
| 152 | 33 | [Download](152/dataset.zip) |  |  |  |  |  |  |  |  |
| 153 | 26 | [Download](153/dataset.zip) |  |  |  |  |  |  |  |  |
| 154 | 17 | [Download](154/dataset.zip) |  |  |  |  |  |  |  |  |
| 155 | 14 | [Download](155/dataset.zip) |  |  |  |  |  |  |  |  |
| 156 | 27 | [Download](156/dataset.zip) |  |  |  |  |  |  |  |  |
| 157 | 15 | [Download](157/dataset.zip) |  |  |  |  |  |  |  |  |
| 158 | 12 | [Download](158/dataset.zip) |  |  |  |  |  |  |  |  |
| 159 | 21 | [Download](159/dataset.zip) |  |  |  |  |  |  |  |  |
| 160 | 31 | [Download](160/dataset.zip) |  |  |  |  |  |  |  |  |
| 161 | 21 | [Download](161/dataset.zip) |  |  |  |  |  |  |  |  |
| 162 | 11 | [Download](162/dataset.zip) |  |  |  |  |  |  |  |  |
| 163 | 13 | [Download](163/dataset.zip) |  |  |  |  |  |  |  |  |
| 164 | 32 | [Download](164/dataset.zip) |  |  |  |  |  |  |  |  |
| 165 | 8 | [Download](165/dataset.zip) |  |  |  |  |  |  |  |  |
| 166 | 16 | [Download](166/dataset.zip) |  |  |  |  |  |  |  |  |
| 167 | 16 | [Download](167/dataset.zip) |  |  |  |  |  |  |  |  |
| 168 | 19 | [Download](168/dataset.zip) |  |  |  |  |  |  |  |  |
| 169 | 22 | [Download](169/dataset.zip) |  |  |  |  |  |  |  |  |
| 170 | 8 | [Download](170/dataset.zip) |  |  |  |  |  |  |  |  |
| 171 | 21 | [Download](171/dataset.zip) |  |  |  |  |  |  |  |  |
| 172 | 9 | [Download](172/dataset.zip) |  |  |  |  |  |  |  |  |
| 173 | 14 | [Download](173/dataset.zip) |  |  |  |  |  |  |  |  |
| 174 | 8 | [Download](174/dataset.zip) |  |  |  |  |  |  |  |  |
| 175 | 24 | [Download](175/dataset.zip) |  |  |  |  |  |  |  |  |
| 176 | 43 | [Download](176/dataset.zip) |  |  |  |  |  |  |  |  |
| 177 | 27 | [Download](177/dataset.zip) |  |  |  |  |  |  |  |  |
| 178 | 11 | [Download](178/dataset.zip) |  |  |  |  |  |  |  |  |
| 179 | 18 | [Download](179/dataset.zip) |  |  |  |  |  |  |  |  |
| 180 | 26 | [Download](180/dataset.zip) |  |  |  |  |  |  |  |  |
| 181 | 26 | [Download](181/dataset.zip) |  |  |  |  |  |  |  |  |
| 182 | 33 | [Download](182/dataset.zip) |  |  |  |  |  |  |  |  |
| 183 | 8 | [Download](183/dataset.zip) |  |  |  |  |  |  |  |  |
| 184 | 17 | [Download](184/dataset.zip) |  |  |  |  |  |  |  |  |
| 185 | 12 | [Download](185/dataset.zip) |  |  |  |  |  |  |  |  |
| 186 | 10 | [Download](186/dataset.zip) |  |  |  |  |  |  |  |  |
| 187 | 17 | [Download](187/dataset.zip) |  |  |  |  |  |  |  |  |
| 188 | 11 | [Download](188/dataset.zip) |  |  |  |  |  |  |  |  |
| 189 | 5 | [Download](189/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 190 | 24 | [Download](190/dataset.zip) |  |  |  |  |  |  |  |  |
| 191 | 23 | [Download](191/dataset.zip) |  |  |  |  |  |  |  |  |
| 192 | 9 | [Download](192/dataset.zip) |  |  |  |  |  |  |  |  |
| 193 | 14 | [Download](193/dataset.zip) |  |  |  |  |  |  |  |  |
| 194 | 17 | [Download](194/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 148 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Intuit-GenSRF/toxigen-train-es | 2023-10-08T03:50:40.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 426023671
num_examples: 250880
download_size: 10528800
dataset_size: 426023671
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-train-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chunpingvi/dataset_tone4 | 2023-10-08T05:07:47.000Z | [
"region:us"
] | chunpingvi | null | null | null | 0 | 0 | Entry not found |
sordonia/wikipedia-en | 2023-10-10T21:16:05.000Z | [
"region:us"
] | sordonia | Wikipedia with math and latex included. | null | null | 0 | 0 | Entry not found |
SuodhanJ6/Money_laundering | 2023-10-08T05:12:51.000Z | [
"region:us"
] | SuodhanJ6 | null | null | null | 0 | 0 | |
Elriggs/pythia-6.9-rm | 2023-10-08T05:22:15.000Z | [
"region:us"
] | Elriggs | null | null | null | 0 | 0 | Entry not found |
m-aliabbas1/test_ner | 2023-10-08T05:35:00.000Z | [
"region:us"
] | m-aliabbas1 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 40184.8938547486
num_examples: 304
- name: test
num_bytes: 7138.106145251397
num_examples: 54
download_size: 8540
dataset_size: 47323.0
---
# Dataset Card for "test_ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hyder12/Fine-tuning-gpt-3.5-Dataset | 2023-10-08T05:38:45.000Z | [
"region:us"
] | Hyder12 | null | null | null | 0 | 0 | Entry not found |
katryo/jeneral-stb-2 | 2023-10-08T06:00:42.000Z | [
"region:us"
] | katryo | null | null | null | 0 | 0 | Entry not found |
datazeit/gpt_target_group_v1-2 | 2023-10-08T07:03:50.000Z | [
"region:us"
] | datazeit | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: category
dtype: string
- name: description
dtype: string
- name: result
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2879849
num_examples: 1984
download_size: 1125328
dataset_size: 2879849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gpt_target_group_v1-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yyy1227/test_public | 2023-10-08T06:19:38.000Z | [
"region:us"
] | yyy1227 | null | null | null | 0 | 0 | Entry not found |
dmarx/whats-in-a-name_v0.1_embeds_clip-b32 | 2023-10-08T06:31:14.000Z | [
"region:us"
] | dmarx | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: class_idx
dtype: int64
- name: name
dtype: string
- name: root
dtype: string
- name: image_id
dtype: string
- name: embed_type
dtype: string
- name: path
dtype: string
- name: embed
sequence: float32
- name: embed_normed
sequence: float32
- name: similarity@6
dtype: float64
- name: DIV@6
dtype: float64
- name: similarity@12
dtype: float64
- name: DIV@12
dtype: float64
- name: similarity@18
dtype: float64
- name: DIV@18
dtype: float64
- name: similarity@24
dtype: float64
- name: DIV@24
dtype: float64
splits:
- name: train
num_bytes: 149815296
num_examples: 34200
download_size: 72810192
dataset_size: 149815296
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "whats-in-a-name_v0.1_embeds_clip-b32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ohtaman/aozora | 2023-10-08T21:08:28.000Z | [
"region:us"
] | ohtaman | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
- name: filename
dtype: string
splits:
- name: train
num_bytes: 737611844.6300713
num_examples: 17006
- name: test
num_bytes: 4337362.36992868
num_examples: 100
download_size: 416278415
dataset_size: 741949207.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "aozora"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lidiapierre/fr_sexism_labelled | 2023-10-08T06:42:19.000Z | [
"region:us"
] | lidiapierre | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Sentences
dtype: string
- name: Label
dtype: int64
- name: fr_sentences
dtype: string
splits:
- name: train
num_bytes: 192216
num_examples: 1137
download_size: 119626
dataset_size: 192216
---
# Dataset Card for "fr_sexism_labelled"
Based on the Kaggle dataset [Sexist Workplace Statements](https://www.kaggle.com/datasets/dgrosz/sexist-workplace-statements).
This dataset features more than 1,100 workplace statements, roughly balanced between clearly sexist examples (label 1) and ambiguous or neutral cases (label 0).
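A minimal sketch of how one row and its label read under this scheme; the column names (`Sentences`, `Label`, `fr_sentences`) match the dataset's schema above, but the example row itself is invented for illustration:

```python
# Label scheme from the card: 1 = clearly sexist, 0 = ambiguous or neutral.
LABEL_MEANING = {1: "sexist", 0: "ambiguous/neutral"}

def describe(row: dict) -> str:
    """Render one example with its English source, French translation and label."""
    return f'{row["Sentences"]} / {row["fr_sentences"]} -> {LABEL_MEANING[row["Label"]]}'

example = {"Sentences": "She is too emotional to lead.",
           "fr_sentences": "Elle est trop émotive pour diriger.",
           "Label": 1}
print(describe(example))
```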
The original English dataset has been translated into French via machine translation with the [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) model. |
Asdiansyah/eimirdad-test | 2023-10-08T06:43:26.000Z | [
"region:us"
] | Asdiansyah | null | null | null | 0 | 0 | Entry not found |
jamsonE/dtv1 | 2023-10-08T07:35:58.000Z | [
"license:apache-2.0",
"region:us"
] | jamsonE | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-6ee4d3-93701145894 | 2023-10-08T07:00:09.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-b12b80-93702145895 | 2023-10-08T07:00:13.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
sajjadamjad/storyteller | 2023-10-08T07:02:57.000Z | [
"region:us"
] | sajjadamjad | null | null | null | 0 | 0 | Entry not found |
Psychxy/autotrain-data-athiba-man | 2023-10-08T08:24:08.000Z | [
"region:us"
] | Psychxy | null | null | null | 0 | 0 | Entry not found |
QJHao/sd-conf | 2023-10-08T08:02:53.000Z | [
"region:us"
] | QJHao | null | null | null | 0 | 0 | Entry not found |
tech-winning/health_insurance_test_set | 2023-10-08T09:35:10.000Z | [
"region:us"
] | tech-winning | null | null | null | 1 | 0 | (1)本数据集是医疗保险中文数据集。数据集采用单项选择的形式,可用于测试大语言模型在医疗保险领域的知识掌握程度。
(2)本数据集包含医保基础知识、医保监管方法和医保监管依据三大知识模块,覆盖医保政策文件、医保数据模型、医保监管规则定义和明细、诊疗项目目录、药品说明书、检查化验操作等多个细分领域的知识。
(3)本数据集基于医保知识库文本,运用Qwen-14B-chat自动构建。由于是大模型自动构建,部分测试数据可能存在错误和纰漏,但是整体质量较高,能在一定程度上反映测评大模型的知识掌握程度。 |
minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10 | 2023-10-08T08:41:24.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 48800727
num_examples: 1176
- name: validation
num_bytes: 4517266
num_examples: 134
download_size: 13294538
dataset_size: 53317993
---
# Dataset Card for "COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cantabile-kwok/ljspeech-1024-256-dur | 2023-10-08T08:47:40.000Z | [
"license:mit",
"region:us"
] | cantabile-kwok | null | null | null | 0 | 0 | ---
license: mit
---
|
shihuojian/RVC-WebUI | 2023-10-09T04:46:43.000Z | [
"region:us"
] | shihuojian | null | null | null | 0 | 0 | Entry not found |
junaav/1.5loras | 2023-10-10T23:33:35.000Z | [
"license:other",
"region:us"
] | junaav | null | null | null | 0 | 0 | ---
license: other
license_name: sdds
license_link: LICENSE
---
|
chunpingvi/dataset_tone5 | 2023-10-08T09:48:21.000Z | [
"region:us"
] | chunpingvi | null | null | null | 0 | 0 | Entry not found |
openskyml/models | 2023-10-08T10:32:59.000Z | [
"language:en",
"code",
"region:us"
] | openskyml | null | null | null | 0 | 0 | ---
language:
- en
tags:
- code
---
# Models
## GPTs:
[• Pigeon-TextGen](https://huggingface.co/openskyml/pigeon-textgen)
[• GPT-2](https://huggingface.co/gpt2)
## Chats:
[• Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat)
[• LLaMA-13B-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF)
## Diffusions:
[• SD-1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
[• DALL·E mini](https://huggingface.co/dalle-mini/dalle-mini)
|
openskyml/wikipedia | 2023-10-08T10:37:06.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aa",
"language:ab",
"language:ace",
"language:af",
"language:ak",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:cho",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:he",
"language:hi",
"language:hif",
"language:ho",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mai",
"language:mdf",
"language:mg",
"language:mh",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mus",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:na",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:ng",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:tcy",
"language:tdt",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | openskyml | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | null | 1 | 0 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- aa
- ab
- ace
- af
- ak
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- atj
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bi
- bjn
- bm
- bn
- bo
- bpy
- br
- bs
- bug
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- ch
- cho
- chr
- chy
- ckb
- co
- cr
- crh
- cs
- csb
- cu
- cv
- cy
- da
- de
- din
- diq
- dsb
- dty
- dv
- dz
- ee
- el
- eml
- en
- eo
- es
- et
- eu
- ext
- fa
- ff
- fi
- fj
- fo
- fr
- frp
- frr
- fur
- fy
- ga
- gag
- gan
- gd
- gl
- glk
- gn
- gom
- gor
- got
- gu
- gv
- ha
- hak
- haw
- he
- hi
- hif
- ho
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ig
- ii
- ik
- ilo
- inh
- io
- is
- it
- iu
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kbp
- kg
- ki
- kj
- kk
- kl
- km
- kn
- ko
- koi
- krc
- ks
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lbe
- lez
- lfn
- lg
- li
- lij
- lmo
- ln
- lo
- lrc
- lt
- ltg
- lv
- lzh
- mai
- mdf
- mg
- mh
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mus
- mwl
- my
- myv
- mzn
- na
- nah
- nan
- nap
- nds
- ne
- new
- ng
- nl
- nn
- 'no'
- nov
- nrf
- nso
- nv
- ny
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pdc
- pfl
- pi
- pih
- pl
- pms
- pnb
- pnt
- ps
- pt
- qu
- rm
- rmy
- rn
- ro
- ru
- rue
- rup
- rw
- sa
- sah
- sat
- sc
- scn
- sco
- sd
- se
- sg
- sgs
- sh
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- srn
- ss
- st
- stq
- su
- sv
- sw
- szl
- ta
- tcy
- tdt
- te
- tg
- th
- ti
- tk
- tl
- tn
- to
- tpi
- tr
- ts
- tt
- tum
- tw
- ty
- tyv
- udm
- ug
- uk
- ur
- uz
- ve
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xmf
- yi
- yo
- yue
- za
- zea
- zh
- zu
language_bcp47:
- nds-nl
dataset_info:
- config_name: 20220301.de
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8905282792
num_examples: 2665357
download_size: 6523215105
dataset_size: 8905282792
- config_name: 20220301.en
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 20275516160
num_examples: 6458670
download_size: 20598313936
dataset_size: 20275516160
- config_name: 20220301.fr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7375920768
num_examples: 2402095
download_size: 5602565274
dataset_size: 7375920768
- config_name: 20220301.frr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9129760
num_examples: 15199
download_size: 12438017
dataset_size: 9129760
- config_name: 20220301.it
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4539944448
num_examples: 1743035
download_size: 3516441239
dataset_size: 4539944448
- config_name: 20220301.simple
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 235072360
num_examples: 205328
download_size: 239682796
dataset_size: 235072360
config_names:
- 20220301.aa
- 20220301.ab
- 20220301.ace
- 20220301.ady
- 20220301.af
- 20220301.ak
- 20220301.als
- 20220301.am
- 20220301.an
- 20220301.ang
- 20220301.ar
- 20220301.arc
- 20220301.arz
- 20220301.as
- 20220301.ast
- 20220301.atj
- 20220301.av
- 20220301.ay
- 20220301.az
- 20220301.azb
- 20220301.ba
- 20220301.bar
- 20220301.bat-smg
- 20220301.bcl
- 20220301.be
- 20220301.be-x-old
- 20220301.bg
- 20220301.bh
- 20220301.bi
- 20220301.bjn
- 20220301.bm
- 20220301.bn
- 20220301.bo
- 20220301.bpy
- 20220301.br
- 20220301.bs
- 20220301.bug
- 20220301.bxr
- 20220301.ca
- 20220301.cbk-zam
- 20220301.cdo
- 20220301.ce
- 20220301.ceb
- 20220301.ch
- 20220301.cho
- 20220301.chr
- 20220301.chy
- 20220301.ckb
- 20220301.co
- 20220301.cr
- 20220301.crh
- 20220301.cs
- 20220301.csb
- 20220301.cu
- 20220301.cv
- 20220301.cy
- 20220301.da
- 20220301.de
- 20220301.din
- 20220301.diq
- 20220301.dsb
- 20220301.dty
- 20220301.dv
- 20220301.dz
- 20220301.ee
- 20220301.el
- 20220301.eml
- 20220301.en
- 20220301.eo
- 20220301.es
- 20220301.et
- 20220301.eu
- 20220301.ext
- 20220301.fa
- 20220301.ff
- 20220301.fi
- 20220301.fiu-vro
- 20220301.fj
- 20220301.fo
- 20220301.fr
- 20220301.frp
- 20220301.frr
- 20220301.fur
- 20220301.fy
- 20220301.ga
- 20220301.gag
- 20220301.gan
- 20220301.gd
- 20220301.gl
- 20220301.glk
- 20220301.gn
- 20220301.gom
- 20220301.gor
- 20220301.got
- 20220301.gu
- 20220301.gv
- 20220301.ha
- 20220301.hak
- 20220301.haw
- 20220301.he
- 20220301.hi
- 20220301.hif
- 20220301.ho
- 20220301.hr
- 20220301.hsb
- 20220301.ht
- 20220301.hu
- 20220301.hy
- 20220301.ia
- 20220301.id
- 20220301.ie
- 20220301.ig
- 20220301.ii
- 20220301.ik
- 20220301.ilo
- 20220301.inh
- 20220301.io
- 20220301.is
- 20220301.it
- 20220301.iu
- 20220301.ja
- 20220301.jam
- 20220301.jbo
- 20220301.jv
- 20220301.ka
- 20220301.kaa
- 20220301.kab
- 20220301.kbd
- 20220301.kbp
- 20220301.kg
- 20220301.ki
- 20220301.kj
- 20220301.kk
- 20220301.kl
- 20220301.km
- 20220301.kn
- 20220301.ko
- 20220301.koi
- 20220301.krc
- 20220301.ks
- 20220301.ksh
- 20220301.ku
- 20220301.kv
- 20220301.kw
- 20220301.ky
- 20220301.la
- 20220301.lad
- 20220301.lb
- 20220301.lbe
- 20220301.lez
- 20220301.lfn
- 20220301.lg
- 20220301.li
- 20220301.lij
- 20220301.lmo
- 20220301.ln
- 20220301.lo
- 20220301.lrc
- 20220301.lt
- 20220301.ltg
- 20220301.lv
- 20220301.mai
- 20220301.map-bms
- 20220301.mdf
- 20220301.mg
- 20220301.mh
- 20220301.mhr
- 20220301.mi
- 20220301.min
- 20220301.mk
- 20220301.ml
- 20220301.mn
- 20220301.mr
- 20220301.mrj
- 20220301.ms
- 20220301.mt
- 20220301.mus
- 20220301.mwl
- 20220301.my
- 20220301.myv
- 20220301.mzn
- 20220301.na
- 20220301.nah
- 20220301.nap
- 20220301.nds
- 20220301.nds-nl
- 20220301.ne
- 20220301.new
- 20220301.ng
- 20220301.nl
- 20220301.nn
- 20220301.no
- 20220301.nov
- 20220301.nrm
- 20220301.nso
- 20220301.nv
- 20220301.ny
- 20220301.oc
- 20220301.olo
- 20220301.om
- 20220301.or
- 20220301.os
- 20220301.pa
- 20220301.pag
- 20220301.pam
- 20220301.pap
- 20220301.pcd
- 20220301.pdc
- 20220301.pfl
- 20220301.pi
- 20220301.pih
- 20220301.pl
- 20220301.pms
- 20220301.pnb
- 20220301.pnt
- 20220301.ps
- 20220301.pt
- 20220301.qu
- 20220301.rm
- 20220301.rmy
- 20220301.rn
- 20220301.ro
- 20220301.roa-rup
- 20220301.roa-tara
- 20220301.ru
- 20220301.rue
- 20220301.rw
- 20220301.sa
- 20220301.sah
- 20220301.sat
- 20220301.sc
- 20220301.scn
- 20220301.sco
- 20220301.sd
- 20220301.se
- 20220301.sg
- 20220301.sh
- 20220301.si
- 20220301.simple
- 20220301.sk
- 20220301.sl
- 20220301.sm
- 20220301.sn
- 20220301.so
- 20220301.sq
- 20220301.sr
- 20220301.srn
- 20220301.ss
- 20220301.st
- 20220301.stq
- 20220301.su
- 20220301.sv
- 20220301.sw
- 20220301.szl
- 20220301.ta
- 20220301.tcy
- 20220301.te
- 20220301.tet
- 20220301.tg
- 20220301.th
- 20220301.ti
- 20220301.tk
- 20220301.tl
- 20220301.tn
- 20220301.to
- 20220301.tpi
- 20220301.tr
- 20220301.ts
- 20220301.tt
- 20220301.tum
- 20220301.tw
- 20220301.ty
- 20220301.tyv
- 20220301.udm
- 20220301.ug
- 20220301.uk
- 20220301.ur
- 20220301.uz
- 20220301.ve
- 20220301.vec
- 20220301.vep
- 20220301.vi
- 20220301.vls
- 20220301.vo
- 20220301.wa
- 20220301.war
- 20220301.wo
- 20220301.wuu
- 20220301.xal
- 20220301.xh
- 20220301.xmf
- 20220301.yi
- 20220301.yo
- 20220301.za
- 20220301.zea
- 20220301.zh
- 20220301.zh-classical
- 20220301.zh-min-nan
- 20220301.zh-yue
- 20220301.zu
---
# Dataset Card for Wikipedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia dataset containing cleaned articles in all languages.
The datasets are built from the Wikipedia dumps
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article, cleaned to strip
markup and unwanted sections (references, etc.).
The articles are parsed using the ``mwparserfromhell`` tool.
To load this dataset you need to install Apache Beam and ``mwparserfromhell`` first:
```
pip install apache_beam mwparserfromhell
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("wikipedia", language="sw", date="20220120", beam_runner=...)
```
where `beam_runner` can be any Apache Beam-supported runner for (distributed) data processing
(see [here](https://beam.apache.org/documentation/runners/capability-matrix/)).
Pass "DirectRunner" to run it on your machine.
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
Some subsets of Wikipedia have already been processed by Hugging Face, and you can load them directly with:
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```
The list of pre-processed subsets is:
- "20220301.de"
- "20220301.en"
- "20220301.fr"
- "20220301.frr"
- "20220301.it"
- "20220301.simple"
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
Some subsets of Wikipedia have already been processed by Hugging Face, as you can see below:
#### 20220301.de
- **Size of downloaded dataset files:** 6.84 GB
- **Size of the generated dataset:** 9.34 GB
- **Total amount of disk used:** 16.18 GB
#### 20220301.en
- **Size of downloaded dataset files:** 21.60 GB
- **Size of the generated dataset:** 21.26 GB
- **Total amount of disk used:** 42.86 GB
#### 20220301.fr
- **Size of downloaded dataset files:** 5.87 GB
- **Size of the generated dataset:** 7.73 GB
- **Total amount of disk used:** 13.61 GB
#### 20220301.frr
- **Size of downloaded dataset files:** 13.04 MB
- **Size of the generated dataset:** 9.57 MB
- **Total amount of disk used:** 22.62 MB
#### 20220301.it
- **Size of downloaded dataset files:** 3.69 GB
- **Size of the generated dataset:** 4.76 GB
- **Total amount of disk used:** 8.45 GB
#### 20220301.simple
- **Size of downloaded dataset files:** 251.32 MB
- **Size of the generated dataset:** 246.49 MB
- **Total amount of disk used:** 497.82 MB
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
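As a minimal sketch, the schema above can be checked against a record. The record here is hand-written to mirror the instance shown earlier, not fetched from the dataset:

```python
# Hypothetical record mirroring the example instance above; not fetched from the dataset.
record = {
    "id": "1",
    "url": "https://simple.wikipedia.org/wiki/April",
    "title": "April",
    "text": "April is the fourth month...",
}

# All four fields are plain strings, matching the field list above.
expected_fields = ("id", "url", "title", "text")
assert set(record) == set(expected_fields)
assert all(isinstance(record[name], str) for name in expected_fields)
```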
### Data Splits
Here are the number of examples for several configurations:
| name | train |
|-----------------|--------:|
| 20220301.de | 2665357 |
| 20220301.en | 6458670 |
| 20220301.fr | 2402095 |
| 20220301.frr | 15199 |
| 20220301.it | 1743035 |
| 20220301.simple | 205328 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA or a CC BY-SA-compatible license and cannot be reused under the GFDL; such
text is identified in the page footer, in the page history, or on the discussion page of the article that uses
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
hk-kaden-kim/uzh-hs23-etsp-eval-single-nogrid-bar | 2023-10-08T10:54:02.000Z | [
"region:us"
] | hk-kaden-kim | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 5078088.0
num_examples: 100
download_size: 5042214
dataset_size: 5078088.0
---
# Dataset Card for "uzh-hs23-etsp-eval-single-nogrid-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hk-kaden-kim/uzh-hs23-etsp-eval-single-nogrid-line | 2023-10-08T10:54:11.000Z | [
"region:us"
] | hk-kaden-kim | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 3881934.0
num_examples: 100
download_size: 3869794
dataset_size: 3881934.0
---
# Dataset Card for "uzh-hs23-etsp-eval-single-nogrid-line"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Guke/imoto_sora | 2023-10-08T13:06:31.000Z | [
"license:mit",
"region:us"
] | Guke | null | null | null | 0 | 0 | ---
license: mit
---
|
arieg/spike_prime_robot_images | 2023-10-08T11:40:18.000Z | [
"license:mit",
"region:us"
] | arieg | null | null | null | 0 | 0 | ---
license: mit
---
|
Praghxx/Prag | 2023-10-08T12:00:08.000Z | [
"license:openrail",
"region:us"
] | Praghxx | null | null | null | 0 | 0 | ---
license: openrail
---
|
chunpingvi/dataset_tone5a | 2023-10-08T11:56:03.000Z | [
"region:us"
] | chunpingvi | null | null | null | 0 | 0 | Entry not found |
llmware/rag_instruct_test_dataset_0.1 | 2023-10-08T17:04:46.000Z | [
"license:apache-2.0",
"finance",
"legal",
"region:us"
] | llmware | null | null | null | 1 | 0 | ---
license: apache-2.0
tags:
- finance
- legal
pretty_name: RAG Instruct Test Dataset - Basic - v0.1
---
# Dataset Card for RAG-Instruct-Test-Dataset
### Dataset Summary
This is a test dataset for basic "retrieval augmented generation" (RAG) use cases in the enterprise, especially in finance and legal. It includes 100 samples with context passages drawn from common retrieval scenarios, e.g., financial news, earnings releases,
contracts, invoices, technical articles, general news, and short texts. The primary use case is to evaluate the effectiveness of an
instruct-fine-tuned LLM on closed-context, fact-based question answering, key-value extraction, and summarization with bullet points. The context passages in this test set are relatively short, ranging from ~100 to ~500 tokens. The set was designed for use with the
BLING series of models but is suitable for comparison evaluations of any LLM in basic RAG scenarios.
We will be enhancing this test dataset and creating more advanced test datasets in the future.
### Languages
English
## Dataset Structure
100 JSONL samples with 4 keys - "query" | "context" | "answer" | "sample_number"
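A JSONL file with this shape can be read line by line. The sample below is hypothetical, written only to mirror the four-key schema; real samples come from the dataset file:

```python
import json

# One hypothetical JSONL line mirroring the schema described above
# ("query" | "context" | "answer" | "sample_number"); not a real sample.
lines = [
    '{"query": "What is the invoice due date?", '
    '"context": "Invoice #100 is due within 30 days of receipt.", '
    '"answer": "within 30 days of receipt", "sample_number": 1}',
]

samples = [json.loads(line) for line in lines]
for sample in samples:
    # Every sample carries exactly the four documented keys.
    assert set(sample) == {"query", "context", "answer", "sample_number"}
```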
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events.
Any other names were created/masked and any overlap with real companies or people is coincidental.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project and would like to participate and work with us!
|
mfmezger/de_test | 2023-10-08T12:06:00.000Z | [
"region:us"
] | mfmezger | null | null | null | 0 | 0 | Entry not found |
Plona/claims_1000 | 2023-10-08T12:11:38.000Z | [
"region:us"
] | Plona | null | null | null | 0 | 0 | Entry not found |
maxzancanaro/autotrain-data-data-protection_194 | 2023-10-08T12:30:49.000Z | [
"task_categories:text-classification",
"region:us"
] | maxzancanaro | null | null | null | 0 | 0 | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: data-protection_194
## Dataset Description
This dataset has been automatically processed by AutoTrain for project data-protection_194.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "grindr conserver\u00e0 i registri delle applicazioni in virt\u00f9 della riservatezza, in un ambiente controllato e sicuro, per sei (6) mesi dalla data di sottoscrizione",
"target": 0
},
{
"text": "riceve una licenza revocabile, non- esclusiva, non-cedibile, limitata e personale per l'accesso e la scelta dei diritti che ea rende espressamente disponibili",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['data protection', 'other'], id=None)"
}
```
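The `ClassLabel` above implies an integer mapping: each `target` is an index into the `names` list, which is why the "data protection" sample shown earlier has `target: 0`. A sketch of that mapping:

```python
# Mapping implied by ClassLabel(names=['data protection', 'other']):
# targets are indices into the names list.
names = ["data protection", "other"]
label2id = {name: i for i, name in enumerate(names)}

assert label2id["data protection"] == 0  # matches the first sample's target
assert names[1] == "other"               # matches the second sample's target
```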
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 154 |
| valid | 40 |
|
pixel-coping/c4_derived | 2023-10-08T12:33:07.000Z | [
"region:us"
] | pixel-coping | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: c4
path: data/c4-*
- split: biomedical
path: data/biomedical-*
- split: counterfactual
path: data/counterfactual-*
- split: academic
path: data/academic-*
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
splits:
- name: c4
num_bytes: 1820234
num_examples: 1000
- name: biomedical
num_bytes: 1803036
num_examples: 989
- name: counterfactual
num_bytes: 1813882
num_examples: 985
- name: academic
num_bytes: 1199491
num_examples: 986
download_size: 4124290
dataset_size: 6636643
---
# Dataset Card for "c4_derived"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PenguinMan/vec_dbs | 2023-10-09T07:02:54.000Z | [
"license:apache-2.0",
"region:us"
] | PenguinMan | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
Ronal999/finance-alpaca-demo | 2023-10-08T12:51:43.000Z | [
"region:us"
] | Ronal999 | null | null | null | 1 | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 825832
num_examples: 690
download_size: 456544
dataset_size: 825832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finance-alpaca-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/catholic_4800_dataset_20231008_131846 | 2023-10-08T13:18:48.000Z | [
"region:us"
] | tr416 | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 760128.0
num_examples: 296
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 52079
dataset_size: 767832.0
---
# Dataset Card for "catholic_4800_dataset_20231008_131846"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juliagsy/immute | 2023-10-08T13:20:51.000Z | [
"region:us"
] | juliagsy | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: ytid
dtype: string
- name: start_s
dtype: int64
- name: end_s
dtype: int64
- name: caption
dtype: string
- name: image_link
dtype: string
splits:
- name: train
num_bytes: 2213251
num_examples: 5521
download_size: 918845
dataset_size: 2213251
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "immute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
moayyad-16/potato_and_weeds-detection_dataset | 2023-10-08T14:00:34.000Z | [
"region:us"
] | moayyad-16 | null | null | null | 0 | 0 | Entry not found |
akshaysaju9660/llamav2_9660 | 2023-10-08T13:36:51.000Z | [
"region:us"
] | akshaysaju9660 | null | null | null | 0 | 0 | Entry not found |
NikiTricky/test1 | 2023-10-08T14:20:47.000Z | [
"region:us"
] | NikiTricky | null | null | null | 0 | 0 | Entry not found |
isp-uv-es/demo | 2023-10-08T13:47:29.000Z | [
"license:apache-2.0",
"region:us"
] | isp-uv-es | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
JelleWestra/splat-test | 2023-10-08T14:10:05.000Z | [
"license:mit",
"region:us"
] | JelleWestra | null | null | null | 0 | 0 | ---
license: mit
---
|
Beracles/test | 2023-10-10T07:07:35.000Z | [
"license:mit",
"region:us"
] | Beracles | null | null | null | 0 | 0 | ---
license: mit
---
|
DarkyMan/nsfw-image-classification | 2023-10-08T15:03:08.000Z | [
"region:us"
] | DarkyMan | null | null | null | 0 | 0 | Entry not found |
oroikon/vistext_chart_captioning | 2023-10-08T14:12:09.000Z | [
"region:us"
] | oroikon | null | null | null | 0 | 0 | Entry not found |
mcorsa/swifterX-4k | 2023-10-08T15:02:45.000Z | [
"license:apache-2.0",
"region:us"
] | mcorsa | null | null | null | 0 | 0 | ---
license: apache-2.0
---
|
vietlegalqa/fewshot_tvpl_2023 | 2023-10-08T14:46:13.000Z | [
"region:us"
] | vietlegalqa | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: Index
dtype: int64
- name: URL
dtype: string
- name: Q
dtype: string
- name: Doc
dtype: string
- name: MASKED Doc
dtype: string
- name: Ans
dtype: string
splits:
- name: train
num_bytes: 68105
num_examples: 10
download_size: 49074
dataset_size: 68105
---
# Dataset Card for "fewshot_tvpl_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |