id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ciscak/homesecurityzones | ciscak | 2023-11-27T07:27:51Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-27T07:27:51Z | 2023-11-27T04:29:43.000Z | 2023-11-27T04:29:43 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seongwoon/labor_market_context_data | seongwoon | 2023-11-27T04:52:19Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-11-27T04:52:19Z | 2023-11-27T04:50:43.000Z | 2023-11-27T04:50:43 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cp229/pubmed-summarization | cp229 | 2023-11-27T05:01:47Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-27T05:01:47Z | 2023-11-27T05:01:46.000Z | 2023-11-27T05:01:46 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/archive-dolma-v1 | allenai | 2023-11-27T05:31:20Z | 0 | 0 | null | [
"task_categories:text-generation",
"size_categories:n>1T",
"language:en",
"license:other",
"language-modeling",
"casual-lm",
"llm",
"region:us"
] | 2023-11-27T05:31:20Z | 2023-11-27T05:25:00.000Z | 2023-11-27T05:25:00 | ---
license: other
license_name: impact-license-medium-risk
license_link: https://allenai.org/licenses/impact-mr
viewer: false
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- casual-lm
- llm
pretty_name: Dolma
size_categories:
- n>1T
extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [**AI2 ImpACT License - Medium Risk Artifacts (“MR Agreement”)**](https://allenai.org/licenses/impact-mr) and completing all fields below."
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the medium risk artifact(s): text
I AGREE to the terms and conditions of the MR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
---
# Dolma
<img alt="Dolma's official logo. It's dolma written in yellow, round lowercase letters over a blue background." src="https://raw.githubusercontent.com/allenai/dolma/main/docs/assets/AI2_Blog_1400x685_2x.webp" width="100%">
Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It is openly released under AI2’s ImpACT license as a medium risk artifact.
More information:
- Read the Dolma **announcement blogpost** [on Medium](https://soldni.medium.com/dolma-3-trillion-tokens-open-llm-corpus-9a0ff4b8da64);
- Learn more about Dolma on its [**Data Sheet**](https://drive.google.com/file/d/12gOf5I5RytsD159nSP7iim_5zN31FCXq/view?usp=drive_link);
- Review Dolma's [**ImpACT license** for medium risk artifacts](https://allenai.org/licenses/impact-mr);
- Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
- Want to request removal of personal data? Use [this form](https://forms.gle/q4BNUUxUxKwKkfdT6) to notify us of documents containing PII about a specific user.
To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!
## Summary Statistics
|**Source**|**Type**|**Gzip files (GB)**|**Documents (millions)**|**[GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) Tokens (billions)**|
|:---|:---:|:---:|:---:|:----:|
|[CommonCrawl](https://commoncrawl.org/)|web|4,197|4,600|2,415|
|[C4](https://huggingface.co/datasets/allenai/c4)|web|302|364|175|
|[peS2o](https://huggingface.co/datasets/allenai/peS2o)|academic|150|38.8|57|
|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|code|319|236|430|
|[Project Gutenberg](https://www.gutenberg.org/)|books|6.6|0.052|4.8|
|[Wikipedia](https://dumps.wikimedia.org/)|encyclopedic|5.8|6.1|3.6|
||**Total** |**4980.4**|**5,245**|**3,084**|
## Download
The fastest way to download Dolma is to directly download the individual files across multiple threads.
This can be achieved using `wget` or the [aria2](https://github.com/aria2/aria2) package, available for Linux/Mac/Windows (`sudo apt-get install aria2` on Ubuntu).
For downloading individual files, simply use `wget` as follows:
`wget --header 'Authorization: Bearer YOUR_HF_HUB_ACCESS_TOKEN' https://huggingface.co/datasets/allenai/dolma/resolve/main/data/peS2o/s2_v3-0000.json.gz`
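The same single-file download can be sketched in Python with only the standard library. This is an illustrative sketch: the token is a placeholder, and the snippet only constructs the authenticated request rather than performing the transfer.

```python
import urllib.request

def build_dolma_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for a single Dolma file.

    `token` is a placeholder for your Hugging Face Hub access token.
    """
    url = f"https://huggingface.co/datasets/allenai/dolma/resolve/main/{path}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_dolma_request("data/peS2o/s2_v3-0000.json.gz", "YOUR_HF_HUB_ACCESS_TOKEN")
# urllib.request.urlopen(req) would then stream the file contents.
```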
For downloading many files across multiple threads, first prepare a `.txt` file with the URLs you would like, e.g. via the script below:
```python
OUT_DIRECTORY = "/scratch/dolma/data"
# URLs for cc_en_head
cc_en_head_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_head/cc_en_head-"
cc_en_head_url_list = [f"{cc_en_head_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_head\n out=cc_en_head-{str(i).zfill(4)}.json.gz" for i in range(612)]
# URLs for cc_en_middle
cc_en_middle_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_middle/cc_en_middle-"
cc_en_middle_url_list = [f"{cc_en_middle_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_middle\n out=cc_en_middle-{str(i).zfill(4)}.json.gz" for i in range(777)]
# URLs for cc_en_tail
cc_en_tail_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_tail/cc_en_tail-"
cc_en_tail_url_list = [f"{cc_en_tail_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_tail\n out=cc_en_tail-{str(i).zfill(4)}.json.gz" for i in range(1493)]
# URLs for s2_v3
s2_v3_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/peS2o/s2_v3-"
s2_v3_url_list = [f"{s2_v3_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/peS2o\n out=s2_v3-{str(i).zfill(4)}.json.gz" for i in range(42)]
# URLs for The Stack
LANG_TO_FILES = {'lasso': 1, 'nsis': 1, 'literate-agda': 1, 'metal': 1, 'xojo': 1, 'max': 8, 'jupyter-notebook': 101, 'asp': 7, 'elixir': 14, 'html+erb': 19, 'julia': 22, 'dart': 63, 'ragel-in-ruby-host': 1, 'api-blueprint': 1, 'gams': 1, 'tex': 71, 'xml': 101, 'smalltalk': 17, 'cmake': 11, 'piglatin': 1, "cap'n-proto": 1, 'common-lisp': 21, 'stylus': 3, 'typescript': 101, 'jflex': 1, 'factor': 1, 'arc': 1, 'parrot-internal-representation': 1, 'aspectj': 1, 'go': 101, 'urweb': 1, 'dns-zone': 1, 'purebasic': 1, 'toml': 15, 'erlang': 11, 'hy': 1, 'component-pascal': 2, 'oz': 1, 'opa': 1, 'handlebars': 10, 'gas': 15, 'less': 17, 'gnuplot': 15, 'harbour': 1, 'vhdl': 16, 'octave': 1, 'powershell': 21, 'clips': 1, 'fish': 1, 'prolog': 1, 'sparql': 1, 'objective-j': 1, 'scaml': 1, 'twig': 20, 'gettext-catalog': 101, 'purescript': 2, 'vala': 1, 'gosu': 1, 'apacheconf': 1, 'xc': 1, 'lean': 3, 'mako': 1, 'r': 4, 'unrealscript': 1, 'solidity': 21, 'pike': 1, 'cartocss': 1, 'maple': 1, 'graphql': 3, 'unity3d-asset': 101, 'swift': 101, 'dockerfile': 13, 'digital-command-language': 1, 'scala': 83, 'sqf': 2, 'logtalk': 1, 'coq': 1, 'shellsession': 1, 'befunge': 1, 'nu': 1, 'ecere-projects': 1, 'zimpl': 1, 'shen': 1, 'golo': 1, 'web-ontology-language': 12, 'sas': 2, 'uno': 1, 'livescript': 1, 'literate-haskell': 1, 'clojure': 8, 'perl6': 1, 'zig': 3, 'liquid': 2, 'ec': 1, 'blitzbasic': 1, 'sql': 101, 'http': 2, 'xproc': 1, 'kit': 1, 'textile': 1, 'netlinx': 1, 'propeller-spin': 1, 'cython': 5, 'realbasic': 1, 'dogescript': 1, 'llvm': 9, 'pawn': 1, 'groff': 40, 'html+django': 3, 'csound': 1, 'd': 1, 'agda': 2, 'css': 101, 'yacc': 7, 'robotframework': 1, 'kotlin': 101, 'grace': 1, 'abap': 2, 'blitzmax': 1, 'webassembly': 3, 'ampl': 1, 'postscript': 16, 'nit': 1, 'gentoo-eclass': 1, 'xpages': 1, 'linker-script': 2, 'yang': 3, 'jade': 4, 'standard-ml': 6, 'javascript': 101, 'moonscript': 1, 'mtml': 1, 'saltstack': 1, 'freemarker': 5, 'ston': 1, 'html+eex': 1, 'xs': 1, 'c++': 101, 
'matlab': 1, 'm4': 2, 'xbase': 1, 'perl': 37, 'emacs-lisp': 7, 'bison': 1, 'slim': 2, 'grammatical-framework': 1, 'rdoc': 1, 'nix': 10, 'clean': 1, 'module-management-system': 1, 'nimrod': 6, 'raml': 1, 'forth': 1, 'squirrel': 1, 'alloy': 1, 'opencl': 3, 'c': 101, 'sass': 4, 'eiffel': 2, 'papyrus': 1, 'html': 109, 'java': 101, 'hcl': 14, 'isabelle': 2, 'markdown': 101, 'gentoo-ebuild': 2, 'objdump': 1, 'emberscript': 1, 'text': 101, 'bro': 1, 'opal': 1, 'haskell': 35, 'mupad': 1, 'desktop': 1, 'modelica': 2, 'coldfusion-cfc': 2, 'fantom': 1, 'glsl': 10, 'ocaml': 16, 'nesc': 2, 'scheme': 7, 'crystal': 5, 'tcsh': 1, 'c2hs-haskell': 1, 'idris': 1, 'logos': 4, 'coffeescript': 13, 'g-code': 10, 'sage': 1, 'haml': 4, 'tcl': 7, 'smt': 5, 'ox': 1, 'chuck': 1, 'xquery': 1, 'batchfile': 7, 'pod': 2, 'xtend': 1, 'restructuredtext': 61, 'rmarkdown': 1, 'turtle': 33, 'jsx': 45, 'protocol-buffer': 8, "ren'py": 2, 'diff': 32, 'slash': 1, 'darcs-patch': 1, 'numpy': 1, 'augeas': 1, 'wisp': 1, 'edn': 15, 'ooc': 1, 'bitbake': 2, 'labview': 1, 'inform-7': 1, 'rust': 101, 'creole': 1, 'apl': 1, 'arduino': 11, 'openscad': 2, 'cuda': 9, 'thrift': 1, 'yaml': 101, 'fancy': 1, 'coldfusion': 1, 'python': 101, 'clarion': 1, 'glyph': 1, 'parrot': 1, 'lookml': 1, 'java-server-pages': 19, 'oxygene': 1, 'flux': 1, 'scilab': 1, 'groovy-server-pages': 2, 'rhtml': 1, 'eagle': 52, 'parrot-assembly': 1, 'igor-pro': 1, 'webidl': 1, 'bluespec': 1, 'unified-parallel-c': 1, 'smali': 38, 'haxe': 9, 'ada': 7, 'lua': 48, 'pascal': 21, 'html+php': 6, 'irc-log': 1, 'x10': 1, 'netlogo': 1, 'ioke': 1, 'dm': 1, 'self': 1, 'elm': 5, 'ats': 1, 'brainfuck': 1, 'mask': 1, 'rouge': 1, 'turing': 1, 'lex': 2, 'gap': 1, 'pogoscript': 1, 'kicad': 30, 'io': 1, 'objective-c++': 8, 'qml': 4, 'redcode': 1, 'autoit': 2, 'processing': 4, 'systemverilog': 6, 'gdscript': 5, 'f-sharp': 12, 'fortran': 23, 'monkey': 1, 'c-sharp': 101, 'xslt': 9, 'viml': 6, 'renderscript': 1, 'scss': 84, 'cucumber': 4, 'verilog': 1, 'genshi': 1, 
'racket': 1, 'krl': 1, 'actionscript': 10, 'pan': 1, 'cirru': 1, 'chapel': 1, 'pure-data': 2, 'm': 1, 'applescript': 1, 'inno-setup': 1, 'volt': 1, 'myghty': 1, 'groovy': 17, 'ags-script': 1, 'mirah': 1, 'lsl': 1, 'brightscript': 1, 'python-traceback': 1, 'sourcepawn': 2, 'maxscript': 1, 'zephir': 1, 'supercollider': 1, 'mathematica': 20, 'awk': 1, 'autohotkey': 2, 'lfe': 1, 'ruby': 101, 'visual-basic': 20, 'ini': 59, 'red': 1, 'omgrofl': 1, 'idl': 1, 'rebol': 1, 'vue': 101, 'ninja': 2, 'ecl': 1, 'lolcode': 1, 'tea': 1, 'txl': 1, 'smarty': 9, 'vcl': 1, 'php': 101, 'literate-coffeescript': 1, 'click': 1, 'pony': 1, 'mediawiki': 5, 'stata': 5, 'stan': 1, 'nginx': 1, 'asciidoc': 16, 'antlr': 1, 'cobol': 1, 'org': 5, 'latte': 1, 'makefile': 32, 'ceylon': 1, 'graphviz-(dot)': 13, 'lilypond': 1, 'dylan': 1, 'qmake': 1, 'muf': 1, 'j': 1, 'pov-ray-sdl': 1, 'jasmin': 1, 'shell': 73, 'cycript': 1, 'boo': 1, 'hlsl': 2}
stack_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/stack-code/"
stack_url_list = []
for lang, num_files in sorted(LANG_TO_FILES.items()):
    for i in range(num_files):
        stack_url_list.append(f"{stack_base_url}{lang}/v3-{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/stack-code/{lang}\n out=v3-{str(i).zfill(4)}.json.gz")
# Combine all URL lists
all_url_list = cc_en_head_url_list + cc_en_middle_url_list + cc_en_tail_url_list + s2_v3_url_list + stack_url_list
# Write the combined list of URLs to files.txt
with open("files.txt", "w") as out:
    for url in all_url_list:
        out.write(url + "\n")
```
Then you can download them all in parallel using:
`aria2c --input-file files.txt --header 'Authorization: Bearer YOUR_HF_HUB_ACCESS_TOKEN'`
You can also add `-s` to increase the number of connections, e.g. `-s 10` (defaults to 5).
To reproduce the exact file counts used for The Stack in the above script (`LANG_TO_FILES`), proceed as follows:
First fetch the repository file listing without downloading the actual files (fast, since only LFS pointers are cloned): `GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/allenai/dolma.git`
Then run:
```python
import os

directory = "dolma/data/stack-code"
folder_dict = {}
for folder in os.listdir(directory):
    folder_path = os.path.join(directory, folder)
    if os.path.isdir(folder_path):
        # Count only regular files in each language folder
        file_count = len([f for f in os.listdir(folder_path) if os.path.isfile(os.path.join(folder_path, f))])
        folder_dict[folder] = file_count
print(folder_dict)
```
| [
-0.5167892575263977,
-0.5468116402626038,
0.2582781910896301,
0.0716937854886055,
-0.03642783313989639,
0.4475589096546173,
-0.1702003926038742,
-0.1961478441953659,
0.5595974922180176,
0.2060997039079666,
-0.5839243531227112,
-0.8254532814025879,
-0.5435904264450073,
0.2304079532623291,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seongwoon/industry-occupation | seongwoon | 2023-11-27T05:26:37Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-11-27T05:26:37Z | 2023-11-27T05:26:02.000Z | 2023-11-27T05:26:02 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zicsx/hindi-bookcorpus2 | zicsx | 2023-11-27T09:30:49Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:30:49Z | 2023-11-27T05:38:13.000Z | 2023-11-27T05:38:13 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10234923744
num_examples: 13542
download_size: 0
dataset_size: 10234923744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hindi-bookcorpus2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.238590806722641,
-0.14587116241455078,
-0.36879485845565796,
0.400918573141098,
-0.33171364665031433,
0.22528575360774994,
0.21363921463489532,
-0.20779643952846527,
0.5493127107620239,
0.37675216794013977,
-0.8420077562332153,
-0.5963070392608643,
-0.6603380441665649,
-0.18366631865501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/konoototomare | BangumiBase | 2023-11-27T08:10:57Z | 0 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T08:10:57Z | 2023-11-27T06:04:00.000Z | 2023-11-27T06:04:00 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Kono Oto Tomare!
This is the image base of the bangumi *Kono Oto Tomare!*. We detected 34 characters and 3,706 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is a preview of the characters:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 489 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 61 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 270 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 572 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 146 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 134 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 154 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 45 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 56 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 42 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 26 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 166 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 75 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 43 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 12 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 18 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 50 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 78 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 188 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 24 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 13 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 162 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 29 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 565 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 33 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 11 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 10 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 9 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 14 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 17 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 9 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 40 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 9 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 136 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| [
-0.7036236524581909,
-0.1403399258852005,
0.1637083888053894,
0.20052699744701385,
-0.30944570899009705,
-0.10486651957035065,
-0.04995530843734741,
-0.38477960228919983,
0.6390154957771301,
0.5454707741737366,
-0.921292245388031,
-0.8702892661094666,
-0.6782029867172241,
0.532882213592529... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zainab984/BP | Zainab984 | 2023-11-28T12:06:05Z | 0 | 0 | null | [
"region:us"
] | 2023-11-28T12:06:05Z | 2023-11-27T06:05:35.000Z | 2023-11-27T06:05:35 | ---
dataset_info:
features:
- name: Target
dtype: int64
- name: PC
dtype: string
- name: GSHARE
dtype: string
- name: GA table
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 162560000
num_examples: 320000
- name: test
num_bytes: 40640000
num_examples: 80000
download_size: 11804069
dataset_size: 203200000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zainab984/BP-balanced | Zainab984 | 2023-11-28T12:06:46Z | 0 | 0 | null | [
"region:us"
] | 2023-11-28T12:06:46Z | 2023-11-27T06:05:50.000Z | 2023-11-27T06:05:50 | ---
dataset_info:
features:
- name: Target
dtype: int64
- name: PC
dtype: string
- name: GSHARE
dtype: string
- name: GA table
dtype: string
splits:
- name: train
num_bytes: 41082000
num_examples: 82164
- name: test
num_bytes: 10271000
num_examples: 20542
download_size: 2354826
dataset_size: 51353000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AILab-CVC/SEED-Bench-2 | AILab-CVC | 2023-11-28T04:01:56Z | 0 | 3 | null | [
"task_categories:visual-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-28T04:01:56Z | 2023-11-27T06:38:48.000Z | 2023-11-27T06:38:48 | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: SEED-Bench-2
size_categories:
- 10K<n<100K
---
# SEED-Bench Card
## Benchmark details
**Benchmark type:**
SEED-Bench-2 is a comprehensive large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs), featuring 24K multiple-choice questions with precise human annotations.
It spans 27 evaluation dimensions, assessing both text and image generation.
**Benchmark date:**
SEED-Bench was collected in November 2023.
**Paper or resources for more information:**
https://github.com/AILab-CVC/SEED-Bench
**License:**
Attribution-NonCommercial 4.0 International. Use of this benchmark should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use.
Data Sources:
- Dimensions 1-9, 23 (In-Context Captioning): Conceptual Captions Dataset (https://ai.google.com/research/ConceptualCaptions/) under its license (https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE). Copyright belongs to the original dataset owner.
- Dimension 9 (Text Recognition): ICDAR2003 (http://www.imglab.org/db/index.html), ICDAR2013(https://rrc.cvc.uab.es/?ch=2), IIIT5k(https://cvit.iiit.ac.in/research/projects/cvit-projects/the-iiit-5k-word-dataset), and SVT(http://vision.ucsd.edu/~kai/svt/). Copyright belongs to the original dataset owner.
- Dimension 10 (Celebrity Recognition): MME (https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) and MMBench (https://github.com/open-compass/MMBench) under MMBench license (https://github.com/open-compass/MMBench/blob/main/LICENSE). Copyright belongs to the original dataset owners.
- Dimension 11 (Landmark Recognition): Google Landmark Dataset v2 (https://github.com/cvdfoundation/google-landmark) under CC-BY licenses without ND restrictions.
- Dimension 12 (Chart Understanding): PlotQA (https://github.com/NiteshMethani/PlotQA) under its license (https://github.com/NiteshMethani/PlotQA/blob/master/LICENSE).
- Dimension 13 (Visual Referring Expression): VCR (http://visualcommonsense.com) under its license (http://visualcommonsense.com/license/).
- Dimension 14 (Science Knowledge): ScienceQA (https://github.com/lupantech/ScienceQA) under its license (https://github.com/lupantech/ScienceQA/blob/main/LICENSE-DATA).
- Dimension 15 (Emotion Recognition): FER2013 (https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/data) under its license (https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/rules#7-competition-data).
- Dimension 16 (Visual Mathematics): MME (https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) and data from the internet under CC-BY licenses.
- Dimension 17 (Difference Spotting): MIMICIT (https://github.com/Luodian/Otter/blob/main/mimic-it/README.md) under its license (https://github.com/Luodian/Otter/tree/main/mimic-it#eggs).
- Dimension 18 (Meme Comprehension): Data from the internet under CC-BY licenses.
- Dimension 19 (Global Video Understanding): Charades (https://prior.allenai.org/projects/charades) under its license (https://prior.allenai.org/projects/data/charades/license.txt). SEED-Bench-2 provides 8 frames per video.
- Dimensions 20-22 (Action Recognition, Action Prediction, Procedure Understanding): Something-Something v2 (https://developer.qualcomm.com/software/ai-datasets/something-something), Epic-Kitchen 100 (https://epic-kitchens.github.io/2023), and Breakfast (https://serre-lab.clps.brown.edu/resource/breakfast-actions-dataset/). SEED-Bench-2 provides 8 frames per video.
- Dimension 24 (Interleaved Image-Text Analysis): Data from the internet under CC-BY licenses.
- Dimension 25 (Text-to-Image Generation): CC-500 (https://github.com/weixi-feng/Structured-Diffusion-Guidance) and ABC-6k (https://github.com/weixi-feng/Structured-Diffusion-Guidance) under their license (https://github.com/weixi-feng/Structured-Diffusion-Guidance/blob/master/LICENSE), with images generated by Stable-Diffusion-XL (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) under its license (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md).
- Dimension 26 (Next Image Prediction): Epic-Kitchen 100 (https://epic-kitchens.github.io/2023) under its license (https://creativecommons.org/licenses/by-nc/4.0/).
- Dimension 27 (Text-Image Creation): Data from the internet under CC-BY licenses.
Please contact us if you believe any data infringes upon your rights, and we will remove it.
**Where to send questions or comments about the benchmark:**
https://github.com/AILab-CVC/SEED-Bench/issues
## Intended use
**Primary intended uses:**
SEED-Bench-2 is primarily designed to evaluate Multimodal Large Language Models in text and image generation tasks.
**Primary intended users:**
Researchers and enthusiasts in computer vision, natural language processing, machine learning, and artificial intelligence are the main target users of the benchmark. | [
-0.4503992795944214,
-0.5458508133888245,
0.3523251712322235,
0.49968963861465454,
-0.054362501949071884,
-0.2139848917722702,
-0.17476724088191986,
-0.5262218117713928,
-0.05039682984352112,
0.04669643193483353,
-0.45486533641815186,
-0.6449980139732361,
-0.6267052292823792,
0.03947022557... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Otherwa/GenAi-Public-Response | Otherwa | 2023-11-27T06:49:51Z | 0 | 0 | null | [
"size_categories:n<1K",
"language:en",
"license:openrail",
"code",
"legal",
"finance",
"biology",
"chemistry",
"music",
"art",
"medical",
"climate",
"region:us"
] | 2023-11-27T06:49:51Z | 2023-11-27T06:45:14.000Z | 2023-11-27T06:45:14 | ---
license: openrail
language:
- en
tags:
- code
- legal
- finance
- biology
- chemistry
- music
- art
- medical
- climate
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ryanhe/VIP | ryanhe | 2023-11-27T21:09:41Z | 0 | 1 | null | [
"region:us"
] | 2023-11-27T21:09:41Z | 2023-11-27T06:57:24.000Z | 2023-11-27T06:57:24 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716248512268,
0.1290130317211151,
0.23470574617385864,
-0.39626216888427734,
-0.1176246926188469,
-0.03545304760336876,
-0.6389272212982178,
0.5699821710586548,
0.7838326096534729,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078095018863678,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
listen2you002/ChartLlama-Dataset | listen2you002 | 2023-11-27T07:02:58Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T07:02:58Z | 2023-11-27T07:02:58.000Z | 2023-11-27T07:02:58 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ErhaChen/2d_game_scence | ErhaChen | 2023-11-27T07:14:03Z | 0 | 0 | null | [
"task_categories:text-to-image",
"license:apache-2.0",
"2d game scence",
"style",
"lora",
"region:us"
] | 2023-11-27T07:14:03Z | 2023-11-27T07:04:14.000Z | 2023-11-27T07:04:14 | ---
license: apache-2.0
task_categories:
- text-to-image
tags:
- 2d game scence
- style
- lora
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saillab/taco-datasets | saillab | 2023-11-27T07:32:04Z | 0 | 2 | null | [
"size_categories:100K<n<1M",
"language:en",
"language:ne",
"language:sn",
"language:mai",
"language:fa",
"language:hi",
"language:af",
"language:sq",
"language:am",
"language:ar",
"language:hy",
"language:as",
"language:ay",
"language:az",
"language:bm",
"language:eu",
"language:be... | 2023-11-27T07:32:04Z | 2023-11-27T07:15:33.000Z | 2023-11-27T07:15:33 | ---
language:
- en
- ne
- sn
- mai
- fa
- hi
- af
- sq
- am
- ar
- hy
- as
- ay
- az
- bm
- eu
- be
- bn
- bh
- bs
- bg
- ca
- ceb
- ny
- zh
- co
- hr
- cs
- da
- dv
- dog
- nl
- eo
- et
- ee
- tl
- fi
- fr
- fy
- gl
- ka
- de
- el
- gn
- gu
- ht
- ha
- haw
- he
- hmn
- hu
- is
- ig
- ilo
- id
- ga
- it
- ja
- jv
- kn
- kk
- km
- rw
- kok
- ko
- kri
- ku
- ky
- lo
- la
- lv
- ln
- lt
- lg
- lb
- mk
- ml
- mt
- mi
- mr
- mni
- ms
- mg
- my
- 'no'
- or
- om
- ps
- pl
- pt
- pa
- ro
- ru
- sm
- gd
- sr
- st
- tn
- sd
- si
- sk
- sl
- so
- es
- su
- sw
- sv
- tg
- ta
- tt
- te
- th
- ti
- to
- tr
- tk
- tw
- uk
- ur
- ug
- uz
- vi
- cy
- xh
- yi
- yo
- zu
pretty_name: t
size_categories:
- 100K<n<1M
---
This repo consists of the datasets used for the TaCo paper. There are four datasets:
* Multilingual Alpaca-52K GPT-4 dataset
* Multilingual Dolly-15K GPT-4 dataset
* TaCo dataset
* Multilingual Vicuna Benchmark dataset
We translated the first three datasets using Google Cloud Translation.
The TaCo dataset was created by applying the TaCo approach described in our paper to the combined Alpaca-52K and Dolly-15K datasets.
If you would like to create the TaCo dataset for a specific language, you can follow the method described in the paper and use the translated datasets above.
```
{
"instruction": "instruction in xx",
"input": "input in xx",
"output": "Instruction in English: instruction in en,
           Response in English: response in en,
           Response in xx: response in xx"
}
```
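For illustration, a record in this format can be assembled from the translated fields like so. This is a minimal sketch — the function and variable names are ours, not part of the released tooling; `xx` stands for the target language:

```python
def make_taco_record(instruction_xx, input_xx, instruction_en, response_en, response_xx):
    """Assemble one TaCo-style training example from translated fields.

    The output field chains the English instruction, the English response,
    and finally the target-language response, mirroring the
    translation-assisted chain-of-thought template above.
    """
    return {
        "instruction": instruction_xx,
        "input": input_xx,
        "output": (
            f"Instruction in English: {instruction_en}\n"
            f"Response in English: {response_en}\n"
            f"Response in xx: {response_xx}"
        ),
    }
```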
**Citation**
```
@article{upadhayay2023taco,
title={TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes},
author={Upadhayay, Bibek and Behzadan, Vahid},
journal={arXiv preprint arXiv:2311.10797},
year={2023}
}
```
**Copyright and Intended Use**
This dataset is released under CC BY-NC and is intended for academic and research purposes only. Please review the licenses and terms and conditions of Alpaca-52K, Dolly-15K, and Google Cloud Translation before using this dataset for any purpose other than research.
-0.35334664583206177,
-0.5688294768333435,
0.18241077661514282,
0.15191252529621124,
-0.3387555181980133,
0.1674756407737732,
-0.2873972952365875,
-0.5025938153266907,
0.5893757939338684,
0.7923609614372253,
-0.46460652351379395,
-0.7608001828193665,
-0.4069410562515259,
0.4583113491535187... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chienpham/vietnamese-sts | chienpham | 2023-11-27T13:05:01Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:05:01Z | 2023-11-27T07:18:30.000Z | 2023-11-27T07:18:30 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sabbahat12/brain_tumor | sabbahat12 | 2023-11-27T08:36:05Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:36:05Z | 2023-11-27T07:19:49.000Z | 2023-11-27T07:19:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yunzliang/SuiDetectData | yunzliang | 2023-11-27T09:52:17Z | 0 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"linguistics",
"psychology",
"region:us"
] | 2023-11-27T09:52:17Z | 2023-11-27T07:27:17.000Z | 2023-11-27T07:27:17 | ---
task_categories:
- text-classification
size_categories:
- n<1K
language:
- en
tags:
- linguistics
- psychology
---
# Dataset Card for SuiDetectData
## Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset consists of annotations of 500 users' Reddit posts by four experienced psychiatrists following the Columbia Suicide Severity Rating Scale (C-SSRS) guidelines. Each expert worked on the same set of posts, after which the consistency of their annotations was evaluated using Krippendorff's α, a measure of inter-annotator agreement. The experts' annotation scheme was based on the C-SSRS questionnaire, which contains a series of survey questions that help gauge the urgency and severity of suicidal risk. The four experts assigned each Reddit post to one of five distinct categories: supportive, suicide indicator, suicidal ideation, suicidal behavior, and suicide attempt. These categories contain 108, 171, 77, 45, and 99 posts respectively (N=500).
## Five Categories in the Reddit C-SSRS Suicide Corpus
**Here are the five variables of interests in the outcome category:**
1. **Supportive** (22%): This category represents users who engage in discussions without any language suggesting past or current at-risk suicidal behavior or feelings.
2. **Suicide Indicator** (20%): This category encompasses users who mention known C-SSRS risk factors, primarily to empathize with others expressing suicidal thoughts or actions.
3. **Suicidal Ideation** (34%): This category describes users having thoughts of suicide, which may be linked to specific risk factors such as job loss, ending significant relationships, chronic diseases, mental disorders, or substance abuse.
4. **Suicidal Behavior** (15%): This category highlights users who exhibit behaviors with a heightened risk of self-harm, either currently or in the past, including active suicide planning or previous institutionalization for mental health reasons.
5. **Suicide Attempt** (9%): This category is characterized by users who have undertaken deliberate actions, whether successful or not, with the intention of ending their own lives.
## Dataset Code Sources
<!-- Provide the basic links for the dataset. -->
- **Original Paper:** Gaur, M., Aribandi, V., Alambo, A., Kursuncu, U., Thirunarayan, K., Beich, J., Pathak, J., & Sheth, A. (2021). Characterization of time-variant and time-invariant assessment of suicidality on Reddit using C-SSRS. PloS one, 16(5), e0250448. https://doi.org/10.1371/journal.pone.0250448
- **Repository for Code demonstrations:** https://github.com/yunzhen1/Suicidality
## Risks and Privacy Information
The data are de-identified but may include Reddit usernames; they reveal no other sensitive or personal information (e.g., addresses, sexual orientations, religious beliefs, political opinions, financial or health data).
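Since annotator agreement above is reported with Krippendorff's α, a self-contained computation for nominal labels (such as the five categories here) can be sketched as follows. This is an illustrative implementation, not the authors' evaluation code; in practice the `krippendorff` PyPI package offers the same metric.

```python
from collections import Counter
from itertools import permutations


def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    `units` is a list of lists: one inner list per annotated item
    (e.g. a Reddit post), holding the labels its annotators assigned.
    """
    # Coincidence matrix: each ordered pair of labels within a unit
    # contributes 1 / (m - 1), where m is the number of labels in it.
    coincidence = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # items with a single rating are unpairable
        for c, k in permutations(ratings, 2):
            coincidence[(c, k)] += 1.0 / (m - 1)

    # Marginal totals per label and overall number of pairable values.
    marginals = Counter()
    for (c, _k), v in coincidence.items():
        marginals[c] += v
    n = sum(marginals.values())

    observed = sum(v for (c, k), v in coincidence.items() if c != k)
    expected = sum(
        marginals[c] * marginals[k]
        for c in marginals for k in marginals if c != k
    ) / (n - 1)
    return 1.0 - observed / expected
```

Perfect agreement (e.g. two annotators giving identical labels on every post) yields α = 1, while systematic disagreement drives α toward 0.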
-0.3185293972492218,
-0.40260303020477295,
0.6466564536094666,
0.4914417266845703,
-0.10486829280853271,
0.06341490894556046,
-0.06475827097892761,
-0.17646946012973785,
0.4433957040309906,
0.15500031411647797,
-0.65240079164505,
-0.9183685183525085,
-0.629815399646759,
0.22115693986415863... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keiliniheaterusa/keilini-portable-heater | keiliniheaterusa | 2023-11-27T07:29:38Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T07:29:38Z | 2023-11-27T07:29:21.000Z | 2023-11-27T07:29:21 | <p>The Keilini portable heater was created by experienced engineers who needed to develop a high-end portable heater that was efficient and energy-saving compared to other heaters. The product manufactured by Kielini Company is economical, user-friendly, portable is suitable for offices, homes, and indoor use.</p>
<p>During the harsh winter, many people use space heaters to help warm their homes or offices. That may be because they can't control their thermostat, or because their house is drafty and certain areas don't warm up like others. Some people also turn to space heaters to save money by not running their furnaces.</p>
<h2><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>Click Here -- Official Website -- Order Now}</strong></a></span></h2>
<h2><span style="color: #ff6600;"><strong>➡️<span style="color: maroon;">● For Order Official Website - <a style="color: maroon;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">https://www.globalfitnessmart.com/get-keilini-portable-heater</a></span></strong></span><br /><strong>➡️<span style="color: red;">● Item Name: — <a style="color: red;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">Keilini Portable Heater</a></span></strong><br /><span style="color: red;"><strong>➡️<span style="color: green;">● Ingredients: — All Natural</span></strong></span><br /><strong>➡️<span style="color: purple;">● Incidental Effects: — NA</span></strong><br /><strong>➡️<span style="color: blue;">● Accessibility: — <a style="color: blue;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">Online</a></span></strong></h2>
<h2><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span><br /><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span><br /><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Company and Product Overview</strong></span></h2>
<p>The Keilini Company has operated for over ten years and has numerous reviews from satisfied customers. Keilini is known for manufacturing innovative and unique products, and it has increased sales over the years by producing quality goods. Each of the Keilini Company's products has a particular style, making them superior to those of competitors. Their most popular products include portable heaters, bug-repellent lamps, HD dash cams, and light bulb cameras.</p>
<p>The Keilini portable heater is highly efficient and cost-effective. It uses ceramic PTC heating technology that warms up a room and the person in it within one minute, irrespective of how cold the space has been.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzmDgU9wo3MOwkeVYbEnZyXak6oivMk8AmG7q4mxeWUlmctRCneA9NtWUKYbpWLeG4WYj2qgNoEKuP1OUebDxJGcmEbvuwa1NY6WDPEwFwTDIAVf3frsZY00OhU07l8yO2JeLSGdW_YG64qHWNEEwOazFXrtbLD29ouQUUxRlQYDbOMcnAhzF7qld8p7QE/w640-h322/Keilini%20Portable%20Heater004.jpg" alt="" width="640" height="322" border="0" data-original-height="357" data-original-width="708" /></a></div>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>What Is Keilini Heater - Keilini Heater UK Reviews</strong></span></h2>
<p>Keilini Heater is a highly innovative portable heater designed by world-leading engineers to help UK users maintain a normal temperature in the cold season. Keilini Portable Heater is a novel convection ceramic heater in the UK. Convection heaters are known for their efficiency, their speed in heating any room, and their ability to sustain the temperature for a long time. Keilini Heater UK offers incredibly high efficiency compared to other heaters. Keilini Heater is the brainchild of a group of experienced engineers who recognized that the heating industry needed some real innovation. The aim of Keilini Heater in the UK is to offer cost-friendly heating solutions to every household.</p>
<p>All Keilini Heater UK Reviews agreed that it is an inexpensive and easy-to-use portable heater that would suit every home, office, bathroom, and any space at all, thanks to the fact that the Keilini Portable Heater requires no installation or maintenance costs and is extremely energy-efficient. Keilini heaters come with unique advantages: they have incredible efficiency and don't waste energy at all. The heater warms every area of your room in just 60 seconds. In addition, the Keilini Heater is really cheap, compact, and very lightweight, yet very powerful and effective at doing its job.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>(EXCLUSIVE OFFER)Click Here : "Keilini Portable Heater USA"Official Website!</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>How To Use - Keilini Heater Reviews UK</strong></span></h2>
<ul>
<li>Take it to any room where you want to use it.</li>
<li>Plug the Keilini Portable Heater into the outlet.</li>
<li>Set the desired mode.</li>
<li>Then just wait for this powerful device to slowly heat up the entire room.</li>
<li>You can take the heater with you anywhere and fight back the biting cold anytime, anywhere.
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEim78oHj7CvD1YGvwNZ4QWal4LH3YqJXheEd5Taj_nMbbFjGYH86-mlgxzBaf-EavU1Vp-M-IQyhVRyqU5qijouXtL510dCkIyaqdcVOg2OtAvjYap5znCf3aqL8HDigcyBprHAVa_cuQ_gAlYfUPPkzjQJTrZ4k1NvErfWGhU-zL-MgjYlylWK3QZcfAa4/w640-h470/Keilini%20Portable%20Heater007.jpg" alt="" width="640" height="470" border="0" data-original-height="438" data-original-width="597" /></a></div>
</li>
</ul>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Unique Features - Keilini Heater UK Reviews</strong> </span></h2>
<p><strong>Customizable Individual Settings</strong></p>
<p>The Keilini Heater is equipped with three-gear adjustment. These flexible three gears can be toggled as you need. It allows you to choose the suitable heating gear as the indoor temperature changes. Using PTC ceramic heating technology, it never gets too warm or too cold, you can always select the heat setting that is most comfortable for you. Gear one is natural wind, the second gear is warm wind, and the third gear is strong warm current. (Product Power 750W/1500W)</p>
<p><strong>Fast & Easy Set Up</strong></p>
<p>One unique advantage of Keilini Heater UK over other products is its ease of use. Keilini Heater is super easy to set up and doesn't require extra maintenance from you. All you need to do is plug it into any wall socket in the room or space you want to keep warm, and the device will do the rest. Keilini Portable Heater only takes seconds to produce heat and can run at full blast for as long as you want. You can comfortably take the Keilini heater anywhere with you and stay warm. No more relying on high-cost, inefficient central heating systems.</p>
<p><strong>Safe & Quiet Operation</strong></p>
<p>Keilini Heater is built to ensure safety and peace of mind. Compared to other heaters, the Keilini Portable Heater UK has no exposed elements that could accidentally burn you. An automatic switch-off cuts power if the device accidentally falls over. Also, the Keilini Heater comes with a portable handle design, which makes it easy to move the heater without hand burns. To ensure maximum concentration and relaxation, the Keilini heater operates almost silently, at 37-45 dB (quieter than a library), with no loud, uncomfortable valve pops to disturb your concentration or sleep. With all these incredible features, you can use the Keilini Portable Heater in your home or office with absolute peace of mind!</p>
<p><strong>Portable/ Compact Design</strong></p>
<p>The compact and portable feature of Keilini Heater makes it ideal for different locations. This portable heater is very powerful, although, not big and chunky but very effective in dealing with the cold winter. Due to the lightweight feature, you can easily take it with you wherever you go. Also, the outer casing hardly ever gets warm, making it easy to carry the Keilini from room to room without burning your fingers.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>SPECIAL PROMO[Limited Discount]: "Keilini Portable Heater USA"Official Website!</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Does Keilini Portable Heater UK Really Work?</strong></span></h2>
<p>Keilini Heater is a revolutionary portable heater with lots of incredible energy efficient features. This device works effectively to deliver the needed heating to any room. Many Keilini Heater Reviews UK online confirm it works very simply. The Keilini Portable Heater does not require any installation or maintenance, making it extremely energy-efficient and cost effective. Keilini heater is an easy-to-use portable heater that would suit any environment. Keilini portable heater is efficient and doesn't waste energy. Keilini Heater is manufactured to the highest standards of quality, and will deliver beyond your expectations. It comes with 3 gears that can be toggled at will, one gear is natural wind, the second gear is warm wind, and the third gear is strong warm current. (Keilini Heater Power is 750W/1500W)</p>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Pros - Keilini Portable Heater Reviews UK</strong></span></h2>
<ul>
<li>No complicated setup or maintenance. Use it straight out of the box!</li>
<li>Highly Efficient Ceramic PTC Heating Technology</li>
<li>Safe&Quiet to Use 37-45dB (Quieter than in a library)</li>
<li>Three-Gear Adjustment With Power Of 750w/1500w.</li>
<li>Ultra Compact & Sleek Design. Perfect for any home or office Decor.</li>
<li>You can use Keilini Portable Heater in your home with peace of mind.</li>
<li>If it is knocked over, it will automatically shut down.</li>
<li>Keilini heater makes low noise during operation.</li>
<li>Portable handle design, which makes it easy to move the heater without hand burns.</li>
<li>50% Discount When You Make Purchase Today!</li>
<li>30-Day Money Back Guarantee.</li>
<li>Sustain the Perfect Temperature</li>
<li>Flexible three gears can be toggled as you need.</li>
<li>Energy-saving. Keilini heater can heat any room and save a lot.</li>
<li>You can choose the suitable heating gear as the indoor temperature changes.</li>
<li>Keilini heater only takes 60 seconds to produce heat and can run at full blast for as long as you want.</li>
<li>Guaranteed high quality. Made to the highest standards of quality.</li>
</ul>
<h2><strong>Cons</strong></h2>
<ul>
<li>Available on the Keilini Heater's official website.</li>
<li>Supply and the 50% Discount Offer may end anytime soon.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQRIALkNVAj_F0bVYYuT0Y07gqGuALem8UgGd7jnZ6YN02J6ebyBsPcu9L0wR960MkjrMYaceyrOhBjX9BWXox83mwITWhwqbSGolytqxfvvkjN-RZuqPg_xPAj2ai7qV7mLizKrI9ek43kNx8dDiUO5BMQg58ICgZHHfFRj5x8kguUGmIUHLuSsOfB5sp/w640-h222/Keilini%20Portable%20Heater002.jpg" alt="" width="640" height="222" border="0" data-original-height="332" data-original-width="955" /></a></div>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Where to Buy Keilini Heater – Keilini Portable Heater Reviews</strong></span></h2>
<p>Whether you want to confirm the current price of Keilini heaters, gain access to the ongoing 50% promo, or buy Keilini ceramic portable heater, you must visit the official website.</p>
<p>The Keilini company is currently running out of stock due to high demand. To ensure you don’t miss out, confirm Keilini heater availability on their website and seize any chance you’ve got to buy one today.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>SPECIAL PROMO: Get Keilini Portable Heater at the Lowest Discounted Price Online</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Keilini Portable Heater Price</strong></span></h2>
<p>Below is the current cost price of keilini portable heater. You can confirm recent price changes from their official website.</p>
<ul>
<li><strong>Buy 1 unit of Keilini Portable Heater price at £59.99</strong></li>
<li><strong>Buy 2 units of Keilini Portable Heater price at £99.98</strong></li>
<li><strong>Buy 3 units of Keilini Portable Heater price at £139.99</strong></li>
<li><strong>Buy 4 units of Keilini Portable Heater price at £159.96</strong></li>
</ul>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Final Remarks - Keilini Portable Heater Reviews UK</strong></span></h2>
<p>Keilini Portable Heater is designed to be a cost effective and energy efficient portable heater. This device is perfect for any home and office. And it can easily heat up the temperature of any room in just seconds. Keilini is loaded with incredible features that work effectively to ensure you stay toasty warm throughout the coldest days of winter.</p>
<p>With Keilini heater the extremely powerful, efficient and portable heater with ceramic PTC heating technology, you're in for an amazing/personalized heating experience this season. With all these amazing features and benefits, Keilini Heaters is selling at a mouth watering 50% discount! Kindly visit Keilini Heater official website to place your order, so you don't miss out on the ongoing Offer!</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii1ERZaYC5RgA6wRcOXzUqX-zZVU0FVZD2l8Hw2JCRvAC2-bfoxJjmzY_ygCFsyUrOu_F_eQrRWVfenMteYaiR4AId0etHO_PWgkjguboD0JWVKneTBcC77tOmS1AAQQWizTsrHiCz2H2hh9z-VDpq1BCpTBTM79wvzQmMl23AQc7IeZXznuZti4JZRZ9U/w640-h470/Keilini%20Portable%20Heater007.jpg" alt="" width="640" height="470" border="0" data-original-height="438" data-original-width="597" /></a></div>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>Read This: "More Information From Knowledgeable Expertise of Keilini Portable Heater"</strong></a></span></h2>
<h2><strong><span style="color: #800000;">@ READ MORE</span></strong></h2>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/calendar/2023/11/29/keilini-portable-heater">https://keilini-portable.clubeo.com/calendar/2023/11/29/keilini-portable-heater</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w.html">https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w.html</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-ca-uk-official-website.html">https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-ca-uk-official-website.html</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/">https://keilini-portable.clubeo.com/</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://groups.google.com/g/keilini-portable-heater/c/Y1Vr6NBSMto">https://groups.google.com/g/keilini-portable-heater/c/Y1Vr6NBSMto</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://sites.google.com/view/keilini-portable-heater-review/home">https://sites.google.com/view/keilini-portable-heater-review/home</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://lookerstudio.google.com/u/0/reporting/22409471-350c-4287-a76a-a574ee233ac0/page/8QQjD">https://lookerstudio.google.com/u/0/reporting/22409471-350c-4287-a76a-a574ee233ac0/page/8QQjD</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://colab.research.google.com/drive/1umiLVq-m09bH_JqUh8mhu4PXOZoODdTz">https://colab.research.google.com/drive/1umiLVq-m09bH_JqUh8mhu4PXOZoODdTz</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://gamma.app/public/Keilini-Portable-Heater---USACA-UKOfficial-Website-2vvh0oxf7q1p2sm?mode=doc">https://gamma.app/public/Keilini-Portable-Heater---USACA-UKOfficial-Website-2vvh0oxf7q1p2sm?mode=doc</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable-heater.jimdosite.com/">https://keilini-portable-heater.jimdosite.com/</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://www.scoop.it/topic/keilini-portable-heater/p/4149130319/2023/11/27/keilini-portable-heater-usa-ca-uk-official-website">https://www.scoop.it/topic/keilini-portable-heater/p/4149130319/2023/11/27/keilini-portable-heater-usa-ca-uk-official-website</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://www.scoop.it/topic/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w">https://www.scoop.it/topic/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://groups.google.com/g/sci.lang.japan/c/tVwhnwPYMy8">https://groups.google.com/g/sci.lang.japan/c/tVwhnwPYMy8</a></span></strong></p> | [
-0.6734063625335693,
-0.26283323764801025,
0.5107226967811584,
0.1415749192237854,
-0.4170694351196289,
-0.3124373257160187,
-0.043167728930711746,
-0.3672611713409424,
0.5886781215667725,
-0.2227831482887268,
-0.33640578389167786,
-0.15511848032474518,
-0.22645346820354462,
0.159603476524... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Imran1/dogtrainset | Imran1 | 2023-11-27T07:37:04Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T07:37:04Z | 2023-11-27T07:36:58.000Z | 2023-11-27T07:36:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Afghan_hound
'1': French_bulldog
splits:
- name: train
num_bytes: 26798257.0
num_examples: 398
download_size: 26755684
dataset_size: 26798257.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Imran1/dogtestset | Imran1 | 2023-11-27T07:37:26Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T07:37:26Z | 2023-11-27T07:37:24.000Z | 2023-11-27T07:37:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Afghan_hound
'1': French_bulldog
splits:
- name: train
num_bytes: 290465.0
num_examples: 6
download_size: 292088
dataset_size: 290465.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
akash140500/Predictive_M_1 | akash140500 | 2023-11-27T07:58:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T07:58:49Z | 2023-11-27T07:51:29.000Z | 2023-11-27T07:51:29 | ---
license: apache-2.0
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imone/ARB | imone | 2023-11-27T14:25:59Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | 2023-11-27T14:25:59Z | 2023-11-27T08:01:11.000Z | 2023-11-27T08:01:11 | ---
license: mit
---
ARB data from [DuckAI](https://arb-dataset.netlify.app/). Categorized into `law, math, physics, reading, science` | [
-0.367598295211792,
-0.42834195494651794,
0.001902009709738195,
0.4027610123157501,
-0.11154154688119888,
0.20513980090618134,
0.555305540561676,
-0.2519780695438385,
0.46916621923446655,
0.7663065791130066,
-0.5178731083869934,
-0.6154830455780029,
-0.3803060054779053,
-0.3172522187232971... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ramyashree/Banking_dataset1 | Ramyashree | 2023-11-27T08:04:04Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:04:04Z | 2023-11-27T08:04:04.000Z | 2023-11-27T08:04:04 | Entry not found | [
-0.3227648138999939,
-0.22568409144878387,
0.8622256517410278,
0.43461480736732483,
-0.5282989144325256,
0.7012966275215149,
0.7915716171264648,
0.07618606090545654,
0.7746022939682007,
0.25632181763648987,
-0.7852815985679626,
-0.2257382869720459,
-0.9104483723640442,
0.571566641330719,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blvrxdnthnhv/test_img2img | blvrxdnthnhv | 2023-11-27T08:20:44Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:20:44Z | 2023-11-27T08:11:28.000Z | 2023-11-27T08:11:28 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
collabora/ai4bharat-shrutilipi | collabora | 2023-11-27T12:15:25Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:15:25Z | 2023-11-27T08:16:39.000Z | 2023-11-27T08:16:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 204030048047.146
num_examples: 408114
download_size: 200034353797
dataset_size: 204030048047.146
---
# Dataset Card for "ai4bharat-shrutilipi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4223867952823639,
-0.18673349916934967,
0.03218065947294235,
0.09973998367786407,
-0.32549527287483215,
0.12437497079372406,
0.23280012607574463,
-0.32030799984931946,
0.8274096846580505,
0.2529613673686981,
-0.6370425224304199,
-0.6479700803756714,
-0.7072018980979919,
-0.1534174978733... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
opendal/huggingface-testdata | opendal | 2023-11-27T09:12:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T09:12:40Z | 2023-11-27T08:17:46.000Z | 2023-11-27T08:17:46 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sabbahat12/dataset | sabbahat12 | 2023-11-27T08:46:14Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:46:14Z | 2023-11-27T08:39:46.000Z | 2023-11-27T08:39:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Msun/waymo | Msun | 2023-11-27T09:09:45Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:09:45Z | 2023-11-27T08:41:28.000Z | 2023-11-27T08:41:28 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
armpower/llama2_rlhf_v_2 | armpower | 2023-11-27T08:44:45Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:44:45Z | 2023-11-27T08:44:45.000Z | 2023-11-27T08:44:45 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MuuZy/TJNU-NLP | MuuZy | 2023-11-27T08:52:16Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:52:16Z | 2023-11-27T08:52:16.000Z | 2023-11-27T08:52:16 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DataStudio/OCRFontsEnhance | DataStudio | 2023-11-27T08:57:32Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T08:57:32Z | 2023-11-27T08:55:39.000Z | 2023-11-27T08:55:39 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: Qwinley
num_bytes: 106757090.0
num_examples: 5240
- name: Ariston
num_bytes: 123619963.125
num_examples: 5239
- name: Zap
num_bytes: 119724880.125
num_examples: 5239
- name: Campai
num_bytes: 123461953.0
num_examples: 5240
- name: Mandalay
num_bytes: 129332184.125
num_examples: 5239
- name: Brush
num_bytes: 121988610.0
num_examples: 5240
- name: AlexBrush
num_bytes: 109786433.0
num_examples: 5240
- name: Free
num_bytes: 104433511.0
num_examples: 5240
- name: Allegie
num_bytes: 100527446.0
num_examples: 5240
- name: Kun
num_bytes: 97646207.125
num_examples: 5239
- name: Script
num_bytes: 92350425.0
num_examples: 5240
- name: JackieO
num_bytes: 115897467.0
num_examples: 5240
- name: Rush
num_bytes: 105160861.0
num_examples: 5240
- name: Ambiance
num_bytes: 100961160.0
num_examples: 5240
- name: Saliere
num_bytes: 124157300.0
num_examples: 5240
- name: Chancery
num_bytes: 98686795.125
num_examples: 5239
- name: Tridico
num_bytes: 144127029.0
num_examples: 5240
- name: Commerce
num_bytes: 124982660.0
num_examples: 5240
- name: Times
num_bytes: 127224541.0
num_examples: 5240
- name: Freewrite
num_bytes: 101639441.125
num_examples: 5239
- name: Helve
num_bytes: 123461009.0
num_examples: 5240
- name: Sandy
num_bytes: 130526102.0
num_examples: 5240
- name: ShishoniBrush
num_bytes: 91591669.0
num_examples: 5240
- name: Casca
num_bytes: 95300918.125
num_examples: 5239
download_size: 2711725230
dataset_size: 2713345654.875
configs:
- config_name: default
data_files:
- split: Qwinley
path: data/Qwinley-*
- split: Ariston
path: data/Ariston-*
- split: Zap
path: data/Zap-*
- split: Campai
path: data/Campai-*
- split: Mandalay
path: data/Mandalay-*
- split: Brush
path: data/Brush-*
- split: AlexBrush
path: data/AlexBrush-*
- split: Free
path: data/Free-*
- split: Allegie
path: data/Allegie-*
- split: Kun
path: data/Kun-*
- split: Script
path: data/Script-*
- split: JackieO
path: data/JackieO-*
- split: Rush
path: data/Rush-*
- split: Ambiance
path: data/Ambiance-*
- split: Saliere
path: data/Saliere-*
- split: Chancery
path: data/Chancery-*
- split: Tridico
path: data/Tridico-*
- split: Commerce
path: data/Commerce-*
- split: Times
path: data/Times-*
- split: Freewrite
path: data/Freewrite-*
- split: Helve
path: data/Helve-*
- split: Sandy
path: data/Sandy-*
- split: ShishoniBrush
path: data/ShishoniBrush-*
- split: Casca
path: data/Casca-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nbodagh/test2 | nbodagh | 2023-11-27T08:56:24Z | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-11-27T08:56:24Z | 2023-11-27T08:56:24.000Z | 2023-11-27T08:56:24 | ---
license: cc-by-4.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CHEN0312/gssgd | CHEN0312 | 2023-11-27T09:06:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T09:06:35Z | 2023-11-27T09:06:35.000Z | 2023-11-27T09:06:35 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rbin/rlmrec_data | Rbin | 2023-11-27T09:13:29Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:13:29Z | 2023-11-27T09:13:29.000Z | 2023-11-27T09:13:29 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CodeBlackwell/avant_assist | CodeBlackwell | 2023-11-27T09:35:41Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:35:41Z | 2023-11-27T09:22:35.000Z | 2023-11-27T09:22:35 | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: test
num_bytes: 4091597
num_examples: 3452
- name: train
num_bytes: 45222545
num_examples: 22845
download_size: 15363271
dataset_size: 49314142
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Odunope/LSE_test | Odunope | 2023-11-27T09:31:23Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:31:23Z | 2023-11-27T09:31:22.000Z | 2023-11-27T09:31:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Chars/symlink-models | Chars | 2023-11-27T10:09:52Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-27T10:09:52Z | 2023-11-27T09:38:33.000Z | 2023-11-27T09:38:33 | ---
license: mit
---
# Notes
When third-party code calls a model, it inevitably downloads that model from huggingface.
Under poor network conditions the download fails, and downloading manually means editing the third-party code file by file so that it reads the model from a local path.
This repository therefore stores the corresponding cache directories; simply extract them into `~/.cache/huggingface/hub/`.
-0.7513185739517212,
-0.9283232688903809,
0.22446438670158386,
0.994420051574707,
-0.5963516235351562,
-0.451415091753006,
0.392631471157074,
-0.5455238819122314,
1.0609537363052368,
0.5821233987808228,
-0.6859228610992432,
-0.4824157953262329,
-0.6670774221420288,
-0.004553498700261116,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Elfsong/seven_cups | Elfsong | 2023-11-27T09:59:39Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T09:59:39Z | 2023-11-27T09:58:50.000Z | 2023-11-27T09:58:50 | ---
configs:
- config_name: default
data_files:
- split: anxiety
path: data/anxiety-*
- split: bipolar
path: data/bipolar-*
- split: depression
path: data/depression-*
- split: personalitydisorders
path: data/personalitydisorders-*
- split: trauma
path: data/trauma-*
- split: eds
path: data/eds-*
- split: substanceaddiction
path: data/substanceaddiction-*
- split: relationships
path: data/relationships-*
dataset_info:
features:
- name: lead_post
struct:
- name: author
dtype: string
- name: content
dtype: string
- name: date
dtype: string
- name: thread_id
dtype: string
- name: title
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: comment_posts
list:
- name: author
dtype: string
- name: content
dtype: string
- name: parent_ids
sequence: string
- name: post_id
dtype: string
- name: thread_id
dtype: string
- name: url
dtype: string
splits:
- name: anxiety
num_bytes: 24332055
num_examples: 7948
- name: bipolar
num_bytes: 3496018
num_examples: 1033
- name: depression
num_bytes: 59927557
num_examples: 10243
- name: personalitydisorders
num_bytes: 9791687
num_examples: 1854
- name: trauma
num_bytes: 53211657
num_examples: 5763
- name: eds
num_bytes: 9837092
num_examples: 2382
- name: substanceaddiction
num_bytes: 1957813
num_examples: 687
- name: relationships
num_bytes: 56187112
num_examples: 12652
download_size: 94273903
dataset_size: 218740991
---
# Dataset Card for "seven_cups"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.633118212223053,
-0.1110132560133934,
0.42536482214927673,
0.2893005907535553,
-0.2897314131259918,
-0.13844147324562073,
0.48538562655448914,
-0.15527121722698212,
0.689557671546936,
0.7266517877578735,
-0.6718838810920715,
-0.7726978063583374,
-0.7644940614700317,
0.11486484855413437,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hendrik-a/dancer-data | Hendrik-a | 2023-11-27T10:04:16Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T10:04:16Z | 2023-11-27T10:02:33.000Z | 2023-11-27T10:02:33 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/CA-FR_Parallel_Corpus | projecte-aina | 2023-11-27T15:28:44Z | 0 | 0 | null | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:fr",
"language:multilingual",
"region:us"
] | 2023-11-27T15:28:44Z | 2023-11-27T10:07:55.000Z | 2023-11-27T10:07:55 | ---
language:
- ca
- fr
- multilingual
multilinguality:
- translation
pretty_name: CA-FR Parallel Corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for CA-FR Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-FR Parallel Corpus is a Catalan-French dataset of **18.634.844** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and French.
## Dataset Structure
Two separate txt files are provided, with the sentences sorted in the same order:
- ca-fr_corpus.ca: contains 18.634.844 Catalan sentences.
- ca-fr_corpus.fr: contains 18.634.844 French sentences.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:----------------|----------:|
| CCMatrix | 16.305.758|
| Multi CCAligned | 1.442.584|
| WikiMatrix | 437.665|
| GNOME | 1.686|
| KDE 4 | 111.750|
| QED | 52.797|
| TED 2020 | 44.101|
| Open Subtitles | 225.786|
| **Total** | **18.634.844** |
All corpora were collected from [Opus](https://opus.nlpl.eu/).
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **18.634.844** parallel sentences and before training the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
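The cosine-similarity filter described above can be sketched as follows. This is a minimal illustration, not the authors' actual script: it assumes the LaBSE sentence embeddings have already been computed, and the function name and array layout are hypothetical.

```python
import numpy as np

def filter_pairs(pairs, emb_src, emb_tgt, threshold=0.75):
    """Keep only sentence pairs whose embeddings have cosine similarity >= threshold.

    pairs   : list of (source_sentence, target_sentence) tuples
    emb_src : (n, d) array of source-side sentence embeddings (e.g. from LaBSE)
    emb_tgt : (n, d) array of target-side sentence embeddings
    """
    # Normalize each row; the row-wise dot product is then the cosine similarity.
    src = emb_src / np.linalg.norm(emb_src, axis=1, keepdims=True)
    tgt = emb_tgt / np.linalg.norm(emb_tgt, axis=1, keepdims=True)
    sims = (src * tgt).sum(axis=1)
    return [pair for pair, s in zip(pairs, sims) if s >= threshold]
```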
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation for low-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@bsc.es.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
-0.3468424677848816,
-0.6012418866157532,
0.2184678316116333,
0.6536235213279724,
-0.20520223677158356,
0.06844611465930939,
-0.5284334421157837,
-0.2281302809715271,
0.5547671914100647,
0.5656152963638306,
-0.4442788362503052,
-0.90422523021698,
-0.768024742603302,
0.40926963090896606,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/CA-DE_Parallel_Corpus | projecte-aina | 2023-11-27T15:54:07Z | 0 | 0 | null | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:de",
"language:multilingual",
"region:us"
] | 2023-11-27T15:54:07Z | 2023-11-27T10:09:33.000Z | 2023-11-27T10:09:33 | ---
language:
- ca
- de
- multilingual
multilinguality:
- translation
pretty_name: CA-DE Parallel Corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for CA-DE Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-DE Parallel Corpus is a Catalan-German dataset of **9.530.709** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and German.
## Dataset Structure
Two separate txt files are provided, with the sentences sorted in the same order:
- ca-de_all_2023_09_11.ca: contains 9.530.709 Catalan sentences.
- ca-de_all_2023_09_11.de: contains 9.530.709 German sentences.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:--------------|----------:|
| Multi CCAligned | 1.027.481 |
| WikiMatrix | 125.811 |
| GNOME | 1.241|
| KDE4 | 105.098 |
| QED | 49.181 |
| TED2020 v1 | 38.428 |
| OpenSubtitles | 171.376 |
| GlobalVoices| 3.578|
| Tatoeba | 655 |
| Books | 2.049 |
| Europarl | 1.734.643 |
| Tilde | 3.434.091 |
| **Total** | **6.258.272** |
All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/).
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
The remaining **3.272.437** sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **9.530.709** parallel sentences and before training the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation for low-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@bsc.es.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
| [
-0.3974429666996002,
-0.6942981481552124,
0.31511202454566956,
0.6070305705070496,
-0.18561860918998718,
0.06784941256046295,
-0.4996357858181,
-0.26590651273727417,
0.5961923599243164,
0.45804935693740845,
-0.44733595848083496,
-0.9846922159194946,
-0.703397274017334,
0.41282516717910767,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tr416/2k_test | tr416 | 2023-11-27T12:52:10Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:52:10Z | 2023-11-27T10:36:27.000Z | 2023-11-27T10:36:27 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 762433
num_examples: 623
download_size: 383726
dataset_size: 762433
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "2k_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49610140919685364,
-0.28605931997299194,
0.03294617310166359,
0.31157127022743225,
-0.28570568561553955,
-0.15466506779193878,
0.42940664291381836,
-0.25149691104888916,
0.5774723887443542,
0.43583667278289795,
-0.6705461144447327,
-0.4594767093658447,
-0.5835264325141907,
-0.2609459161... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxandron/mmlu-mini-100 | maxandron | 2023-11-27T10:42:04Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T10:42:04Z | 2023-11-27T10:41:57.000Z | 2023-11-27T10:41:57 | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: topic
dtype: string
splits:
- name: train
num_bytes: 9650984
num_examples: 5800
download_size: 685438
dataset_size: 9650984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MohametSena/PASBio | MohametSena | 2023-11-28T01:58:23Z | 0 | 0 | null | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"biology",
"region:us"
] | 2023-11-28T01:58:23Z | 2023-11-27T10:43:01.000Z | 2023-11-27T10:43:01 | ---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- biology
size_categories:
- 1K<n<10K
configs:
- config_name: srl_xs
  data_files: srl_xs/srl_xs.csv
- config_name: srl_sm
  data_files: srl_sm/srl_sm.csv
- config_name: srl_lg
  data_files: srl_lg/srl_lg.csv
pretty_name: PASBio
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/CA-IT_Parallel_Corpus | projecte-aina | 2023-11-27T15:34:55Z | 0 | 0 | null | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:it",
"language:multilingual",
"region:us"
] | 2023-11-27T15:34:55Z | 2023-11-27T10:45:27.000Z | 2023-11-27T10:45:27 | ---
language:
- ca
- it
- multilingual
multilinguality:
- translation
pretty_name: CA-IT Parallel Corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for CA-IT Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-IT Parallel Corpus is a Catalan-Italian dataset of **9.482.927** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Italian.
## Dataset Structure
Two separate txt files are provided, with the sentences sorted in the same order:
- ca-it_corpus.ca: contains 9.482.927 Catalan sentences.
- ca-it_corpus.it: contains 9.482.927 Italian sentences.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:--- | ---: |
| CCMatrix v1 | 7.757.357|
| MultiCCAligned v1 | 1.010.921|
| WikiMatrix | 271.587|
| GNOME | 1.198|
| KDE4 | 115.027 |
| QED | 52.616 |
| TED2020 v1 | 43.280 |
| OpenSubtitles | 225.732 |
| GlobalVoices| 5.209|
| **Total** | **9.482.927** |
All corpora were collected from [Opus](https://opus.nlpl.eu/).
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **9.482.927** parallel sentences and before training the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
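The similarity filter described above can be sketched as follows. Note this is a minimal illustration: the real pipeline computes the embeddings with LaBSE, whereas here precomputed vectors are passed in directly; only the 0.75 cutoff comes from the description above.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def filter_pairs(pairs, ca_embs, it_embs, threshold=0.75):
    """Keep only sentence pairs whose embedding cosine similarity meets the threshold."""
    return [p for p, u, v in zip(pairs, ca_embs, it_embs) if cosine(u, v) >= threshold]
```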
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation for low-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@bsc.es.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
| [
-0.3000892996788025,
-0.5752237439155579,
0.21974779665470123,
0.6255318522453308,
-0.2756998836994171,
0.053885046392679214,
-0.5073121190071106,
-0.2234681099653244,
0.6265404224395752,
0.46700793504714966,
-0.43387919664382935,
-0.9536921977996826,
-0.7085088491439819,
0.352399826049804... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AIML-TUDA/TEdBench_plusplus | AIML-TUDA | 2023-11-27T12:56:48Z | 0 | 0 | null | [
"task_categories:image-to-image",
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] | 2023-11-27T12:56:48Z | 2023-11-27T10:49:29.000Z | 2023-11-27T10:49:29 | ---
license: apache-2.0
task_categories:
- image-to-image
pretty_name: TEdBench++
size_categories:
- n<1K
---
# TEdBench++
This dataset contains TEdBench++, an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ``tedbench++.csv`` contains the text-based edit instructions for each original image, along with the parameters needed to reproduce the edited images with LEdits++.
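The CSV can be inspected with the standard library; note that the column names in the test below are purely illustrative, since the actual header is defined by ``tedbench++.csv`` itself:

```python
import csv

def load_edit_instructions(path):
    """Read the benchmark CSV into a list of dicts, one per benchmark entry."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

Each dict maps the header names to that row's values, so the edit instruction and LEdits++ parameters can be looked up by name without hard-coding a column order.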
| [
-0.2346169501543045,
-0.79386967420578,
0.3260725140571594,
0.26484575867652893,
-0.4201335906982422,
0.2060156762599945,
-0.06787508726119995,
-0.24618741869926453,
0.21109892427921295,
0.5676684975624084,
-0.937619149684906,
-0.6673370599746704,
-0.20764116942882538,
-0.00609313929453492... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Elfsong/beyond_blue | Elfsong | 2023-11-27T10:52:07Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T10:52:07Z | 2023-11-27T10:51:32.000Z | 2023-11-27T10:51:32 | ---
configs:
- config_name: default
data_files:
- split: anxiety
path: data/anxiety-*
- split: depression
path: data/depression-*
- split: ptsd
path: data/ptsd-*
- split: relationships
path: data/relationships-*
dataset_info:
features:
- name: url
dtype: string
- name: comments
list:
- name: author
dtype: string
- name: content
dtype: string
- name: date
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: content
dtype: string
- name: author
dtype: string
splits:
- name: anxiety
num_bytes: 56172807
num_examples: 6943
- name: depression
num_bytes: 60224734
num_examples: 6008
- name: ptsd
num_bytes: 21141031
num_examples: 1816
- name: relationships
num_bytes: 75923360
num_examples: 6799
download_size: 103962826
dataset_size: 213461932
---
# Dataset Card for "beyond_blue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
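Each example follows the feature schema declared above: top-level post fields (`url`, `title`, `date`, `content`, `author`) plus a `comments` list of dicts. A minimal sketch of walking one record (the record in the test is synthetic, built only to match the schema):

```python
def comment_authors(example):
    """Return the distinct comment authors of one post, preserving first-seen order."""
    seen = []
    for comment in example["comments"]:
        if comment["author"] not in seen:
            seen.append(comment["author"])
    return seen
```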
-0.7007102966308594,
-0.1338505893945694,
0.37102895975112915,
0.36706605553627014,
-0.21531881392002106,
0.18594518303871155,
0.27876386046409607,
-0.43785855174064636,
0.7114668488502502,
0.37373751401901245,
-1.0997732877731323,
-0.8049936890602112,
-0.3347424864768982,
-0.4985609352588... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/landofthelustrous | BangumiBase | 2023-11-27T11:51:35Z | 0 | 0 | null | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T11:51:35Z | 2023-11-27T11:01:07.000Z | 2023-11-27T11:01:07 | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Land Of The Lustrous
This is the image base of the bangumi Land of the Lustrous. We detected 19 characters and 845 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 202 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 121 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 94 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 11 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 20 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 38 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 12 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 17 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 12 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 49 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 18 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 36 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 35 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 44 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 14 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 24 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 8 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 79 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
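As a first preprocessing pass before training, byte-identical duplicates can be dropped with a content hash. This is only a minimal stdlib sketch of the cleanup recommended above; filtering perceptual near-duplicates and genuinely noisy samples would need more than this:

```python
import hashlib
from pathlib import Path

def drop_exact_duplicates(folder):
    """Delete byte-identical duplicate images in a folder, keeping the first seen."""
    seen = set()
    removed = []
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()
            removed.append(path.name)
        else:
            seen.add(digest)
    return removed
```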
| [
-0.6723750829696655,
-0.17501673102378845,
0.1476351022720337,
0.2237434983253479,
-0.25923916697502136,
-0.06445809453725815,
-0.03595631569623947,
-0.3398517370223999,
0.6288940906524658,
0.5834469199180603,
-0.9013490676879883,
-0.8588588833808899,
-0.6651767492294312,
0.443844169378280... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/versaillesnobara | BangumiBase | 2023-11-27T13:57:51Z | 0 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T13:57:51Z | 2023-11-27T11:01:40.000Z | 2023-11-27T11:01:40 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Versailles No Bara
This is the image base of the bangumi Versailles No Bara. We detected 35 characters and 4981 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 468 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 126 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 154 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 659 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 105 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 47 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 44 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 51 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 173 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 48 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 25 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 168 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 150 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 290 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 30 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 1251 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 125 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 34 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 66 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 76 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 150 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 147 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 13 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 62 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 65 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 119 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 109 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 14 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 13 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 28 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 17 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 12 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 19 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 10 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 113 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| [
-0.7384008169174194,
-0.17861860990524292,
0.15031079947948456,
0.20965340733528137,
-0.27552610635757446,
-0.13440662622451782,
0.013230105862021446,
-0.35353702306747437,
0.6168521642684937,
0.5952023267745972,
-0.9385504722595215,
-0.8907678127288818,
-0.6843164563179016,
0.517158627510... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/welcometothenhk | BangumiBase | 2023-11-27T11:56:50Z | 0 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T11:56:50Z | 2023-11-27T11:02:09.000Z | 2023-11-27T11:02:09 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Welcome To The N.h.k.
This is the image base of the bangumi Welcome to the N.H.K. We detected 17 characters and 2205 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1316 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 37 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 323 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 71 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 13 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 47 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 26 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 107 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 5 | [Download](10/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 11 | 74 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 16 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 8 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 22 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 45 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 78 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| [
-0.7164520621299744,
-0.17553026974201202,
0.13568027317523956,
0.20135848224163055,
-0.27538764476776123,
-0.038590628653764725,
0.0066399043425917625,
-0.39632487297058105,
0.6273210048675537,
0.5394443273544312,
-0.9440475106239319,
-0.8770976662635803,
-0.6866040825843811,
0.5275418758... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/anohimitahananonamaewobokutachiwamadashiranai | BangumiBase | 2023-11-27T12:12:17Z | 0 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T12:12:17Z | 2023-11-27T11:02:38.000Z | 2023-11-27T11:02:38 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Ano Hi Mita Hana No Namae Wo Bokutachi Wa Mada Shiranai.
This is the image base of the bangumi Ano Hi Mita Hana no Namae wo Bokutachi wa Mada Shiranai. We detected 19 characters and 1523 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 183 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 29 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 121 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 40 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 22 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 20 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 462 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 21 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 154 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 10 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 131 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 12 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 8 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 14 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 221 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 15 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 14 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 11 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 35 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| [
-0.7219958305358887,
-0.17881816625595093,
0.11681143194437027,
0.22851140797138214,
-0.23737496137619019,
-0.10057175159454346,
-0.0358455553650856,
-0.359928160905838,
0.6118905544281006,
0.5336080193519592,
-0.939225435256958,
-0.862019956111908,
-0.6427335739135742,
0.5145268440246582,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xixixi0503/formatted_util_deontology_commensense_justice_prompts_for_llama2 | xixixi0503 | 2023-11-27T19:13:17Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T19:13:17Z | 2023-11-27T11:03:24.000Z | 2023-11-27T11:03:24 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 59417065
num_examples: 67603
download_size: 14719019
dataset_size: 59417065
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Wil16/Testeur | Wil16 | 2023-11-28T14:56:50Z | 0 | 0 | null | [
"region:us"
] | 2023-11-28T14:56:50Z | 2023-11-27T11:04:47.000Z | 2023-11-27T11:04:47 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470574617385864,
-0.39626216888427734,
-0.1176246926188469,
-0.03545305132865906,
-0.6389272212982178,
0.5699821710586548,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keiliniportableheater/keilini-portable-heater | keiliniportableheater | 2023-11-27T11:13:17Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T11:13:17Z | 2023-11-27T11:13:05.000Z | 2023-11-27T11:13:05 | <p>The <a href="https://keilini-portable.clubeo.com/calendar/2023/11/29/keilini-portable-heater"><strong>Keilini portable heater</strong></a> was created by experienced engineers who needed to develop a high-end portable heater that was efficient and energy-saving compared to other heaters. The product manufactured by Kielini Company is economical, user-friendly, portable is suitable for offices, homes, and indoor use.</p>
<p>During the harsh winter, many people use space heaters to help warm their homes or office. That may be because they don't control their thermostat or because their house is drafty and certain areas don't warm up like others. Some people may also turn to area heaters to save money by not running their furnaces.</p>
<h2><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>Click Here -- Official Website -- Order Now</strong></a></span></h2>
<h2><span style="color: #ff6600;"><strong>➡️<span style="color: maroon;">● For Order Official Website - <a style="color: maroon;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">https://www.globalfitnessmart.com/get-keilini-portable-heater</a></span></strong></span><br /><strong>➡️<span style="color: red;">● Item Name: — </span><span style="color: red;"><a style="color: red;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">Keilini Portable Heater</a></span></strong><br /><span style="color: red;"><strong>➡️<span style="color: green;">● Ingredients: — All Natural</span></strong></span><br /><strong>➡️<span style="color: purple;">● Incidental Effects: — NA</span></strong><br /><strong>➡️<span style="color: blue;">● Accessibility: — <a style="color: blue;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater">Online</a></span></strong></h2>
<h2><span style="background-color: #ffcc00; color: black;"><a style="background-color: #ffcc00; color: black;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Company and Product Overview</strong></span></h2>
<p>The Keilini Company has operated for over ten years with numerous reviews from satisfied customers. Keilini is known for manufacturing innovative and unique products, and they have increased sales over the years by manufacturing quality products. Each of the Keilini Company's products has a particular style, making them superior to other competitors. Their most popular products include; portable heaters, bug-repellent lamps, HD dash cams, and light bulb cameras.</p>
<p>The <a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w.html"><strong>Keilini portable heater</strong></a> is a portable heater that is highly efficient and cost-effective. The heater has a high ceramic PTC heating technology that warms up a room and the person in it in one minute, irrespective of how cold the space has been.</p>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>What Is Keilini Heater - Keilini Heater UK Reviews</strong></span></h2>
<p>Keilini Heater is a highly innovative portable heater that has been designed by world-leading engineers to help UK users maintain normal temperature in the cold season. <a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-ca-uk-official-website.html"><strong>Keilini Portable Heater</strong></a> is a novel convection ceramic heater in the UK. Convection Heaters are known for their efficiency and speed of heating any room and sustaining the temperature for a long time. Keilini Heater UK comes with an incredibly high efficiency compared to other heaters. Keilini Heater is a brainchild of a group of experienced engineers who recognized that the heating industry needed some real innovations in the heating industry. The aim of Keilini Heater in UK is to offer cost friendly heating solutions to every household.</p>
<p>All Keilini Heater UK Reviews agreed that it is an inexpensive and easy-to-use portable heater that would suit every home, office, bathroom, and any space at all. Thanks to the fact that the <a href="https://keilini-portable.clubeo.com/"><strong>Keilini Portable Heater</strong></a> does not require any installation or maintenance costs and is extremely energy-efficient. Keilini heaters come with unique advantages, they have incredible efficiency and don't waste energy at all. It heats every area in your room in just 60 seconds. In addition, the Keilini Heater is really cheap, compact and very lightweight, but very powerful and effective in doing its job.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>(EXCLUSIVE OFFER)Click Here : "Keilini Portable Heater USA"Official Website!</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>How To Use - Keilini Heater Reviews UK</strong></span></h2>
<ul>
<li>Take it to any room where you want to use it.</li>
<li>Plug the <a href="https://groups.google.com/g/keilini-portable-heater/c/Y1Vr6NBSMto"><strong>Keilini Portable Heater</strong></a> into the outlet.</li>
<li>Set the desired mode.</li>
<li>Then just wait for this powerful device to slowly heat up the entire room.</li>
<li>You can take the heater with you anywhere and fight back the biting cold anytime, anywhere.</li>
</ul>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Unique Features - Keilini Heater UK Reviews</strong> </span></h2>
<p><strong>Customizable Individual Settings</strong></p>
<p>The <a href="https://sites.google.com/view/keilini-portable-heater-review/home"><strong>Keilini Portable Heater</strong></a> is equipped with three-gear adjustment. These flexible three gears can be toggled as you need. It allows you to choose the suitable heating gear as the indoor temperature changes. Using PTC ceramic heating technology, it never gets too warm or too cold, you can always select the heat setting that is most comfortable for you. Gear one is natural wind, the second gear is warm wind, and the third gear is strong warm current. (Product Power 750W/1500W)</p>
<p><strong>Fast & Easy Set Up</strong></p>
<p>One unique advantage of Keilini Heater UK over other products is the ease of using it. Keilini Heater is super ease to set up and doesn't require extra maintenance from you. All you need to do is to plug it into any wall socket in the room or space you want to keep warm, and the device will do the rest. <a href="https://lookerstudio.google.com/u/0/reporting/22409471-350c-4287-a76a-a574ee233ac0/page/8QQjD"><strong>Keilini Portable Heater</strong></a> only takes seconds to produce heat and can run at full blast for as long as you want. You can comfortably take Keilini heater anywhere with you and stay warm. No more relying on the high cost inefficient central heating systems.</p>
<p><strong>Safe & Quiet Operation</strong></p>
<p>Keilini Heater is built to ensure safety and peace of mind. Compared to other heaters, the <a href="https://colab.research.google.com/drive/1umiLVq-m09bH_JqUh8mhu4PXOZoODdTz"><strong>Keilini Portable Heater</strong></a> UK has no exposed elements that could accidentally burn you. The Automatic switch-off, switches off if the device accidentally falls over. Also, Keilini Heater comes with a portable handle design, which makes it easy to move the heater without hand burns. To ensure maximum concentration and relaxation, Keilini heater operates with zero noise. With the 37-45dB (Quieter than in a library) No loud uncomfortable valve pops to disturb your concentration or sleep. With all these incredible features, you can use <a href="https://gamma.app/public/Keilini-Portable-Heater---USACA-UKOfficial-Website-2vvh0oxf7q1p2sm?mode=doc"><strong>Keilini Portable Heater</strong></a> in your home or office with absolute peace of mind!</p>
<p><strong>Portable/ Compact Design</strong></p>
<p>The compact and portable design of the Keilini Heater makes it ideal for different locations. This portable heater is very powerful; although not big and chunky, it is very effective at dealing with the cold winter. Due to its light weight, you can easily take it with you wherever you go. Also, the outer casing hardly ever gets warm, making it easy to carry the Keilini from room to room without burning your fingers.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>SPECIAL PROMO[Limited Discount]: "Keilini Portable Heater USA"Official Website!</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Does Keilini Portable Heater UK Really Work?</strong></span></h2>
<p>Keilini Heater is a revolutionary portable heater with lots of incredible energy-efficient features. This device works effectively to deliver the needed heating to any room. Many Keilini Heater Reviews UK online confirm it works very simply. The <a href="https://keilini-portable-heater.jimdosite.com/"><strong>Keilini Portable Heater</strong> </a>does not require any installation or maintenance, making it extremely energy-efficient and cost-effective. Keilini heater is an easy-to-use portable heater that would suit any environment. <a href="https://www.scoop.it/topic/keilini-portable-heater/p/4149130319/2023/11/27/keilini-portable-heater-usa-ca-uk-official-website"><strong>Keilini portable heater</strong></a> is efficient and doesn't waste energy. Keilini Heater is manufactured to the highest standards of quality, and will deliver beyond your expectations. It comes with 3 gears that can be toggled at will: gear one is natural wind, the second gear is warm wind, and the third gear is strong warm current. (Keilini Heater Power is 750W/1500W)</p>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Pros - Keilini Portable Heater Reviews UK</strong></span></h2>
<ul>
<li>No complicated setup or maintenance. Use it straight out of the box!</li>
<li>Highly Efficient Ceramic PTC Heating Technology</li>
<li>Safe & Quiet to Use: 37-45 dB (quieter than a library)</li>
<li>Three-Gear Adjustment With Power Of 750w/1500w.</li>
<li>Ultra Compact & Sleek Design. Perfect for any home or office Decor.</li>
<li>You can use <a href="https://www.scoop.it/topic/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w"><strong>Keilini Portable Heater</strong></a> in your home with peace of mind.</li>
<li>If it is knocked over, it will automatically shut down.</li>
<li>Keilini heater makes low noise during operation.</li>
<li>Portable handle design, which makes it easy to move the heater without hand burns.</li>
<li>50% Discount When You Make Purchase Today!</li>
<li>30-Day Money Back Guarantee.</li>
<li>Sustain the Perfect Temperature</li>
<li>Flexible three gears can be toggled as you need.</li>
<li>Energy-saving. Keilini heater can heat any room and save a lot.</li>
<li>You can choose the suitable heating gear as the indoor temperature changes.</li>
<li>Keilini heater only takes 60 seconds to produce heat and can run at full blast for as long as you want.</li>
<li>Guaranteed high quality. Made to the highest standards of quality.</li>
</ul>
<h2><strong>Cons</strong></h2>
<ul>
<li>Available on the <a href="https://huggingface.co/keiliniheaterusa/keilini-portable-heater/blob/main/README.md"><strong>Keilini Portable Heater</strong></a> official website.</li>
<li>Supply and the 50% Discount Offer may end anytime soon.</li>
</ul>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Where to Buy Keilini Heater – Keilini Portable Heater Reviews</strong></span></h2>
<p>Whether you want to confirm the current price of Keilini heaters, gain access to the ongoing 50% promo, or buy Keilini ceramic portable heater, you must visit the official website.</p>
<p>The Keilini company is currently running out of stock due to high demand. To ensure you don’t miss out, confirm Keilini heater availability on their website and seize any chance you’ve got to buy one today.</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>SPECIAL PROMO: Get Keilini Portable Heater at the Lowest Discounted Price Online</strong></a></span></h2>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Keilini Portable Heater Price</strong></span></h2>
<p>Below is the current cost price of <a href="https://huggingface.co/datasets/keiliniheaterusa/keilini-portable-heater/blob/main/README.md"><strong>keilini portable heater.</strong></a> You can confirm recent price changes from their official website.</p>
<ul>
<li><strong>Buy 1 unit of <a href="https://carehealthreview.blogspot.com/2023/11/keilini-portable-heaterusa-no-1.html">Keilini Portable Heater</a> price at £59.99</strong></li>
<li><strong>Buy 2 units of <a href="https://forum.mush.com.br/topic/182801/keilini-portable-heater-usa-ca-uk-official-website">Keilini Portable Heater</a> price at £99.98</strong></li>
<li><strong>Buy 3 units of <a href="https://glonet.com/forum/thread/24016/keilini-portable-heater-usa-ca-uk%E3%80%90official-website%E3%80%91/">Keilini Portable Heater</a> price at £139.99</strong></li>
<li><strong>Buy 4 units of <a href="https://groups.google.com/g/comp.text.tex/c/YtuDPxEhNaA">Keilini Portable Heater </a>price at £159.96</strong></li>
</ul>
<h2 style="text-align: center;"><span style="text-decoration: underline;"><strong>Final Remarks - Keilini Portable Heater Reviews UK</strong></span></h2>
<p>Keilini Portable Heater is designed to be a cost-effective and energy-efficient portable heater. This device is perfect for any home or office, and it can easily raise the temperature of any room in just seconds. Keilini is loaded with incredible features that work effectively to ensure you stay toasty warm throughout the coldest days of winter.</p>
<p>With the Keilini heater, an extremely powerful, efficient, and portable heater built on ceramic PTC heating technology, you're in for a personalized heating experience this season. With all these amazing features and benefits, the Keilini Heater is selling at a mouth-watering 50% discount! Kindly visit the Keilini Heater official website to place your order, so you don't miss out on the ongoing offer!</p>
<h2 style="text-align: center;"><span style="color: #800000;"><a style="color: #800000;" href="https://www.globalfitnessmart.com/get-keilini-portable-heater"><strong>Read This: "More Information From Knowledgeable Expertise of Keilini Portable Heater"</strong></a></span></h2>
<h2><strong><span style="color: #800000;">@ READ MORE</span></strong></h2>
<p><strong><span style="color: #800000;"><a href="https://carehealthreview.blogspot.com/2023/11/keilini-portable-heaterusa-no-1.html">https://carehealthreview.blogspot.com/2023/11/keilini-portable-heaterusa-no-1.html</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/calendar/2023/11/29/keilini-portable-heater">https://keilini-portable.clubeo.com/calendar/2023/11/29/keilini-portable-heater</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w.html">https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w.html</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-ca-uk-official-website.html">https://keilini-portable.clubeo.com/page/keilini-portable-heater-usa-ca-uk-official-website.html</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable.clubeo.com/">https://keilini-portable.clubeo.com/</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://groups.google.com/g/keilini-portable-heater/c/Y1Vr6NBSMto">https://groups.google.com/g/keilini-portable-heater/c/Y1Vr6NBSMto</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://sites.google.com/view/keilini-portable-heater-review/home">https://sites.google.com/view/keilini-portable-heater-review/home</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://lookerstudio.google.com/u/0/reporting/22409471-350c-4287-a76a-a574ee233ac0/page/8QQjD">https://lookerstudio.google.com/u/0/reporting/22409471-350c-4287-a76a-a574ee233ac0/page/8QQjD</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://colab.research.google.com/drive/1umiLVq-m09bH_JqUh8mhu4PXOZoODdTz">https://colab.research.google.com/drive/1umiLVq-m09bH_JqUh8mhu4PXOZoODdTz</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://gamma.app/public/Keilini-Portable-Heater---USACA-UKOfficial-Website-2vvh0oxf7q1p2sm?mode=doc">https://gamma.app/public/Keilini-Portable-Heater---USACA-UKOfficial-Website-2vvh0oxf7q1p2sm?mode=doc</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://keilini-portable-heater.jimdosite.com/">https://keilini-portable-heater.jimdosite.com/</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://www.scoop.it/topic/keilini-portable-heater/p/4149130319/2023/11/27/keilini-portable-heater-usa-ca-uk-official-website">https://www.scoop.it/topic/keilini-portable-heater/p/4149130319/2023/11/27/keilini-portable-heater-usa-ca-uk-official-website</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://www.scoop.it/topic/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w">https://www.scoop.it/topic/keilini-portable-heater-usa-no-1-platinum-keilini-portable-heater-3gear-adjustment-with-power-of-750w</a></span></strong></p>
<p><strong><span style="color: #800000;"><a href="https://groups.google.com/g/sci.lang.japan/c/tVwhnwPYMy8">https://groups.google.com/g/sci.lang.japan/c/tVwhnwPYMy8</a></span></strong></p>
<p><strong> <a href="https://groups.google.com/g/comp.protocols.time.ntp/c/O_oOVCAcM-8">https://groups.google.com/g/comp.protocols.time.ntp/c/O_oOVCAcM-8</a></strong></p>
<p><strong><a href="https://groups.google.com/g/sci.lang/c/6xuZj_zisW0">https://groups.google.com/g/sci.lang/c/6xuZj_zisW0</a></strong></p>
<p><strong><a href="https://groups.google.com/g/comp.text.tex/c/YtuDPxEhNaA">https://groups.google.com/g/comp.text.tex/c/YtuDPxEhNaA</a></strong></p>
<p><strong><a href="https://groups.google.com/g/rec.arts.tv/c/23MI4nXtyv0">https://groups.google.com/g/rec.arts.tv/c/23MI4nXtyv0</a></strong></p>
<p><strong><a href="https://groups.google.com/g/comp.editors/c/VgQNp7fVEKA">https://groups.google.com/g/comp.editors/c/VgQNp7fVEKA</a></strong></p>
<p><strong><a href="https://groups.google.com/g/mozilla.dev.platform/c/fCue_xtjSTc">https://groups.google.com/g/mozilla.dev.platform/c/fCue_xtjSTc</a></strong></p>
<p><strong><a href="https://groups.google.com/g/microsoft.public.project/c/1hya8FA6Pio">https://groups.google.com/g/microsoft.public.project/c/1hya8FA6Pio</a></strong></p>
<p><strong><a href="https://huggingface.co/keiliniheaterusa/keilini-portable-heater/blob/main/README.md">https://huggingface.co/keiliniheaterusa/keilini-portable-heater/blob/main/README.md</a></strong></p>
<p><strong><a href="https://huggingface.co/datasets/keiliniheaterusa/keilini-portable-heater/blob/main/README.md">https://huggingface.co/datasets/keiliniheaterusa/keilini-portable-heater/blob/main/README.md</a></strong></p>
<p><strong><a href="https://www.bitsdujour.com/view/keilini-portable-heater-usaca-ukofficial-website#comments104316">https://www.bitsdujour.com/view/keilini-portable-heater-usaca-ukofficial-website#comments104316</a></strong></p>
<p><strong><a href="https://forum.mush.com.br/topic/182801/keilini-portable-heater-usa-ca-uk-official-website">https://forum.mush.com.br/topic/182801/keilini-portable-heater-usa-ca-uk-official-website</a></strong></p>
<p><strong><a href="https://glonet.com/forum/thread/24016/keilini-portable-heater-usa-ca-uk%E3%80%90official-website%E3%80%91/">https://glonet.com/forum/thread/24016/keilini-portable-heater-usa-ca-uk%E3%80%90official-website%E3%80%91/</a></strong></p> | [
-0.6271914839744568,
-0.3068810999393463,
0.4640839695930481,
0.18772150576114655,
-0.44182002544403076,
-0.42856529355049133,
-0.06483033299446106,
-0.39786720275878906,
0.4761982560157776,
-0.2221882939338684,
-0.2418556809425354,
-0.13596300780773163,
-0.13803020119667053,
0.11805511265... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shachardon/midjourney-threads | shachardon | 2023-11-27T12:51:15Z | 0 | 0 | null | [
"task_categories:text-to-image",
"size_categories:100K<n<1M",
"language:en",
"arxiv:2311.12131",
"region:us"
] | 2023-11-27T12:51:15Z | 2023-11-27T11:22:44.000Z | 2023-11-27T11:22:44 | ---
task_categories:
- text-to-image
language:
- en
pretty_name: Midjourney-Threads
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path:
- "threads_0.csv"
- "threads_20000.csv"
- "threads_40000.csv"
- "threads_60000.csv"
- "threads_80000.csv"
- "threads_100000.csv"
- "threads_120000.csv"
- "threads_140000.csv"
- "threads_160000.csv"
---
# Dataset Card for Midjourney-Threads
<!-- Provide a quick summary of the dataset. -->
This dataset contains user prompts from the Midjourney Discord channel, organized into "threads of interaction".
Each thread contains a user's trials to create one target image. We hope this dataset will support research on how users iteratively refine their prompts in response to model feedback.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Language(s) (NLP):** English
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/shachardon/Mid-Journey-to-alignment
- **Paper:** https://arxiv.org/abs/2311.12131
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Main columns:

- `text` - the original prompt
- `args` - predefined parameters (such as the aspect ratio, chaos and [more][myexample])
- `channel_id` - the discord channel
- `userid` - an anonymous user id
- `timestamp` - a timestamp of the prompt creation
- `label` - True if an image generated from that prompt was upscaled, otherwise False
- `id` - unique id of the prompt
- `url_png` - link to the generated images (a 4-grid version)
- `main_content` - prefix of the prompt, without trailing magic-words
- `concreteness` - concreteness score, based on [this paper][concpaper]
- `repeat_words` - the occurrences of each word that appears more than once in the prompt, excluding stop words
- `perplexity` - the perplexity GPT-2 assigns to each prompt
- `word_len` - the number of words
- `caption_0-3` - captions generated by the BLIP-2 model
- `phase` - train/test split, as used to train image/text classifiers
- `magic_ratio` - the percentage of words recognized as magic words in the prompt
- `thread_id` - the id of the thread
- `depth` - the max depth of a constituency parse tree of the prompt
[myexample]: https://docs.midjourney.com/docs/parameter-list "markdown more"
[concpaper]: https://link.springer.com/article/10.3758/s13428-013-0403-5 "markdown this paper"
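As a rough illustration of the thread structure, the sketch below groups prompts by `thread_id` and checks whether a thread ended in an upscale. The rows here are hypothetical toy records that only mimic a few of the dataset's column names; the real data ships as CSV shards such as `threads_0.csv`.

```python
from collections import defaultdict

# Toy rows mimicking a few of the dataset's columns (hypothetical values).
rows = [
    {"thread_id": 0, "userid": "u1", "timestamp": "2023-01-23T10:00", "text": "a cat", "label": False},
    {"thread_id": 0, "userid": "u1", "timestamp": "2023-01-23T10:02", "text": "a cat, photorealistic", "label": True},
    {"thread_id": 1, "userid": "u2", "timestamp": "2023-01-24T09:00", "text": "a castle", "label": False},
]

# Group prompts by thread.
threads = defaultdict(list)
for row in rows:
    threads[row["thread_id"]].append(row)

# Per-thread statistics: thread length, and whether any prompt in the
# thread led to an upscaled image (the `label` column).
stats = {
    tid: {
        "n_prompts": len(prompts),
        "upscaled": any(p["label"] for p in prompts),
    }
    for tid, prompts in threads.items()
}
print(stats)
```

The same grouping applies unchanged after loading the real CSV shards, since each prompt already carries its `thread_id`.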
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
We construct the dataset by scraping user-generated prompts from the Midjourney Discord server.
The server contains channels in which a user can type a prompt and arguments, and then the Midjourney bot replies with 4 generated images, combined into a grid. Then, if the user is satisfied with one of the 4 images, they can send an "upscale" command to the bot, to get an upscaled version of the desired image.
We randomly choose one of the "newbies" channels, where both new and experienced users are experimenting with general domain prompts. We collect 693,528 prompts (from 23 January to 1 March 2023), together with their matching images and meta-data such as timestamps and user ids (which we anonymize).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We fully anonymize the data by removing user names and other user-specific meta-data. If you recognize your prompts here and want to remove them, please send us an [email](mailto:shachar.don-yehiya@mail.huji.ac.il).
The Midjourney Discord is an open community that allows others to use images and prompts whenever they are posted in a public setting.
Paying users do own all assets they create, and therefore we do not include the image files in our dataset, but only links to them.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Our manual sample did not find any offensive content in the prompts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We split the prompts into threads automatically, so some threads may contain segmentation mistakes. For more about our annotation method, please see the paper.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{donyehiya2023human,
title={Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney},
author={Shachar Don-Yehiya and Leshem Choshen and Omri Abend},
year={2023},
eprint={2311.12131},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.45923036336898804,
-0.6033605337142944,
0.33877936005592346,
0.17776359617710114,
-0.320392370223999,
-0.09344945847988129,
-0.19662325084209442,
-0.34408727288246155,
0.15401732921600342,
0.3764394223690033,
-1.1320527791976929,
-0.5933917164802551,
-0.4966430068016052,
0.1946254074573... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maximedb/multilingual_librispeech_fr | maximedb | 2023-11-27T12:33:03Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:33:03Z | 2023-11-27T11:23:22.000Z | 2023-11-27T11:23:22 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 62912882604.152
num_examples: 258213
- name: train.9h
num_bytes: 532581041.633
num_examples: 2167
- name: train.1h
num_bytes: 60218210.0
num_examples: 241
- name: validation
num_bytes: 620676040.84
num_examples: 2416
- name: test
num_bytes: 620016068.552
num_examples: 2426
download_size: 65546652537
dataset_size: 64746373965.177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train.9h
path: data/train.9h-*
- split: train.1h
path: data/train.1h-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AnnikaSimonsen/combined_train_dataset_fo-en | AnnikaSimonsen | 2023-11-27T12:00:35Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:00:35Z | 2023-11-27T11:46:37.000Z | 2023-11-27T11:46:37 | ---
dataset_info:
features:
- name: File name
dtype: string
- name: Faroese
dtype: string
- name: English translation
dtype: string
splits:
- name: train
num_bytes: 11318248
num_examples: 105634
download_size: 7455201
dataset_size: 11318248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MyRebRIc/datasetdojulius | MyRebRIc | 2023-11-27T12:01:45Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:01:45Z | 2023-11-27T12:00:56.000Z | 2023-11-27T12:00:56 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nu-dialogue/sfcoco2021 | nu-dialogue | 2023-11-28T00:29:38Z | 0 | 0 | null | [
"task_categories:image-to-text",
"language:ja",
"region:us"
] | 2023-11-28T00:29:38Z | 2023-11-27T12:01:24.000Z | 2023-11-27T12:01:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 309982178.6
num_examples: 729
- name: test
num_bytes: 35206083.4
num_examples: 81
download_size: 344054037
dataset_size: 345188262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
task_categories:
- image-to-text
language:
- ja
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
achouffe/leo | achouffe | 2023-11-27T12:06:08Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:06:08Z | 2023-11-27T12:05:10.000Z | 2023-11-27T12:05:10 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AhmadSaud/nov1_without_annotation | AhmadSaud | 2023-11-27T12:09:35Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:09:35Z | 2023-11-27T12:09:35.000Z | 2023-11-27T12:09:35 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nu-dialogue/sfcoco2022 | nu-dialogue | 2023-11-28T00:30:10Z | 0 | 0 | null | [
"task_categories:image-to-text",
"language:ja",
"region:us"
] | 2023-11-28T00:30:10Z | 2023-11-27T12:15:33.000Z | 2023-11-27T12:15:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 366416649.50223213
num_examples: 806
- name: test
num_bytes: 41865941.49776786
num_examples: 90
download_size: 405465686
dataset_size: 408282591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
task_categories:
- image-to-text
language:
- ja
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dderr/testdataset2 | dderr | 2023-11-27T12:26:24Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:26:24Z | 2023-11-27T12:26:23.000Z | 2023-11-27T12:26:23 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
smile123/Video-Bench | smile123 | 2023-11-27T12:35:34Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:35:34Z | 2023-11-27T12:35:34.000Z | 2023-11-27T12:35:34 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ai4Happiness/Video-Bench | Ai4Happiness | 2023-11-27T14:16:49Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:16:49Z | 2023-11-27T12:40:21.000Z | 2023-11-27T12:40:21 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
onkar627/MentaBot_Project | onkar627 | 2023-11-27T12:44:46Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-27T12:44:46Z | 2023-11-27T12:42:53.000Z | 2023-11-27T12:42:53 | ---
license: mit
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aymanemalih/mfi | aymanemalih | 2023-11-27T12:45:21Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T12:45:21Z | 2023-11-27T12:45:21.000Z | 2023-11-27T12:45:21 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MeetShah/test | MeetShah | 2023-11-28T05:15:24Z | 0 | 0 | null | [
"task_categories:text-classification",
"finance",
"region:us"
] | 2023-11-28T05:15:24Z | 2023-11-27T12:52:04.000Z | 2023-11-27T12:52:04 | ---
task_categories:
- text-classification
tags:
- finance
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BangumiBase/ginnosaji | BangumiBase | 2023-11-27T15:00:15Z | 0 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T15:00:15Z | 2023-11-27T13:00:54.000Z | 2023-11-27T13:00:54 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Gin No Saji
This is the image base of the bangumi Gin no Saji; we detected 27 characters and 3590 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
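The recommended preprocessing can start with a simple mechanical pass. The sketch below is a hypothetical heuristic, not part of this dataset's tooling: after downloading a character archive such as `6/dataset.zip`, it drops suspiciously small entries, with the size threshold chosen as an assumption.

```python
import io
import zipfile

# Hypothetical noise filter: keep only .png entries at least MIN_BYTES large.
# The threshold is a crude stand-in for real preprocessing such as manual
# review or an image classifier.
MIN_BYTES = 1024

def keep_images(zip_bytes: bytes, min_bytes: int = MIN_BYTES) -> list[str]:
    """Return names of .png entries whose size is at least `min_bytes`."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [
            info.filename
            for info in zf.infolist()
            if info.filename.endswith(".png") and info.file_size >= min_bytes
        ]

# Build a toy in-memory archive to demonstrate the filter.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("0.png", b"x" * 2048)  # plausible image, kept
    zf.writestr("1.png", b"x" * 10)    # suspiciously small, dropped
print(keep_images(buf.getvalue()))     # → ['0.png']
```

The 1 KB cutoff is only a placeholder; real cleaning would more likely rely on visual inspection of each character folder.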
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 18 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 700 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 181 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 97 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 35 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 44 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 1308 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 64 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 22 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 56 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 14 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 8 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 81 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 41 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 48 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 12 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 31 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 11 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 80 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 58 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 65 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 10 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 490 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 8 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 57 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| [
-0.6736518740653992,
-0.17185254395008087,
0.1153508797287941,
0.19603917002677917,
-0.2558305263519287,
-0.10989271104335785,
-0.04386192187666893,
-0.3778369128704071,
0.6717191934585571,
0.5041681528091431,
-0.9219903945922852,
-0.8353495597839355,
-0.6530610918998718,
0.523333668708801... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aegean-ai/engine-anomaly-detection-dataset | aegean-ai | 2023-11-28T19:32:08Z | 0 | 0 | null | [
"region:us"
] | 2023-11-28T19:32:08Z | 2023-11-27T13:17:52.000Z | 2023-11-27T13:17:52 | ---
license: other
license_name: ntt
license_link: https://zenodo.org/records/3351307/files/LICENSE.pdf?download=1
| [
-0.11537430435419083,
-0.22452086210250854,
0.3911401629447937,
0.35756441950798035,
-1.009035348892212,
-0.25076019763946533,
0.3497662842273712,
-0.5139469504356384,
0.1870526671409607,
0.7380144000053406,
-1.013312816619873,
-0.4446151852607727,
-0.2748897671699524,
0.29988136887550354,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
plaguss/end2end_textclassification | plaguss | 2023-11-27T13:33:58Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:33:58Z | 2023-11-27T13:33:48.000Z | 2023-11-27T13:33:48 | ---
dataset_info:
features:
- name: text
dtype: string
id: field
- name: label
list:
- name: user_id
dtype: string
id: question
- name: value
dtype: string
id: suggestion
- name: status
dtype: string
id: question
- name: label-suggestion
dtype: string
id: suggestion
- name: label-suggestion-metadata
struct:
- name: type
dtype: string
id: suggestion-metadata
- name: score
dtype: float32
id: suggestion-metadata
- name: agent
dtype: string
id: suggestion-metadata
- name: external_id
dtype: string
id: external_id
- name: metadata
dtype: string
id: metadata
splits:
- name: train
num_bytes: 343408
num_examples: 1000
download_size: 181964
dataset_size: 343408
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "end2end_textclassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42846062779426575,
-0.1637413650751114,
0.0555395632982254,
-0.08522146940231323,
-0.14094524085521698,
0.08691702038049698,
0.0056590549647808075,
-0.3780243396759033,
0.6010180711746216,
0.35432928800582886,
-0.7390454411506653,
-0.5973405838012695,
-0.6094667911529541,
-0.29757070541... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chienpham/vnpara | chienpham | 2023-11-27T13:39:51Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:39:51Z | 2023-11-27T13:38:14.000Z | 2023-11-27T13:38:14 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuyButter/Forklift-Person-Dataset | HuyButter | 2023-11-27T14:35:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T14:35:32Z | 2023-11-27T13:39:25.000Z | 2023-11-27T13:39:25 | ---
license: apache-2.0
---
| [
-0.128533735871315,
-0.18616747856140137,
0.6529128551483154,
0.4943627715110779,
-0.19319336116313934,
0.2360745221376419,
0.3607197701931,
0.05056330934166908,
0.5793653130531311,
0.740013837814331,
-0.6508103013038635,
-0.23783954977989197,
-0.7102248668670654,
-0.04782583937048912,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rawodo/DocLayNet-large_paragraphs_encoded_ml512 | Rawodo | 2023-11-28T23:52:21Z | 0 | 0 | null | [
"region:us"
] | 2023-11-28T23:52:21Z | 2023-11-27T13:39:37.000Z | 2023-11-27T13:39:37 | ---
dataset_info:
features:
- name: page_hash
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: normalized_bboxes
sequence:
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3793445572
num_examples: 150701
- name: validation
num_bytes: 401719948
num_examples: 15959
- name: test
num_bytes: 275482368
num_examples: 10944
download_size: 110392206
dataset_size: 4470647888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
| [
-0.128533735871315,
-0.18616747856140137,
0.6529128551483154,
0.4943627715110779,
-0.19319336116313934,
0.2360745221376419,
0.3607197701931,
0.05056330934166908,
0.5793653130531311,
0.740013837814331,
-0.6508103013038635,
-0.23783954977989197,
-0.7102248668670654,
-0.04782583937048912,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlexWortega/InstructCaptions | AlexWortega | 2023-11-27T14:34:22Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:34:22Z | 2023-11-27T13:41:24.000Z | 2023-11-27T13:41:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 48943069439.6
num_examples: 33650
download_size: 35099473289
dataset_size: 48943069439.6
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Instruct-style Text Image dataset

The following dataset was crawled and filtered from COYO and Laion, and captioned with LLaVA 13B. | [
0.01010420173406601,
-0.4744560122489929,
0.43350595235824585,
0.4211589992046356,
-0.7156917452812195,
0.04076150059700012,
0.14596417546272278,
-0.5536628365516663,
0.7285497784614563,
1.0866175889968872,
-0.6198244690895081,
-0.5706422328948975,
-0.7637929320335388,
0.14974412322044373,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
younghoonKIM/noise_korean | younghoonKIM | 2023-11-27T13:46:57Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:46:57Z | 2023-11-27T13:41:59.000Z | 2023-11-27T13:41:59 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_baseline_v5_full_first_permute | tyzhu | 2023-11-27T15:15:34Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:15:34Z | 2023-11-27T13:48:16.000Z | 2023-11-27T13:48:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 2496440.0
num_examples: 2385
- name: validation
num_bytes: 335684
num_examples: 300
download_size: 0
dataset_size: 2832124.0
---
# Dataset Card for "squad_qa_baseline_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.435785710811615,
-0.10632920265197754,
0.24399414658546448,
0.5264371037483215,
-0.27052924036979675,
0.13029268383979797,
0.5947995185852051,
0.042342230677604675,
0.6501848697662354,
0.5093148946762085,
-1.3233908414840698,
-0.9698823690414429,
-0.40545764565467834,
-0.032021645456552... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_baseline_v5_full_last_permute | tyzhu | 2023-11-27T15:15:51Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:15:51Z | 2023-11-27T13:48:41.000Z | 2023-11-27T13:48:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 2496440.0
num_examples: 2385
- name: validation
num_bytes: 335684
num_examples: 300
download_size: 0
dataset_size: 2832124.0
---
# Dataset Card for "squad_qa_baseline_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3546665608882904,
-0.025863666087388992,
0.40103796124458313,
0.44039294123649597,
-0.21400125324726105,
0.1552073359489441,
0.5166321992874146,
0.05218084529042244,
0.5962396860122681,
0.5275481939315796,
-1.2311608791351318,
-0.9656991958618164,
-0.26661416888237,
0.024269070476293564... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_baseline_v5_full_no_permute | tyzhu | 2023-11-27T15:16:08Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:16:08Z | 2023-11-27T13:49:19.000Z | 2023-11-27T13:49:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 2496440.0
num_examples: 2385
- name: validation
num_bytes: 335684
num_examples: 300
download_size: 0
dataset_size: 2832124.0
---
# Dataset Card for "squad_qa_baseline_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3843345046043396,
-0.11827550828456879,
0.3096649944782257,
0.516999363899231,
-0.23477870225906372,
0.13134686648845673,
0.5360642671585083,
0.020622344687581062,
0.6744789481163025,
0.6012526154518127,
-1.281757116317749,
-0.9909924864768982,
-0.32123857736587524,
0.005356236826628447... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_context_v5_full_first_permute | tyzhu | 2023-11-27T15:16:24Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:16:24Z | 2023-11-27T13:49:53.000Z | 2023-11-27T13:49:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4350151.0
num_examples: 2385
- name: validation
num_bytes: 570908
num_examples: 300
download_size: 0
dataset_size: 4921059.0
---
# Dataset Card for "squad_qa_context_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3991670608520508,
-0.13000638782978058,
0.2970179617404938,
0.6168501973152161,
-0.34221771359443665,
0.028689812868833542,
0.5348133444786072,
-0.007387487683445215,
0.665722131729126,
0.4979981780052185,
-1.3746190071105957,
-0.9611469507217407,
-0.4039434790611267,
0.0792619287967681... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ScandEval/nordjylland-news-summarization-mini | ScandEval | 2023-11-27T13:50:41Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:50:41Z | 2023-11-27T13:50:08.000Z | 2023-11-27T13:50:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: text_len
dtype: int64
- name: summary_len
dtype: int64
splits:
- name: train
num_bytes: 1588698
num_examples: 1024
- name: val
num_bytes: 392467
num_examples: 256
- name: test
num_bytes: 3268194
num_examples: 2048
download_size: 3271757
dataset_size: 5249359
---
# Dataset Card for "nordjylland-news-summarization-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6548962593078613,
-0.15035860240459442,
0.2746511399745941,
0.13801635801792145,
-0.49189552664756775,
-0.11894035339355469,
0.04805457592010498,
0.01924036256968975,
1.117450475692749,
0.34822672605514526,
-0.8794042468070984,
-0.8228366374969482,
-0.60496586561203,
-0.1917550414800644... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_context_v5_full_last_permute | tyzhu | 2023-11-27T15:16:39Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:16:39Z | 2023-11-27T13:50:26.000Z | 2023-11-27T13:50:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4350151.0
num_examples: 2385
- name: validation
num_bytes: 570908
num_examples: 300
download_size: 0
dataset_size: 4921059.0
---
# Dataset Card for "squad_qa_context_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3209103047847748,
-0.04071597009897232,
0.4710012972354889,
0.5207856297492981,
-0.2811838984489441,
0.04873674362897873,
0.4522821605205536,
0.006018122658133507,
0.6098087430000305,
0.5145134329795837,
-1.2847564220428467,
-0.9603012204170227,
-0.25109201669692993,
0.14183026552200317... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_context_v5_full_no_permute | tyzhu | 2023-11-27T15:16:55Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:16:55Z | 2023-11-27T13:51:16.000Z | 2023-11-27T13:51:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4350151.0
num_examples: 2385
- name: validation
num_bytes: 570908
num_examples: 300
download_size: 0
dataset_size: 4921059.0
---
# Dataset Card for "squad_qa_context_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.34405457973480225,
-0.13750149309635162,
0.37315496802330017,
0.6136419773101807,
-0.30199354887008667,
0.029878508299589157,
0.47055837512016296,
-0.0238669253885746,
0.6950304508209229,
0.5810707807540894,
-1.3389407396316528,
-0.986038863658905,
-0.3116273880004883,
0.124173201620578... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zainaru/XSS | Zainaru | 2023-11-27T13:52:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T13:52:34Z | 2023-11-27T13:51:51.000Z | 2023-11-27T13:51:51 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShynBui/law_term_classification | ShynBui | 2023-11-27T13:56:39Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T13:56:39Z | 2023-11-27T13:56:39.000Z | 2023-11-27T13:56:39 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
babosso75/ListeA | babosso75 | 2023-11-27T13:58:07Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-27T13:58:07Z | 2023-11-27T13:58:06.000Z | 2023-11-27T13:58:06 | ---
license: mit
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cyanelis/15485 | cyanelis | 2023-11-27T14:03:56Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:03:56Z | 2023-11-27T14:03:34.000Z | 2023-11-27T14:03:34 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coeuslearning/Lagreement | coeuslearning | 2023-11-27T14:06:11Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:06:11Z | 2023-11-27T14:06:11.000Z | 2023-11-27T14:06:11 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
idleheroevich2/Krebs | idleheroevich2 | 2023-11-27T14:30:24Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-27T14:30:24Z | 2023-11-27T14:15:43.000Z | 2023-11-27T14:15:43 | ---
license: unknown
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/law_data | nlplabtdtu | 2023-11-27T14:16:33Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:16:33Z | 2023-11-27T14:15:59.000Z | 2023-11-27T14:15:59 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LanguageBind/Video-Bench | LanguageBind | 2023-11-28T01:10:31Z | 0 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-28T01:10:31Z | 2023-11-27T14:21:23.000Z | 2023-11-27T14:21:23 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pavitemple/Accident-Multiple-Labels-V2 | pavitemple | 2023-11-27T14:31:53Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T14:31:53Z | 2023-11-27T14:26:23.000Z | 2023-11-27T14:26:23 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_title_v5_full_last_permute | tyzhu | 2023-11-27T15:09:01Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:09:01Z | 2023-11-27T14:52:03.000Z | 2023-11-27T14:52:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7724566.286747957
num_examples: 4778
- name: validation
num_bytes: 353148
num_examples: 300
download_size: 1323670
dataset_size: 8077714.286747957
---
# Dataset Card for "squad_qa_title_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2386210411787033,
0.06016867607831955,
0.449226051568985,
0.43662744760513306,
-0.31470853090286255,
0.2707327902317047,
0.5294636487960815,
0.10359526425600052,
0.6444113850593567,
0.5211895108222961,
-1.2222076654434204,
-1.0397834777832031,
-0.3588913679122925,
0.17425628006458282,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_title_v5_full_no_permute | tyzhu | 2023-11-27T15:09:23Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:09:23Z | 2023-11-27T14:52:32.000Z | 2023-11-27T14:52:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7724566.286747957
num_examples: 4778
- name: validation
num_bytes: 353148
num_examples: 300
download_size: 1180079
dataset_size: 8077714.286747957
---
# Dataset Card for "squad_qa_title_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.25973352789878845,
-0.01453939825296402,
0.3450220227241516,
0.5336771011352539,
-0.3358359634876251,
0.25243932008743286,
0.5436173677444458,
0.05727878957986832,
0.7303087711334229,
0.5888449549674988,
-1.2744051218032837,
-1.0634201765060425,
-0.41729089617729187,
0.1597524881362915,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_wrong_title_v5_full_first_permute | tyzhu | 2023-11-27T15:09:46Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:09:46Z | 2023-11-27T14:53:03.000Z | 2023-11-27T14:53:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7855838.683639287
num_examples: 4778
- name: validation
num_bytes: 361864
num_examples: 300
download_size: 1370780
dataset_size: 8217702.683639287
---
# Dataset Card for "squad_qa_wrong_title_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.1587744504213333,
-0.11216800659894943,
0.26050257682800293,
0.5989452004432678,
-0.3252108693122864,
0.2765079140663147,
0.6102404594421387,
0.039488207548856735,
0.6243736743927002,
0.47674813866615295,
-1.3053653240203857,
-0.8945074677467346,
-0.6521527171134949,
0.17244693636894226... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_wrong_title_v5_full_last_permute | tyzhu | 2023-11-27T15:10:12Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:10:12Z | 2023-11-27T14:53:34.000Z | 2023-11-27T14:53:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7855838.683639287
num_examples: 4778
- name: validation
num_bytes: 361864
num_examples: 300
download_size: 1363399
dataset_size: 8217702.683639287
---
# Dataset Card for "squad_qa_wrong_title_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.10259737819433212,
-0.04081492871046066,
0.4084470272064209,
0.5266246795654297,
-0.2773188352584839,
0.30845245718955994,
0.5500115752220154,
0.05052148178219795,
0.578822135925293,
0.49177682399749756,
-1.2190678119659424,
-0.9023677706718445,
-0.5380666851997375,
0.22021102905273438,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_wrong_title_v5_full_no_permute | tyzhu | 2023-11-27T15:10:35Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:10:35Z | 2023-11-27T14:54:02.000Z | 2023-11-27T14:54:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7855838.683639287
num_examples: 4778
- name: validation
num_bytes: 361864
num_examples: 300
download_size: 1219794
dataset_size: 8217702.683639287
---
# Dataset Card for "squad_qa_wrong_title_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.12037856131792068,
-0.09360864013433456,
0.32942473888397217,
0.5823449492454529,
-0.2764643430709839,
0.29562780261039734,
0.5470730066299438,
0.023116065189242363,
0.6531982421875,
0.5405027270317078,
-1.2531603574752808,
-0.9193731546401978,
-0.5656319260597229,
0.20043964684009552,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_num_v5_full_first_permute | tyzhu | 2023-11-27T15:10:57Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:10:57Z | 2023-11-27T14:54:26.000Z | 2023-11-27T14:54:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7515576.963687525
num_examples: 4778
- name: validation
num_bytes: 343184
num_examples: 300
download_size: 1306567
dataset_size: 7858760.963687525
---
# Dataset Card for "squad_qa_num_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4246648848056793,
-0.04799090325832367,
0.21647782623767853,
0.5964874029159546,
-0.3896642327308655,
0.21927331387996674,
0.5829057693481445,
0.0997915267944336,
0.7506733536720276,
0.5665169954299927,
-1.2691373825073242,
-0.9942304491996765,
-0.44747239351272583,
0.09933359920978546,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_num_v5_full_last_permute | tyzhu | 2023-11-27T15:11:20Z | 0 | 0 | null | [
"region:us"
] | 2023-11-27T15:11:20Z | 2023-11-27T14:54:55.000Z | 2023-11-27T14:54:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7515576.963687525
num_examples: 4778
- name: validation
num_bytes: 343184
num_examples: 300
download_size: 1299186
dataset_size: 7858760.963687525
---
# Dataset Card for "squad_qa_num_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.35096707940101624,
0.03272039070725441,
0.3851378858089447,
0.5077164173126221,
-0.33381110429763794,
0.23882314562797546,
0.49772271513938904,
0.12087494879961014,
0.6991910934448242,
0.5858350992202759,
-1.1784849166870117,
-0.9948354959487915,
-0.29340270161628723,
0.1571846008300781... | null | null | null | null | null | null | null | null | null | null | null | null | null |