"id","author","created_at","lastModified","sha","downloads_30","downloads_alltime","likes","tags","tasks","description","citation","languages","language_category","size_categories","paperswithcode_id","private","gated","disabled","license","arxiv_id","url","task_ids" "allenai/c4","allenai","2022-03-02 23:29:22","2024-01-09 19:14:03+00:00","1588ec454efa1a09f29cd18ddd04fe05fc8653a2","496392","4669421","354","None","text-generation, fill-mask, language-modeling, masked-language-modeling","C4 Dataset Summary A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: ""https://commoncrawl.org"". This is the processed version of Google's C4 dataset We pre","None","af, am, ar, az, be, bg, bn, ca, ceb, co, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, fy, ga, gd, gl, gu, ha, haw, he, hi, hmn, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, und, ur, uz, vi, xh, yi, yo, zh, zu","multi","n|<|1|K|||1|K|<|n|<|1|0|K|||1|0|K|<|n|<|1|0|0|K|||1|0|0|K|<|n|<|1|M|||1|M|<|n|<|1|0|M|||1|0|M|<|n|<|1|0|0|M|||1|0|0|M|<|n|<|1|B|||1|B|<|n|<|1|0|B","None","False","False","False","odc-by","1910.10683","None","language-modeling, masked-language-modeling" "unimelb-nlp/wikiann","unimelb-nlp","2022-03-02 23:29:22","2024-02-22 14:32:02+00:00","f0a3be6dc5564c0cc4150bb660144800a1f539d4","47521","2954749","103","None","token-classification, named-entity-recognition","Dataset Card for WikiANN Dataset Summary WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person","None","ace, af, als, am, an, ang, ar, arc, arz, as, ast, ay, az, ba, bar, be, bg, bh, bn, bo, br, bs, ca, cbk, cdo, ce, ceb, ckb, co, crh, cs, csb, cv, cy, da, de, diq, dv, el, eml, en, eo, es, et, eu, 
ext, fa, fi, fo, fr, frr, fur, fy, ga, gan, gd, gl, gn, gu, hak, he, hi, hr, hsb, hu, hy, ia, id, ig, ilo, io, is, it, ja, jbo, jv, ka, kk, km, kn, ko, ksh, ku, ky, la, lb, li, lij, lmo, ln, lt, lv, lzh, mg, mhr, mi, min, mk, ml, mn, mr, ms, mt, mwl, my, mzn, nan, nap, nds, ne, nl, nn, no, nov, oc, or, os, pa, pdc, pl, pms, pnb, ps, pt, qu, rm, ro, ru, rw, sa, sah, scn, sco, sd, sgs, sh, si, sk, sl, so, sq, sr, su, sv, sw, szl, ta, te, tg, th, tk, tl, tr, tt, ug, uk, ur, uz, vec, vep, vi, vls, vo, vro, wa, war, wuu, xmf, yi, yo, yue, zea, zh","multi","n|<|1|K","None","False","False","False","unknown","None","None","named-entity-recognition" "HAERAE-HUB/KMMLU","HAERAE-HUB","2023-11-27 09:06:18","2024-03-05 14:13:32+00:00","d61b3f19e552c576bf5960dd24289763edc36a88","24676","1980159","62","None","multiple-choice","KMMLU (Korean-MMLU) We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM. Unlike previous Korean benchmarks th","None","ko","mono","1|0|K|<|n|<|1|0|0|K","None","False","False","False","cc-by-nd-4.0","2402.11548","None","None" "google/xtreme","google","2022-03-02 23:29:22","2024-02-22 17:12:06+00:00","ec5f1f46e9af79639a90684a7a70a956c4998f04","42104","1893686","97","None","multiple-choice, question-answering, token-classification, text-classification, text-retrieval, token-classification, multiple-choice-qa, extractive-qa, open-domain-qa, natural-language-inference, named-entity-recognition, part-of-speech","Dataset Card for ""xtreme"" Dataset Summary The Cross-lingual Natural Language Inference (XNLI) corpus is a crowd-sourced collection of 5,000 test and 2,500 dev pairs for the MultiNLI corpus. 
The pairs ","None","af, ar, bg, bn, de, el, en, es, et, eu, fa, fi, fr, he, hi, hu, id, it, ja, jv, ka, kk, ko, ml, mr, ms, my, nl, pt, ru, sw, ta, te, th, tl, tr, ur, vi, yo, zh","multi","n|<|1|K|||1|K|<|n|<|1|0|K|||1|0|K|<|n|<|1|0|0|K|||1|0|0|K|<|n|<|1|M","None","False","False","False","apache-2.0","None","None","multiple-choice-qa, extractive-qa, open-domain-qa, natural-language-inference, named-entity-recognition, part-of-speech" "Helsinki-NLP/opus-100","Helsinki-NLP","2022-03-02 23:29:22","2024-02-28 09:17:34+00:00","805090dc28bf78897da9641cdf08b61287580df9","24897","1450598","168","task_categories:translation, annotations_creators:no-annotation, language_creators:found, multilinguality:translation, source_datasets:extended, language:af, language:am, language:an, language:ar, language:as, language:az, language:be, language:bg, language:bn, language:br, language:bs, language:ca, language:cs, language:cy, language:da, language:de, language:dz, language:el, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fr, language:fy, language:ga, language:gd, language:gl, language:gu, language:ha, language:he, language:hi, language:hr, language:hu, language:hy, language:id, language:ig, language:is, language:it, language:ja, language:ka, language:kk, language:km, language:kn, language:ko, language:ku, language:ky, language:li, language:lt, language:lv, language:mg, language:mk, language:ml, language:mn, language:mr, language:ms, language:mt, language:my, language:nb, language:ne, language:nl, language:nn, language:no, language:oc, language:or, language:pa, language:pl, language:ps, language:pt, language:ro, language:ru, language:rw, language:se, language:sh, language:si, language:sk, language:sl, language:sq, language:sr, language:sv, language:ta, language:te, language:tg, language:th, language:tk, language:tr, language:tt, language:ug, language:uk, language:ur, language:uz, language:vi, language:wa, language:xh, 
language:yi, language:yo, language:zh, language:zu, license:unknown, size_categories:10M|1|T","None","False","False","False","cc0-1.0","None","None","language-modeling" "mozilla-foundation/common_voice_16_0","mozilla-foundation","2023-12-20 09:01:34","2023-12-21 13:53:03+00:00","5b41cdb6fba1effea26615d7c78df110694e7c33","590","121056","71","None","None","Dataset Card for Common Voice Corpus 16 Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 30328 recorded hours in the dataset also include demo","@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }","ab, af, am, ar, as, ast, az, ba, bas, be, bg, bn, br, ca, ckb, cnh, cs, cv, cy, da, de, dv, dyu, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gl, gn, ha, he, hi, hsb, hu, hy, ia, id, ig, is, it, ja, ka, kab, kk, kmr, ko, ky, lg, lij, lo, lt, ltg, lv, mdf, mhr, mk, ml, mn, mr, mrj, mt, myv, nan, ne, nhi, nl, nn, oc, or, os, pa, pl, ps, pt, quy, rm, ro, ru, rw, sah, sat, sc, sk, skr, sl, sq, sr, sv, sw, ta, te, th, ti, tig, tk, tok, tr, tt, tw, ug, uk, ur, uz, vi, vot, yi, yo, yue, zgh, zh","multi","None","None","False","auto","False","cc0-1.0","1912.06670","None","None" "esdurmus/wiki_lingua","esdurmus","2022-03-02 23:29:22","2024-01-05 08:06:54+00:00","ea3db3510cbd34d0f8dc612419ae40e4732f3b40","1088","112664","41","None","summarization","Dataset Card for ""wiki_lingua"" Dataset Summary We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. 
We extract article an","None","ar, cs, de, en, es, fr, hi, id, it, ja, ko, nl, pt, ru, th, tr, vi, zh","multi","10K<n<100K, 1K<n<10K","None","False","False","False","cc-by-3.0","2010.03093","None","None" "adithya7/xlel_wd","adithya7","2022-04-22 02:50:11","2022-07-13 07:46:57+00:00","a6d542d37b24cc1f2536af5e4afb850b9641e3ff","278","97612","2","None","None","XLEL-WD is a multilingual event linking dataset. This dataset contains mention references from multilingual Wikipedia/Wikinews articles to event items in Wikidata. The text descriptions for Wikidata e","@article{pratapa-etal-2022-multilingual, title = {Multilingual Event Linking to Wikidata}, author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko}, publisher = {arXiv}, year = {2022}, url = {https://arxiv.org/abs/2204.06535}, }","af, ar, be, bg, bn, ca, cs, da, de, el, en, es, fa, fi, fr, he, hi, hu, id, it, ja, ko, ml, mr, ms, nl, no, pl, pt, ro, ru, si, sk, sl, sr, sv, sw, ta, te, th, tr, uk, vi, zh","multi","1M<n<10M","None","False","False","False","cc-by-4.0","2204.06535","None","None" "Helsinki-NLP/opus_ubuntu","Helsinki-NLP","2022-03-02 23:29:22","2024-02-22 15:45:27+00:00","4dc07f768ea537e979b48b824c1fe254918d1f76","219","92472","4","None","translation","Dataset Card for Opus Ubuntu Dataset Summary These are translations of the Ubuntu software package messages, donated by the Ubuntu community.
To load a language pair which isn't part of the config, al","None","ace, af, ak, am, an, ang, ar, ary, as, ast, az, ba, bal, be, bem, ber, bg, bho, bn, bo, br, brx, bs, bua, byn, ca, ce, ceb, chr, ckb, co, crh, cs, csb, cv, cy, da, de, dsb, dv, dz, el, en, eo, es, et, eu, fa, ff, fi, fil, fo, fr, frm, frp, fur, fy, ga, gd, gl, gn, grc, gu, guc, gv, ha, haw, he, hi, hil, hne, hr, hsb, ht, hu, hy, ia, id, ig, io, is, it, iu, ja, jbo, jv, ka, kab, kg, kk, kl, km, kn, ko, kok, ks, ksh, ku, kw, ky, la, lb, lg, li, lij, lld, ln, lo, lt, ltg, lv, mai, mg, mh, mhr, mi, miq, mk, ml, mn, mr, ms, mt, mus, my, nan, nap, nb, nds, ne, nhn, nl, nn, no, nso, ny, oc, om, or, os, pa, pam, pap, pl, pms, pmy, ps, pt, qu, rm, ro, rom, ru, rw, sa, sc, sco, sd, se, shn, shs, si, sk, sl, sm, sml, sn, so, son, sq, sr, st, sv, sw, syr, szl, ta, te, tet, tg, th, ti, tk, tl, tlh, tr, trv, ts, tt, ug, uk, ur, uz, ve, vec, vi, wa, wae, wo, xal, xh, yi, yo, zh, zu, zza","multi","1|0|K|<|n|<|1|0|0|K|||1|K|<|n|<|1|0|K|||n|<|1|K","None","False","False","False","bsd-3-clause","None","None","None" "mozilla-foundation/common_voice_12_0","mozilla-foundation","2023-03-12 17:28:02","2023-11-17 18:09:06+00:00","eb58b6bb4457bd7b185edcd5a4286291b3f64e68","666","88967","22","None","automatic-speech-recognition","Dataset Card for Common Voice Corpus 12.0 Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 26119 recorded hours in the dataset also include de","@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. 
and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }","ab, ar, as, ast, az, ba, bas, be, bg, bn, br, ca, ckb, cnh, cs, cv, cy, da, de, dv, el, en, eo, es, et, eu, fa, fi, fr, gl, gn, ha, hi, hsb, hu, ia, id, ig, it, ja, ka, kab, kk, kmr, ko, ky, lg, lt, lv, mdf, mhr, mk, ml, mn, mr, mrj, mt, myv, nl, oc, or, pl, pt, quy, ro, ru, rw, sah, sat, sc, sk, skr, sl, sr, sw, ta, th, ti, tig, tok, tr, tt, tw, ug, uk, ur, uz, vi, vot, yo, yue, rm, zh, sv, pa, nn, ne, nan, hy, ga, fy","multi","1|0|M|<|n|<|1|0|0|M","None","False","auto","False","cc0-1.0","1912.06670","None","None" "castorini/mr-tydi","castorini","2022-03-02 23:29:22","2022-10-12 20:25:19+00:00","1d43c80218d06d0ef80f5b172ccabd848b948bc1","970","86336","19","None","text-retrieval","Dataset Summary Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. 
It is designed for monolingual retrieval, specifically to evaluate ranking","None","ar, bn, en, fi, id, ja, ko, ru, sw, te, th","multi","None","None","False","False","False","apache-2.0","None","None","None" "kakaobrain/kor_nli","kakaobrain","2022-03-02 23:29:22","2024-08-22 08:05:04+00:00","3e0e4626f66911b344c490c26e3cc07e6c3bb0f9","224","72388","20","task_categories:text-classification, task_ids:natural-language-inference, task_ids:multi-input-text-classification, annotations_creators:crowdsourced, language_creators:machine-generated, language_creators:expert-generated, multilinguality:monolingual, source_datasets:extended|multi_nli, source_datasets:extended|snli, source_datasets:extended|xnli, language:ko, license:cc-by-sa-4.0, size_categories:100K>> from datasets import load_dataset >>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"") >>> ds Dataset(","None","ko","mono","None","None","False","False","False","None","None","None","None" "Helsinki-NLP/opus_gnome","Helsinki-NLP","2022-03-02 23:29:22","2024-02-22 15:04:29+00:00","8362267db12f636640003de2381eff67277e6657","299","40448","1","None","translation","Dataset Card for Opus Gnome Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.
You can find the valid pairs in Homepage s","None","af, am, an, ang, ar, as, ast, az, bal, be, bem, bg, bn, bo, br, brx, bs, ca, crh, cs, csb, cy, da, de, dv, dz, el, en, eo, es, et, eu, fa, fi, fo, fr, fur, fy, ga, gd, gl, gn, gu, gv, ha, he, hi, hr, hu, hy, ia, id, ig, io, is, it, ja, jbo, ka, kg, kk, km, kn, ko, kr, ks, ku, ky, la, lg, li, lo, lt, lv, mai, mg, mi, mk, ml, mn, mr, ms, mt, mus, my, nb, nds, ne, nhn, nl, nn, no, nqo, nr, nso, oc, or, os, pa, pl, ps, pt, quz, ro, ru, rw, si, sk, sl, so, sq, sr, st, sv, sw, szl, ta, te, tg, th, tk, tl, tr, ts, tt, tyj, ug, uk, ur, uz, vi, wa, xh, yi, yo, zh, zu","multi","1|0|K|<|n|<|1|0|0|K|||1|K|<|n|<|1|0|K|||n|<|1|K","None","False","False","False","unknown","None","None","None" "copenlu/answerable_tydiqa","copenlu","2022-08-16 11:31:34","2024-07-12 11:53:23+00:00","4413588cd5f38fddbf047f87a7167a0a9de5369a","107","39407","8","None","question-answering, extractive-qa","Dataset Card for ""answerable-tydiqa"" Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages. Answerable TyDi QA is an extension of the GoldP subtask of the","None","en, ar, bn, fi, id, ja, sw, ko, ru, te, th","multi","1|0|0|K|<|n|<|1|M","None","False","False","False","apache-2.0","None","None","extractive-qa" "OpenAssistant/oasst2","OpenAssistant","2023-12-24 09:53:24","2024-01-11 06:09:29+00:00","179dd21fc55192153d94adb0e0ce8f69e222bf75","1297","38580","222","None","None","Open Assistant Conversations Dataset Release 2 (OASST2) Dataset Structure This dataset contains message trees. 
Each message tree has an initial prompt message as the root node, which can have multiple","None","en, es, ru, de, pl, th, vi, sv, bn, da, he, it, fa, sk, id, nb, el, nl, hu, eu, zh, eo, ja, ca, cs, bg, fi, pt, tr, ro, ar, uk, gl, fr, ko","multi","1|0|0|K|<|n|<|1|M","None","False","False","False","apache-2.0","2304.07327","None","None" "Babelscape/SREDFM","Babelscape","2023-06-13 18:35:19","2023-06-20 07:33:28+00:00","2732d2834e12e36510aeb2a468163ea2642d55db","2928","37479","14","None","token-classification","Relation Extraction (RE) is a task that identifies relationships between entities in a text, enabling the acquisition of relational facts and bridging the gap between natural language and structured k","@InProceedings{REDFM2023, author = {Huguet Cabot, Pere-Lluis and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and Navigli, Roberto}, title = {RED\\textsuperscript{FM}: a Filtered and Multilingual Relation Extraction Dataset}, booktitle = {Proceedings of the 2023 Conference on Association for Computational Linguistics}, year = {2023}, publisher = {Association for Computational Linguistics}, location = {Toronto, Canada}, }","ar, ca, de, el, en, es, fr, hi, it, ja, ko, nl, pl, pt, ru, sv, vi, zh","multi","1|0|M|<|n|<|1|0|0|M","None","False","False","False","cc-by-sa-4.0","2306.09802","None","None" "maywell/korean_textbooks","maywell","2023-12-27 23:13:45","2024-01-10 09:21:36+00:00","0f78e815bf4ba9ca0addf3484ee195d526c506a4","1377","35689","108","None","None","Massive Korean synthetic dataset This dataset is a large-scale Korean artificial data set created using Gemini Pro. 
It was created using the methodology described in Creation of synthetic textbook-qua","None","ko","mono","1M<n<10M","None","False","False","False","apache-2.0","2306.11644","None","None" "sentence-transformers/parallel-sentences-ccmatrix","sentence-transformers","2024-05-25 08:10:49","2024-06-18 19:49:55+00:00","0c2ecfe22787e119991dbc482b776dc9aa25aa9f","8249","33644","4","None","feature-extraction, sentence-similarity","Dataset Card for Parallel Sentences - CCMatrix This dataset contains parallel sentences (i.e. English sentence + the same sentences in another language) for numerous other languages. The texts origina","None","af, ar, ast, az, be, bg, bn, br, ca, ceb, cs, da, de, el, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, ha, he, hi, hr, hu, id, ig, ilo, is, it, ja, jv, ko, la, lb, lt, lv, mg, mk, ml, mr, ms, ne, nl, no, oc, or, pl, pt, ro, ru, sd, si, sk, sl, so, sq, sr, su, sv, sw, ta, tl, tr, uk, ur, vi, xh, yi, zh","multi","1B<n<10B","None","False","False","False","None","None","None","None" "mteb/sib200","mteb","2024-05-07 14:07:00","2024-05-07 14:59:53+00:00","a74d7350ea12af010cfb1c21e34f1f81fd2e615b","1212","27530","1","None","text-classification, topic-classification","Dataset Card for SIB-200 Dataset Summary SIB-200 is the largest publicly available topic classification dataset based on Flores-200 covering 205 languages and dialects.
The train/validation/test sets ","None","ace, acm, acq, aeb, af, ajp, ak, als, am, apc, ar, ars, ary, arz, as, ast, awa, ayr, azb, azj, ba, bm, ban, be, bem, bn, bho, bjn, bo, bs, bug, bg, ca, ceb, cs, cjk, ckb, crh, cy, da, de, dik, dyu, dz, el, en, eo, et, eu, ee, fo, fj, fi, fon, fr, fur, fuv, gaz, gd, ga, gl, gn, gu, ht, ha, he, hi, hne, hr, hu, hy, ig, ilo, id, is, it, jv, ja, kab, kac, kam, kn, ks, ka, kk, kbp, kea, khk, km, ki, rw, ky, kmb, kmr, knc, kg, ko, lo, lij, li, ln, lt, lmo, ltg, lb, lua, lg, luo, lus, lvs, mag, mai, ml, mar, min, mk, mt, mni, mos, mi, my, nl, nn, nb, npi, nqo, nso, nus, ny, oc, ory, pag, pa, pap, pbt, pes, plt, pl, pt, prs, quy, ro, rn, ru, sg, sa, sat, scn, shn, si, sk, sl, sm, sn, sd, so, st, es, sc, sr, ss, su, sv, swh, szl, ta, taq, tt, te, tg, tl, th, ti, tpi, tn, ts, tk, tum, tr, tw, tzm, ug, uk, umb, ur, uzn, vec, vi, war, wo, xh, ydd, yo, yue, zh, zsm, zu","multi","1|K|<|n|<|1|0|K","None","False","False","False","cc-by-sa-4.0","2309.07445","None","topic-classification" "occiglot/tokenizer-wiki-bench","occiglot","2024-03-13 14:49:07","2024-04-23 21:00:00+00:00","56a5ea2bfc6a6445e9e3ed58130ef1fb2759e6c6","4761","27245","5","None","None","Multilingual Tokenizer Benchmark This dataset includes pre-processed wikipedia data for tokenizer evaluation in 45 languages. 
We provide more information on the evaluation task in general this blogpos","None","af, ar, bg, ca, cs, da, de, el, en, es, et, eu, fa, fi, fr, ga, he, hi, hr, hu, hy, id, it, ja, ko, lt, lv, mr, nl, no, pl, pt, ro, ru, sa, sk, sl, sr, sv, ta, te, tr, uk, ur, vi","multi","None","None","False","False","False","mit","2012.15613","None","None" "taeminlee/Ko-StrategyQA","taeminlee","2024-01-12 01:58:26","2024-01-19 08:48:28+00:00","d243889a3eb6654029dbd7e7f9319ae31d58f97c","6240","26627","16","task_categories:text-retrieval, task_ids:document-retrieval, multilinguality:monolingual, source_datasets:Ko-StrategyQA, language:ko, size_categories:10K 답변에 오픈 어시스턴트라고 하는 경우가 나오기 때문또한 스탠포드 대학 번역 데이터에서 번역 과정 오류로 input에 입력없음 과 같이 추가된 부분 삭제그리고 ","None","ko","mono","None","None","False","False","False","apache-2.0","None","None","None" "Muennighoff/xP3x-sample","Muennighoff","2023-07-06 09:42:03","2023-09-18 13:51:06+00:00","70365cee44e6b6cd1ad5f76a052aefa6bbd667b7","619","4732","3","None","other","A multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities.","@misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} }","af, ar, az, be, bg, bn, br, bs, ca, ch, cs, cv, cy, da, de, el, en, eo, es, et, eu, fa, fi, fo, fr, fy, ga, gd, gl, gn, he, hi, hr, hu, hy, ia, id, ie, io, is, it, ja, jv, ka, kk, km, ko, ku, kw, la, lb, lt, lv, mi, mk, ml, mn, mr, ms, mt, my, nb, nl, nn, no, oc, pl, pt, qu, rn, ro, ru, sh, sl, sq, sr, sv, 
sw, ta, te, th, tk, tl, tr, tt, ug, uk, ur, uz, vi, vo, yi, zh, ace, acm, acq, aeb, af, ajp, ak, als, am, apc, ar, ars, ary, arz, as, ast, awa, ayr, azb, azj, ba, bm, ban, be, bem, bn, bho, bjn, bo, bs, bug, bg, ca, ceb, cs, cjk, ckb, crh, cy, da, de, dik, dyu, dz, el, en, eo, et, eu, ee, fo, fj, fi, fon, fr, fur, fuv, gaz, gd, ga, gl, gn, gu, ht, ha, he, hi, hne, hr, hu, hy, ig, ilo, id, is, it, jv, ja, kab, kac, kam, kn, ks, ka, kk, kbp, kea, khk, km, ki, rw, ky, kmb, kmr, knc, kg, ko, lo, lij, li, ln, lt, lmo, ltg, lb, lua, lg, luo, lus, lvs, mag, mai, ml, mar, min, mk, mt, mni, mos, mi, my, nl, nn, nb, npi, nso, nus, ny, oc, ory, pag, pa, pap, pbt, pes, plt, pl, pt, prs, quy, ro, rn, ru, sg, sa, sat, scn, shn, si, sk, sl, sm, sn, sd, so, st, es, sc, sr, ss, su, sv, swh, szl, ta, taq, tt, te, tg, tl, th, ti, tpi, tn, ts, tk, tum, tr, tw, tzm, ug, uk, umb, ur, uzn, vec, vi, war, wo, xh, ydd, yo, yue, zh, zsm, zu","multi","1|0|0|M|<|n|<|1|B","None","False","False","False","apache-2.0","None","None","None" "traintogpb/aihub-koen-translation-integrated-base-1m","traintogpb","2024-01-05 00:51:54","2024-01-05 04:17:17+00:00","eb15b41a52964f7d0906b5cc10c347fa4ec66de1","111","4536","3","task_categories:translation, language:en, language:ko, size_categories:1M /(/ 같은 오류 등...) 
Citation @misc{m","None","ko","mono","1|0|0|K|<|n|<|1|M","None","False","False","False","cc-by-sa-4.0","None","None","None" "nlpai-lab/openassistant-guanaco-ko","nlpai-lab","2023-06-01 06:54:34","2023-06-01 10:44:35+00:00","d1c4520a0fb31c086af415fc6a66fa1affa95a77","77","1622","8","task_categories:text-generation, task_categories:question-answering, task_categories:summarization, language:ko, license:apache-2.0, size_categories:10K|1|T","None","False","False","False","odc-by","None","None","None" "eaglewatch/Korean_Wikipedia_Dataset_for_GPT2_August_2022","eaglewatch","2023-08-25 05:30:30","2024-06-03 06:22:10+00:00","46d2589714cbda34e9fd6368c43c350025467c40","57","1400","6","task_categories:question-answering, task_categories:text2text-generation, task_categories:translation, task_categories:visual-question-answering, task_ids:open-domain-qa, task_ids:closed-domain-qa, task_ids:dialogue-generation, task_ids:visual-question-answering, annotations_creators:other, language_creators:other, multilinguality:multilingual, language:ko, license:apache-2.0, size_categories:100K>> from datasets import load_dataset >>> ds = load_dataset(""channelcorp/komagpie-raw-preview"", split=""train","None","ko","mono","None","None","False","False","False","None","None","None","None" "jp1924/KconfSpeech","jp1924","2024-03-24 08:28:43","2024-06-14 06:06:17+00:00","ce86bd94845e428dd200823e90d47a68c1303ef5","111","1144","2","task_categories:automatic-speech-recognition, language:ko, size_categories:1M>> from datasets import load_dataset >>> dataset =","None","ko","mono","1|0|M|<|n|<|1|0|0|M","None","False","False","False","cc-by-4.0","None","None","None" "allganize/financial-mmlu-ko","allganize","2024-03-28 22:25:24","2024-04-02 04:50:38+00:00","2b827871962e2f8dd3efb118d105f04c597f6f03","53","930","6","language:ko, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us","None","financial-mmlu-ko financial-mmlu-ko 
데이터는 금융 도메인의 다중 선택(Multiple Choice) 데이터셋입니다. 질문과 선택지가 주어졌을 때, 답을 찾는 객관식 문제입니다. 입력값은 text이며, 아래 Fewshot 예시의 텍스트를 system prompt 혹은 context로 함께 제공할 수 있습니다. 한국어 데이터를 생성","None","ko","mono","None","None","False","False","False","None","None","None","None" "Songweii/M3GIA","Songweii","2024-06-06 09:55:12","2024-06-27 08:02:29+00:00","07415814e27a715acf57b2b157fba88b79599d15","76","913","2","None","None","M3GIA: A Cognition Inspired Multilingual and Multimodal General Intelligence Ability [🌐 Homepage] | 🤗 Dataset | 🤗 Paper | 📖 arXiv | 💻 GitHub The evaluation code can be found in 💻 GitHub. [Abstract] As","None","en, zh, es, fr, pt, ko","multi","1|K|<|n|<|1|0|K","None","False","False","False","apache-2.0","2406.05343","None","None" "AmazonScience/tydi-as2","AmazonScience","2023-05-16 00:02:00","2023-07-24 17:33:28+00:00","39481652408123c412b400f50b2eef530d2da1e8","191","906","1","None","question-answering, text-retrieval, open-domain-qa","TyDi-AS2 Dataset Summary TyDi-AS2 and Xtr-TyDi-AS2 are multilingual Answer Sentence Selection (AS2) datasets comprising 8 diverse languages, proposed in our paper accepted at ACL 2023 (Findings): Cros","None","bn, en, fi, id, ja, ko, ru, sw","multi","1|0|M|<|n|<|1|0|0|M","None","False","False","False","cdla-permissive-2.0","None","None","open-domain-qa" "imvladikon/QAmeleon","imvladikon","2023-08-13 19:29:03","2023-08-13 19:36:48+00:00","bf35de1f9cc3db3938be5bca8130c492bbd60757","86","896","1","task_categories:question-answering, language:ar, language:bn, language:fi, language:id, language:ko, language:ru, language:sw, language:te, license:cc-by-4.0, size_categories:10K as EOS and BOS tok","None","en, es, ru, de, pl, th, vi, sv, bn, da, he, it, fa, sk, id, nb, el, nl, hu, eu, zh, eo, ja, ca, cs, bg, fi, pt, tr, ro, ar, uk, gl, fr, ko","multi","1|K|<|n|<|1|0|k","None","False","False","False","apache-2.0","2304.07327","None","None" "wisenut-nlp-team/aihub_corpus_expertise","wisenut-nlp-team","2023-03-26 13:14:00","2023-05-23 
13:00:44+00:00","2a384eb7baa60411d13b1e7fe1ef48249162ae3b","71","371","0","task_categories:other, task_categories:token-classification, annotations_creators:no-annotation, language_creators:found, multilinguality:monolingual, source_datasets:original, language:ko, license:cc-by-4.0, size_categories:10M>> from datasets import load_dataset >>> ds = load_dataset(""jaeyong2/persona-inst"", split=""train"") >>> ds Dataset({ features: ['Level', 'English', 'Korean', 'Thai', 'Vietnamese', 'context'","None","ja, ko, th, vi","multi","None","None","False","False","False","cc-by-nc-sa-4.0","None","None","None" "CaterinaLac/sharegpt-deduplicated","CaterinaLac","2023-10-04 13:31:41","2023-10-04 14:40:39+00:00","6a50751e126ba3a05ba002ef8ed6e013baeab09b","83","282","1","None","conversational","Dataset Card for Dataset Name Dataset Description Dataset Summary This dataset is a deduplicated version of sharegpt4. The deduplication process has two steps: The literal duplicates (both input and o","None","en, zh, ko, fr, ja, es, no, et, de, ca, vi, fi","multi","1|K|<|n|<|1|0|K","None","False","False","False","apache-2.0","None","None","None" "neulab/PangeaBench-xchat","neulab","2024-10-31 21:38:23","2024-11-01 15:19:15+00:00","ffc5a12aee02f9de6cddd2b06eb9efa92664e2c2","81","279","0","None","visual-question-answering","None","None","zh, en, hi, id, ja, rw, ko, es","multi","n|<|1|K","None","False","False","False","cc-by-4.0","None","None","None" "hecatonai/Housing_Subscription_QA_Dataset","hecatonai","2024-09-09 01:21:38","2024-09-24 04:56:47+00:00","a17b81f209fc6a35d9898c461c33f2684856b6e7","35","278","0","task_categories:question-answering, language:ko, size_categories:1K train/valid/test dataset of session 4 translation ( English -> Koeran ) GPT-3.5-turbo is used mostly ","None","ko","mono","1|K|<|n|<|1|0|K","None","False","False","False","apache-2.0","None","None","None" "nayohan/korean-hate-speech","nayohan","2024-07-07 08:22:36","2024-07-07 
08:25:42+00:00","3e5b0ae2b1559db61982b47af66367306b67c407","34","223","0","language:ko, license:cc-by-sa-4.0, size_categories:1K>> from datasets import load_dataset >>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"") >>> ds Dataset(","None","ko","mono","None","None","False","False","False","None","None","None","None" "vitus9988/ko_gpt4omini_note_15.4k","vitus9988","2024-10-11 06:37:48","2024-10-29 03:25:08+00:00","b10fb88ec3c9acb9e295d00392014649ea177cba","49","214","0","language:ko, size_categories:10K>> from datasets import load_dataset >>> ds = load_dataset(""SangChan/KCC_Profit_DataSet_v2"", spli","None","ko","mono","None","None","False","False","False","None","None","None","None" "Junnos/luckyvicky","Junnos","2024-07-01 00:13:55","2024-07-08 23:17:55+00:00","8a7192df659d63aff63121afb58d44d39f6a6382","37","212","0","task_categories:text2text-generation, language:ko, license:mit, size_categories:n<1K, format:csv, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, lifestyle","text2text-generation","원영적 사고 데이터셋","None","ko","mono","n|<|1|K","None","False","False","False","mit","None","None","None" "nayohan/Magpie-Phi3-Pro-300K-Filtered-ko","nayohan","2024-08-07 16:31:16","2024-08-07 17:10:47+00:00","7cb4dc2602885486e51f599f09ae6d493da77909","37","212","1","task_categories:text-generation, language:ko, size_categories:100K>> from datasets import load_dataset >>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"") >>> ds Dataset(","None","ko","mono","None","None","False","False","False","None","None","None","None" "CanariaView/GlobalCopperDemandForecastingDataset","CanariaView","2023-12-30 16:17:42","2023-12-30 16:52:21+00:00","c9fbefec90b10fda261ab03804e068d33e72b2b6","36","201","1","task_categories:time-series-forecasting, language:en, language:ko, size_categories:n<1K, format:csv, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, 
mining, LSTM, TimeSeries, CanariaView","time-series-forecasting","CanariaView Global Copper Demand Forecasting Dataset Description This dataset encompasses economic and industrial indicators vital for constructing a copper demand forecasting model. Coverage Period: ","None","en, ko","bi","None","None","False","False","False","None","None","None","None" "4n3mone/mmmlu_kor","4n3mone","2024-09-24 00:15:46","2024-09-24 00:16:48+00:00","b9f82f39118e4f07c04691b259c8adb8f5fc0b02","40","201","2","task_categories:question-answering, language:ko, license:mit, size_categories:10K>> from datasets import load_dataset >>> ds = load_dataset(""beomi/KoAlpaca-v1.1a"", split=""train"") >>> ds Dataset(","None","ko","mono","None","None","False","False","False","None","None","None","None" "opencompass/mmmlu_lite","opencompass","2024-10-08 13:54:09","2024-11-01 08:34:22+00:00","02c08f44d1b6450b439d0415fc7c63bd70891bd8","44","196","2","None","question-answering","MMMLU-Lite Introduction A lite version of the MMMLU dataset, which is an community version of the MMMLU dataset by OpenCompass. 
Due to the large size of the original dataset (about 200k questions), we","None","ar, bn, de, es, fr, hi, id, it, ja, ko, pt, sw, yo, zh","multi","None","None","False","False","False","mit","None","None","None" "jungsoon/ComputerLiteracy","jungsoon","2024-09-25 18:55:56","2024-11-13 19:49:15+00:00","c36bd2e1bfb55dd6e85ce4e1d18b05b3eaf02b87","34","195","0","task_categories:multiple-choice, language:ko, license:apache-2.0, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, Computer Proficiency Test Level 2, computer proficiency Test","multiple-choice","None","None","ko","mono","None","None","False","auto","False","apache-2.0","None","None","None" "nayohan/coedit-ko","nayohan","2024-08-01 16:30:45","2024-08-01 16:36:10+00:00","e57409674b9353056bcd7bcdcacbdb6d6c82f420","49","195","0","task_categories:text-generation, language:ko, size_categories:10K Train > json files were converted into DataFrame form and uploaded without modification. Due to copyright, external release of this data and granting access to third parties are not permitted.
Data description: the Type values in this data are ['pairwise/multi-way comparison', 'ratio calculation', 'clue extraction', 'date extraction'","None","ko","mono","100K<n<1M","None","False","auto","False","cc-by-nc-sa-4.0","None","None","None" "saillab/alpaca_korean_taco","saillab","2024-06-04 00:26:27","2024-09-20 22:08:08+00:00","88b6275ca08cb5bb3bf5a8e977c128f2ff6bf9b1","35","169","1","language:ko, size_categories:10K>> from datasets import load_dataset >>> ds = load_dataset(""onit3772/EvChargerFineTune.1a"", split='train', streami","None","ko","mono","None","None","False","False","False","None","None","None","None" "jin-code/test3","jin-code","2024-02-27 03:56:24","2024-02-27 09:04:55+00:00","e0b5053d97469ea00103586717192a1ad4646625","40","165","0","language:ko, license:cc-by-4.0, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us","None","None","None","ko","mono","n<1K","None","False","False","False","cc-by-4.0","None","None","None" "jr-d-analyst24/ai_hub_narr_sum_vali","jr-d-analyst24","2024-07-26 15:20:12","2024-07-26 15:21:12+00:00","211fdae95be12572f342066042cc20f112e18365","46","165","0","task_categories:summarization, language:ko, license:apache-2.0, size_categories:n<1K, format:csv, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us","summarization","None","None","ko","mono","1K<n<10K","None","False","False","False","apache-2.0","None","None","None" "ifmain/comment-translation","ifmain","2024-10-13 11:30:55","2024-10-13 13:16:36+00:00","ec6ef5ab4c5c7a9070b5fed00c4912df8406cd2f","50","164","0","None","None","This dataset is based on a Kaggle dataset.
This dataset includes translations of 69,000 Reddit comments into 17 languages (English to 16 languages): Belarusian, Czech, German, English, Spanish, Finnish, French,","None","en, de, fr, es, it, sv, fi, pl, cs, lv, zh, ja, ko, ru, uk, be, kk","multi","None","None","False","False","False","apache-2.0","None","None","None" "ziozzang/deepl-trans-IT-KO","ziozzang","2024-04-02 04:27:10","2024-04-02 04:28:14+00:00","66dd09ec269ed087563b1f1d6c782189c59ff0a9","39","162","0","task_categories:translation, language:ko, language:it, size_categories:10K Train > json files were converted into DataFrame form, then preprocessing and answer generation were performed. Answers were generated with reference to the answer field of the raw data; gpt-4o was used for answer generation. Due to copyright, external release of this data","None","ko","mono","1K<n<10K","None","False","auto","False","cc-by-nc-sa-4.0","None","None","None" "youngmon/atlassian-qna","youngmon","2024-10-09 08:17:14","2024-11-13 06:03:56+00:00","ab08bb9f51742f88d0b2dc6120f4e4ac7a7313e6","34","150","1","task_categories:question-answering, language:en, language:ko, language:zh, language:ja, language:es, language:ru, license:mit, size_categories:100K Train > json files were converted into DataFrame form, then preprocessing and answer generation were performed. Answers were generated with reference to the answer field of the raw data; gpt-4o was used for answer generation.
Due to copyright, external release of this data","None","ko","mono","1K<n<10K","None","False","auto","False","cc-by-nc-sa-4.0","None","None","None" "overfit-brothers/KRX-INST","overfit-brothers","2024-12-05 06:22:21","2024-12-11 00:48:31+00:00","3d6a1c5e3dce31a28fb2ff40fa2b9a960fcdfa7b","64","146","0","language:ko, license:cc-by-nc-nd-4.0, size_categories:10K>> from datasets import load_dataset >>> ds = load_dataset(""jaeyong2/ko-persona-cot-inst"", split=""train"") >>> ds Dataset({ features: ['content', 'text'], num_rows: 240000 }) Development Pr","None","ko","mono","None","None","False","False","False","None","None","None","None" "CarrotAI/ko-tree-conversation","CarrotAI","2024-10-30 00:51:53","2024-10-31 03:13:31+00:00","ef0d1ace3795e88f92bba38ea5025f749cd0ab28","45","124","0","task_categories:text-generation, language:ko, license:apache-2.0, size_categories:1Ksystem\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.\\nWhen the user poses a complex problem requiring a logical multi-step reasoning process, the assistant, in Korean","None","ko","mono","100K<n<1M","None","False","False","False","apache-2.0","None","None","None" "BigShort/bok_words_700","BigShort","2024-11-01 10:39:43","2024-11-05 00:11:10+00:00","0966ab021c781986330daecb44eb2aab292cc661","51","118","0","task_categories:text-generation, language:ko, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, finance","text-generation","None","None","ko","mono","n<1K","None","False","auto","False","None","None","None","None" "david9dragon9/shp_translations","david9dragon9","2024-12-29 00:22:04","2024-12-29 01:27:27+00:00","2c3ff5946665f081a6c03fcea4192a36547a4dbe","117","117","0","None","question-answering","This dataset contains translations of three splits (askscience, explainlikeimfive, legaladvice) of the Stanford Human Preference (SHP) dataset, used for training domain-invariant reward models.
The tr","None","en, ko, zh, th","multi","None","None","False","False","False","mit","None","None","None" "global-llm-2024/ko_ifeval","global-llm-2024","2024-11-01 07:52:13","2024-11-01 07:54:26+00:00","ec906775e543f4a8cf03a94c73cf08938a780cce","39","117","0","language:ko, size_categories:n<1K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us","None","Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Inform","None","ko","mono","None","None","False","manual","False","None","None","None","None" "bot-yaya/parallel_corpus_game","bot-yaya","2024-12-09 15:40:17","2024-12-09 16:20:55+00:00","a72e8f23f03f3cc50e83abc5e5e60778ad78ae45","66","117","3","None","translation","https://github.com/mnbvc-parallel-corpus-team/parallel_corpus_mnbvc MNBVC parallel corpus team: game corpora. Updated irregularly; 29 game corpus files collected so far: Baldur's Gate 3, Cyberpunk 2077, Dark Souls III, Detroit: Become Human, Don't Starve, Elden Ring, Genshin Impact, Hades, Hogwarts Legacy, Ib, Like a Dragon 8, Like a Dragon 7 Gaiden, Red Dead Redemption 2, Sekiro: Shadows Die Twice, Civilization VI, Slay the Spire ","None","ar, zh, de, en, eo, es, fr, he, id, it, ja, ko, nl, pt, ru, sv, th, vi, pl, tr","multi","None","None","False","False","False","mit","None","None","None" "devngho/korean-webtext-edu","devngho","2024-11-08 12:30:56","2024-11-08 14:02:21+00:00","9dbc5a9f7fcb08c03c2aa70f08c1d7b470c1ceac","45","117","1","task_categories:text-generation, source_datasets:HAERAE-HUB/KOREAN-WEBTEXT, language:ko, license:mit, size_categories:1M de-identification such as this has been applied. Keep this in mind when using the data. When filtering, de-identified data was not given separate preprocessing","None","ko","mono","None","None","False","auto","False","None","None","None","None" "youjunhyeok/Magpie-Air-DPO-100K-v0.1-ko","youjunhyeok","2024-11-03 12:46:37","2024-11-04 04:59:04+00:00","f65d2102e5fb7544f43e05282058b7d90e6d05de","47","116","0","task_categories:text-generation, language:ko, size_categories:10K