false
# Do what you will with the data. These are old photos of crafts I used to make. Just abide by the license above and you're good to go!
false
# Alexa Answers from [alexaanswers.amazon.com](https://alexaanswers.amazon.com/) The Alexa Answers community helps to improve Alexa's knowledge and answer questions asked by Alexa users. It contains some very quirky and hard questions, like: Q: what percent of the population has black hair A: The most common hair color in the world is black, and it's found in a wide array of backgrounds and ethnicities. About 75 to 85% of the global population has either black hair or the deepest brown shade. Q: what was the world population during world war two A: 2.3 billion However, with unusual questions come unusual answers: Q: what is nascar poem A: Roses are red; Violets are blue; For Blaney's new ride; Switch the 1 and the 2. There's no official NASCAR poem. # Dataset stats The total dataset size is 136,039 examples, split into train/test/validation with a 7-2-1 ratio. The splits are the same as in [alexa-qa-with-rank](https://huggingface.co/datasets/theblackcat102/alexa-qa-with-rank), so no train question in alexa-qa can be found in the validation or test splits of alexa-qa-with-rank. Train : 95,227 Test : 27,208 Validation : 13,604 Do note that similar rephrasings of questions do exist between splits; I will leave that study to others. # Last update 19/02/2023
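The stated counts can be sanity-checked against the 7-2-1 ratio (a trivial sketch; all numbers are taken from the card above):

```python
# Split sizes from the dataset card.
train, test, validation = 95_227, 27_208, 13_604
total = train + test + validation
assert total == 136_039

# Each split should be close to its nominal share of the 7-2-1 ratio.
for count, share in [(train, 0.7), (test, 0.2), (validation, 0.1)]:
    assert abs(count / total - share) < 0.001
```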
false
## To use this dataset for your research, please cite the following preprint. The full paper will be available soon. [Preprint](https://arxiv.org/abs/2212.02842) ### Citation: @article{thambawita2022visem, title={VISEM-Tracking: Human Spermatozoa Tracking Dataset}, author={Thambawita, Vajira and Hicks, Steven A and Stor{\aa}s, Andrea M and Nguyen, Thu and Andersen, Jorunn M and Witczak, Oliwia and Haugen, Trine B and Hammer, Hugo L and Halvorsen, P{\aa}l and Riegler, Michael A}, journal={arXiv preprint arXiv:2212.02842}, year={2022} } ### Motivation and background Manual evaluation of a sperm sample using a microscope is time-consuming and requires costly experts with extensive training. In addition, manual sperm analysis is unreliable due to limited reproducibility and high inter-personnel variation, caused by the complexity of tracking, identifying, and counting sperm in fresh samples. Existing computer-aided sperm analyzer systems do not work well enough for application in a real clinical setting, owing to unreliability caused by the variable consistency of semen samples. Therefore, we need to research new methods for automated sperm analysis. ### Target group The task is of interest to researchers in the areas of machine learning (classification and detection), visual content analysis, and multimodal fusion. Overall, this task is intended to encourage the multimedia community to help improve the healthcare system through the application of their knowledge and methods to reach the next level of computer- and multimedia-assisted diagnosis, detection, and interpretation. ### Class Label Mapping sperm: 0 cluster: 1 small or pinhead: 2
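The class-label mapping above, written as the `id2label`/`label2id` pair one might pass to a detection-model config (a sketch; the names follow the card exactly):

```python
# Class label mapping from the card.
id2label = {0: "sperm", 1: "cluster", 2: "small or pinhead"}
label2id = {name: idx for idx, name in id2label.items()}
```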
false
# OpenSubtitles - Source: https://huggingface.co/datasets/open_subtitles - Num examples: 3,505,276 - Languages: English, Vietnamese (en-vi pairs) ```python from datasets import load_dataset load_dataset("tdtunlp/open_subtitles_envi") ``` - Format for Translation task ```python def preprocess(sample): eng = sample['en'] vie = sample['vi'] return {'text': f'<|startoftext|><|eng|>{eng}<|vie|>{vie}<|endoftext|>'} """ <|startoftext|><|eng|>Previously on "The Blacklist"...<|vie|>Trong phần trước của phim<|endoftext|> """ ```
false
# m1_fine_tuning_ocr_ptrn_cmbert_io ## Introduction This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** using the independent NER layers approach [M1]. It contains Paris trade directory entries from the 19th century. ## Dataset parameters * Approach : M1 * Dataset type : noisy (Pero OCR) * Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) * Tagging format : IO * Counts : * Train : 6084 * Dev : 676 * Test : 1685 * Associated fine-tuned models : * Level 1 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1) * Level 2 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2) ## Entity types Abbreviation|Entity group (level)|Description -|-|- O |1 & 2|Outside of a named entity PER |1|Person or company name ACT |1 & 2|Person or company professional activity TITREH |2|Military or civil distinction DESC |1|Entry full description TITREP |2|Professional reward SPAT |1|Address LOC |2|Street name CARDINAL |2|Street number FT |2|Geographical feature ## How to use this dataset ```python from datasets import load_dataset train_dev_test = load_dataset("nlpso/m1_fine_tuning_ocr_ptrn_cmbert_io") ```
false
# m2m3_qualitative_analysis_ocr_ptrn_cmbert_io ## Introduction This dataset was used to perform a **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the joint labelling [M2] and hierarchical NER [M3] approaches. It contains Paris trade directory entries from the 19th century. ## Dataset parameters * Approach : M2 and M3 * Dataset type : noisy (Pero OCR) * Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) * Tagging format : IO * Counts : * Train : 6084 * Dev : 676 * Test : 1685 * Associated fine-tuned models : * M2 : [nlpso/m2_joint_label_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ocr_ptrn_cmbert_io) * M3 : [nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io) ## Entity types Abbreviation|Entity group (level)|Description -|-|- O |1 & 2|Outside of a named entity PER |1|Person or company name ACT |1 & 2|Person or company professional activity TITREH |2|Military or civil distinction DESC |1|Entry full description TITREP |2|Professional reward SPAT |1|Address LOC |2|Street name CARDINAL |2|Street number FT |2|Geographical feature ## How to use this dataset ```python from datasets import load_dataset train_dev_test = load_dataset("nlpso/m2m3_qualitative_analysis_ocr_ptrn_cmbert_io") ```
false
# MFAQ - Source: https://huggingface.co/datasets/clips/mfaq - Num examples: - 26,494 (train) - 663 (validation) - Language: Vietnamese ```python from datasets import load_dataset load_dataset("tdtunlp/mfag_vi") ``` - Format for QA task ```python def preprocess(sample): question = sample['question'] answer = sample['answer'] return {'text': f'<|startoftext|><|question|>{question}<|answer|>{answer}<|endoftext|>'} """ <|startoftext|><|question|>Bao lâu thì nên gội đầu?<|answer|>Nếu bạn thường xuyên đội tóc giả, nên giặt chúng một lần trong một tháng.<|endoftext|> """ ```
false
# MFAQ - Source: https://huggingface.co/datasets/clips/mfaq - Num examples: - 3,567,659 (train) - 151,825 (validation) - Language: English ```python from datasets import load_dataset load_dataset("tdtunlp/mfaq_en") ``` - Format for QA task ```python def preprocess(sample): question = sample['question'] answer = sample['answer'] return {'text': f'<|startoftext|><|question|>{question}<|answer|>{answer}<|endoftext|>'} """ <|startoftext|><|question|>A registered person is sending semi-cooked food from his manufacturing unit at Gurugram to his branch in Delhi. Is he required to pay any tax?<|answer|>In accordance with the provisions of Section 25(4) of the CGST Act, 2017, branches in different States are considered as distinct persons. Further, as per Schedule I, this constitutes supply made in the course or furtherance of business between distinct persons even if made without consideration. As it is an inter-State supply, the registered person is required to pay IGST.<|endoftext|> """ ```
false
false
## Shaded relief image dataset for geomorphological studies of the Polish postglacial landscape This dataset contains 138 PNG images of shaded relief cut into 128×128 arrays. The area covered by the dataset lies within the two main geomorphological zones in Poland: denuded and non-denuded postglacial landscape. Arrays representing one of the two categories are labeled accordingly. The shaded relief scenes were calculated with the illumination azimuth set to due south (in this case, 180 degrees).
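For reference, a shaded relief array with the light source due south can be computed with a standard Horn-style hillshade formula. This is an illustrative sketch, not the exact procedure used to build this dataset; aspect conventions vary between GIS packages, and the altitude parameter here is an assumption:

```python
import numpy as np

def hillshade(dem: np.ndarray, azimuth_deg: float = 180.0,
              altitude_deg: float = 45.0, cellsize: float = 1.0) -> np.ndarray:
    """Horn-style hillshade; azimuth 180 = light from due south."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dem, cellsize)      # north-south and east-west slopes
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dx, -dy)             # clockwise from north (one common form)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)
```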
false
# Dataset Card for Dataset Name ## Dataset Description - **https://duskfallcrew.carrd.co/:** - **https://discord.gg/Da7s8d3KJ7** ### Dataset Summary A mixture of photography and other goods from Duskfall Crew that has been either curated or taken by Duskfall Crew. Some may or may not be AI generated. This template was generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Languages English mainly, but that's because the data is largely of New Zealand. ### Source Data ### Personal and Sensitive Information No personal data has been included in this data; it is ALL a mixture of AI-generated and personally created photography. If any data turns out not to be what is described, the dataset will be cleaned of any errors. ## Considerations for Using the Data ### Social Impact of Dataset Too much time on my hands. ### Discussion of Biases It's a DSLR, it's a Samsung phone, it's a BIRD, IT'S A... you get my point. There should be no bias other than where I can actually take photos. ### Licensing Information Do not sell this dataset; however, you may use it as you see fit in TEXT TO IMAGE Stable Diffusion models. Your outputs are your own, and the data within is free to be used for AI generation models. ### Citation Information None needed. ### Contributions If you'd like to contribute, please do so!
false
# Do not resell the data. You don't own the data, but you do own your own outputs from your training. See the main license for details.
false
# Negative outputs from various models of Stable Diffusion. Use them as you will to train textual inversions or other things.
false
# Dataset Card for DuskfallCrewArtStyle_Lora ## Dataset Description - **Homepage: https://duskfallcrew.carrd.co/** - **Point of Contact: See the Carrd website for contact info, or DM us on HF** ### Dataset Summary This dataset is the basis for the LoRA that is in this repository. ### Supported Tasks and Leaderboards Text to Image / Stable Diffusion / LoRA ### Languages English ### Source Data ### Personal and Sensitive Information This is based on our own art, and while we're A-OK with you using it, you don't own the art within the dataset, but you may not care to anyways. ## Considerations for Using the Data ### Social Impact of Dataset Shitty art! ### Discussion of Biases It largely has non-binary features; not sure if it has any one specific gender. We have dissociative identity disorder, so largely the faces in here are either alters in our system or other systems we've done art for. ### Other Known Limitations SHITTY ART! ## Additional Information ### Licensing Information While it's under the license listed, we do ask that you don't resell the dataset. You're responsible for your use of the dataset, and the faces within it. Your outputs are up to you. ### Citation Information If you use the dataset, citation is nice, but it'd be even nicer if you bought us coffee! https://ko-fi.com/DUSKFALLcrew
false
<h2>Dataset to make the galactic-diffusion</h2> <h5>num: 133</h5> <h5>source: <b><i>Entergalactic</i></b> on Netflix</h5> <h5>including: male, female, male and female, indoor scene, outdoor scene</h5>
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
# xlsum - Source: https://huggingface.co/datasets/GEM/xlsum - Num examples: - 32,108 (train) - 4,013 (validation) - 4,013 (test) - Language: Vietnamese ```python from datasets import load_dataset load_dataset("tdtunlp/xlsum_vi") ``` - Format for Summarization task ```python def preprocess(sample): title = sample['title'] summary = sample['target'] article = sample['text'] return {'text': f'<|startoftext|><|title|>{title}<|article|>{article}<|summary|>{summary}<|endoftext|>'} """ <|startoftext|><|title|>Việt Nam đã sẵn sàng nâng tầm đối tác chiến lược với Mỹ?<|article|>Ông Donald Trump và ông Nguyễn Phú Trọng bắt tay trước thềm Thượng đỉnh Trump-Kim ở Hà Nội hôm 27/2/2019 Vài tháng qua đã có nhiều thảo luận về khả năng Hoa Kỳ-Việt Nam nâng tầm mối quan hệ từ "đối tác toàn diện" lên thành "đối tác chiến lược". Dưới đây là một số nhận định tiêu biểu. Một số quan ngại Theo ông Prashanth Parameswaran, tác giả bài viết hôm 12/9 trên The Diplomat, mối quan hệ Việt Nam - Hoa Kỳ đã tốt hơn nhiều so với thời chiến tranh Việt Nam. Hai nước bình thường hóa quan hệ dưới thời cựu Tổng thống Mỹ Bill Clinton và tiếp tục duy trì tốt dưới thời Obama. Việc nâng tầm quan hệ Mỹ-Việt có ý nghĩa lớn với các nhà hoạch định chính sách cả hai nước. Nó phản ánh nỗ lực của Washington trong việc mở rộng mạng lưới các đồng minh và đối tác tại châu Á - Thái Bình Dương và tầm quan trọng của Việt Nam trong mạng lưới này, đồng thời nhấn mạnh cơ hội và thách thức mà Hà Nội phải cân nhắc. TQ 'không vui' với chuyến thăm VN của USS Carl Vinson? David Hutt: 'Mục tiêu thương chiến kế tiếp của Trump là VN' USS Carl Vinson tới Đà Nẵng: 'Bước đi chiến lược' Việc Mỹ-Việt nâng tầm quan hệ có thể có ý nghĩa lớn hơn là bản thân mối quan hệ này, đặc biệt trong bối cảnh Mỹ - Trung Quốc tăng cường cạnh tranh về quyền lực trong khu vực châu Á-Thái Bình Dương và vai trò của Việt Nam trong các vấn đề như Biển Đông - nơi mà Trung Quốc ngày càng lấn lướt và Hà Nội chịu áp lực ngày càng lớn. 
Mỹ gần đây đã tăng cường các hoạt động hợp tác với Việt Nam. Năm 2018, Mỹ mang hàng không mẫu hạm USS Carl Vinson tới Việt Nam. Năm nay, Chủ tịch nước, Tổng bí thư Nguyễn Phú Trọng cũng dự kiến có chuyến công du Mỹ vào tháng 10/2019. Tuy nhiên, thực tế là Việt Nam và Mỹ vẫn có nhiều khác biệt trong nhiều lĩnh vực, từ thể chế tới quan điểm về nhân quyền. Việt Nam và Mỹ cũng có khác biệt trong quan điểm đối với vấn đề thương mại hoặc vấn đề Bắc Hàn - điều khiến quan hệ hai nước từng có vẻ khó 'toàn diện', chứ chưa nói đến 'chiến lược'. Chính vì thế, các cuộc thảo luận để nâng tầm mối quan hệ Mỹ - Việt cũng bao gồm cả các quan ngại, ông Prashanth Parameswaran bình luận. Các nhà hoạch định chính sách sẽ cần cân nhắc các yếu tố quan trọng này để tính toán được mất khi nâng tầm mối quan hệ. Chẳng hạn, không phải ngẫu nhiên mà chúng ta đã thấy Việt Nam trì hoãn một số hoạt động liên quan đến quốc phòng với Hoa Kỳ bất chấp những lợi ích có thể thấy rõ, vẫn theo tác giả Prashanth Parameswaran. Các quan ngại nói trên không có nghĩa Việt Nam - Hoa Kỳ không mong muốn hoặc không thể nâng tầng hợp tác. Nhưng nó có nghĩa rằng cả Mỹ và Việt Nam cần đảm bảo rằng các vấn đề thực tế giữa hai nước phù hợp với bất cứ tầm mức quan hệ nào mà họ lựa chọn. Quan trọng nữa là, việc điều chỉnh tên gọi của mối quan hệ chỉ có giá trị khi cả hai bên cùng cam kết nỗ lực để biến tiềm năng hợp tác thành sự hợp tác trên thực tế. Mỹ gửi tín hiệu 'hỗn hợp' Nhà báo David Hutt, cũng về đề tài này, trên Asia Times lại cho rằng Mỹ gửi những tín hiệu không thống nhất đến Việt Nam, nói năm nay, Mỹ lên tiếng cáo buộc Trung Quốc có hành động 'bắt nạt' nước láng giềng Việt Nam. 
Mỹ cũng ngỏ ý "muốn củng cố mối quan hệ quân sự chặt chẽ hơn với Hà Nội, mặc dù Việt Nam vẫn tỏ ra thận trọng và vẫn duy trì các chính sách ngoại giao không cam kết," David Hutt cũng nhắc tới tin đồn gần đây rằng công ty dầu khí Mỹ ExxonMobil đang tìm cách rút dự án Cá Voi Xanh trị giá hàng tỷ đô la khỏi Việt Nam, và bình luận rằng: Nếu thực sự ExxonMobil rút - cứ cho là vì lý do tài chính chứ không phải địa chính trị - thì đây cũng là một cú nốc ao vào mối quan hệ Mỹ-Việt ở giai đoạn mang tính bước ngoặt. Hơn bao giờ hết, Hà Nội hiện đang tìm kiếm các cam kết từ Washington rằng họ sẽ đứng về phía mình trong bất kỳ cuộc xung đột có vũ trang nào với Trung Quốc trên Biển Đông. Thương mại: Ông Donald Trump đe dọa Việt Nam Kỳ vọng gì nếu chủ tịch Trọng thăm Hoa Kỳ? Tập trận Mỹ-ASEAN: 'Mỹ sẽ không đứng yên nếu TQ tiếp tục ép VN' Mỹ, tuy thế, đang gửi tín hiệu 'hỗn hợp'. Quan hệ Việt Nam - Hoa Kỳ nảy nở dưới thời Tổng thống Mỹ Donald Trump. Ông Trump đã hai lần đến thăm Việt Nam và hiếm khi chỉ trích điều gì về đất nước được coi là vi phạm nhân quyền tồi tệ nhất Đông Nam Á này, vẫn theo David Hutt. Nhưng ông Trump, bên cạnh đó, lại cũng rất phiền lòng với việc Việt Nam trở thành nơi sản xuất, xuất khẩu các mặt hàng Trung Quốc nằm trong diện bị Mỹ đánh thuế, để trốn thuế. Ông Trump, hồi tháng Sáu đã gọi Việt Nam là nước 'lạm dụng tồi tệ nhất' trong một cuộc phỏng vấn truyền hình. Tuy nhiên, chính quyền của ông Trung cũng lại phản ứng quyết liệt khi Trung Quốc mang tàu vào khu đặc quyền kinh tế của Việt Nam tại Bãi Tư Chính trên Biển Đông. Người phát ngôn Bộ Ngoại giao Mỹ Morgan Ortagus nói Trung Quốc đã thực hiện một loạt các động thái hung hăng để can thiệp các hoạt động kinh tế lâu đời của Việt Nam. "Việt-Mỹ đã hợp tác chiến lược nhiều mặt, trừ tên gọi" Trong khi đó, tác giả Đoàn Xuân Lộc viết trên Asia Times, một yếu tố quan trọng của chính sách đối ngoại của Hà Nội là không liên minh. 
Để giúp đất nước tăng cường quan hệ ngoại giao, kinh tế và an ninh với các đối tác liên quan, chính phủ Việt Nam, do đó, đã tìm cách xây dựng một mạng lưới quan hệ đối tác. "Quan hệ đối tác toàn diện" là nấc thấp nhất trong mạng lưới này. Việt Nam và Hoa Kỳ thiết lập "quan hệ đối tác toàn diện" tháng 7/2013. Như vậy, Việt Nam đứng sau Philippines, Thái Lan, Indonesia và Singapore - các đối tác chiến lược của Hoa Kỳ trong khu vực - về tầm quan trọng đối với Washington. Trong khi đó, Việt Nam đã nâng tầm "quan hệ chiến lược" với 16 nước gồm Nga (2001), Nhật Bản (2006), Ấn Độ (2007), Trung Quốc (2008), Hàn Quốc và Tây Ban Nha (2009), Vương quốc Anh (2010), Đức (2011), Pháp, Indonesia, Ý, Singapore và Thái Lan ( 2013), Malaysia và Philippines (2015) và Úc (2017). Trong ngôn ngữ ngoại giao của Hà Nội, tất nhiên, Trung Quốc là đối tác quan trọng nhất của Việt Nam, trong khi Mỹ là một trong những quốc gia ít quan trọng nhất. Trên giấy tờ, mối "quan hệ đối tác toàn diện" của Việt Nam với Mỹ - nền kinh tế và quân sự lớn nhất thế giới - thậm chí còn xếp sau quan hệ "đối tác toàn diện" của Việt Nam với Myanmar - được thiết lập năm 2017. Nhưng trên thực tế, Mỹ là đối tác quan trọng thứ hai của Việt Nam. Ở nhiều khía cạnh, Mỹ cũng quan trọng không kém Trung Quốc. Và Hà Nội hiểu rằng có một mối quan hệ khỏe mạnh với Mỹ mang tính sống còn với đất nước, giúp ổn định sự phát triển và tránh quá phụ thuộc vào Trung Quốc về kinh tế, ông Đoàn Xuân Lộc nhận định. Hiện nay, sự hung hăng của Trung Quốc ở Biển Đông là một trong các yếu tố chính để Việt Nam tìm cách thắt chặt quan hệ với Mỹ, đặc biệt trong an ninh quốc phòng. Nhìn chung, mặc dù vẫn có những khác biệt nhất định, đặc biệt là về các quyền tự do chính trị và nhân quyền, lợi ích chiến lược của Hoa Kỳ và Việt Nam ngày càng phù hợp với nhau. 
Đối Việt Nam, mối quan hệ với Mỹ hiện tại về cơ bản là chiến lược trong nhiều lĩnh vực quan trọng, như an ninh và quốc phòng, mặc dù về tên gọi nó mới chỉ là "quan hệ đối tác toàn diện", vẫn theo tác giả Đoàn Xuân Lộc. <|summary|>Ý kiến về khả năng Việt-Mỹ trở thành đối tác chiến lược khi hai nước có nhiều khác biệt về thể chế chính trị và nhân quyền.<|endoftext|> """ ```
false
This is an image dataset for object detection of wildlife in the mixed coniferous broad-leaved forest. A total of 25,657 images in this dataset were generated from video clips taken by infrared cameras in the Northeast Tiger and Leopard National Park, including 17 main species (15 wild animals and 2 major domestic animals): Amur tiger, Amur leopard, wild boar, roe deer, sika deer, Asian black bear, red fox, Asian badger, raccoon dog, musk deer, Siberian weasel, sable, yellow-throated marten, leopard cat, Manchurian hare, cow, and dog. All images were labeled in Pascal VOC format. The image resolution is 1280 × 720 or 1600 × 1200 pixels.
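Pascal VOC annotations are per-image XML files; a minimal parsing sketch using only the standard library (tag names follow the VOC format; the sample species name in the usage below is illustrative):

```python
import xml.etree.ElementTree as ET

def parse_voc(xml_text: str):
    """Parse one Pascal VOC annotation into (filename, [(class, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return root.findtext("filename"), boxes
```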
true
# Dataset Card for "turkishSMS-ds" The dataset was utilized in the following study and consists of Turkish spam and legitimate SMS messages. Uysal, A. K., Gunal, S., Ergin, S., & Gunal, E. S. (2013). The impact of feature extraction and selection on SMS spam filtering. Elektronika ir Elektrotechnika, 19(5), 67-72. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Alloprof dataset This is the dataset referred to in our paper: Alloprof: a new French question-answer education dataset and its use in an information retrieval case study (https://arxiv.org/abs/2302.07738) This dataset was provided by [AlloProf](https://www.alloprof.qc.ca/), an organisation in Quebec, Canada, offering resources and a help forum, curated by a large number of teachers, to students on all subjects taught in primary and secondary school. Raw data on questions is available in the following files: - `data/questions/categories.json`: subjects and their corresponding id - `data/questions/comments.json`: explanation (answer) data - `data/questions/discussions.json`: question data - `data/questions/grades.json`: grades and their corresponding id - `data/questions/roles.json`: information about the user type for each user id Raw data on reference pages is available in the following files: - `data/pages/page-content-en.json`: data for the reference pages in English - `data/pages/page-content-fr.json`: data for the reference pages in French The data can be parsed and structured using the script `scripts/parse_data.py` to create the file `data/alloprof.csv` with the following columns: - `id` (str) : Id of the document - `url` (str) : URL of the document - `text` (str) : Parsed text of the document - `language` (str) : Either "fr" or "en", the language of the document - `user` (int) : Id corresponding to the user who asked the question - `images` (str) : ";"-separated list of URLs of images contained in the document - `relevant` (str) : ";"-separated list of document ids appearing as links in the explanation to that document.
For files, this will always be empty as there are no corresponding explanations - `is_query` (bool) : If this document is a question - `subject` (str) : ";"-separated list of school subjects the document is related to - `grade` (str) : ";"-separated list of school grade levels the document is related to - `possible` (str) : ";"-separated list of possible document ids this document may refer to. This list corresponds to every document of the same subject and grade. For files, this will always be empty to speed up reading and writing The `possible` column depends on arguments passed to the script to add related subjects, and lower and higher grade levels, to the possible documents (see paper). Also note that the provided `alloprof.csv` file is stored with Git LFS and can be pulled with `git lfs install && git lfs pull`. For images, a script to download them is available as `scripts/download_images.py`. If you have any questions, don't hesitate to mail us at antoine.lefebvre-brossard@mila.quebec. **Please cite our work as:** ``` @misc{lef23, doi = {10.48550/ARXIV.2302.07738}, url = {https://arxiv.org/abs/2302.07738}, author = {Lefebvre-Brossard, Antoine and Gazaille, Stephane and Desmarais, Michel C.}, keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences}, title = {Alloprof: a new French question-answer education dataset and its use in an information retrieval case study}, publisher = {arXiv}, year = {2023}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
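The ";"-separated list columns described above (`images`, `relevant`, `subject`, `grade`, `possible`) can be expanded into Python lists when reading `data/alloprof.csv`; a minimal sketch with the standard `csv` module (the column names come from the card, the reader function is our own):

```python
import csv

# Columns documented as ";"-separated lists in the dataset card.
LIST_COLUMNS = {"images", "relevant", "subject", "grade", "possible"}

def read_alloprof(path: str):
    """Yield rows from alloprof.csv with the ";"-separated columns split into lists."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for col in LIST_COLUMNS & row.keys():
                row[col] = row[col].split(";") if row[col] else []
            yield row
```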
false
# Dataset Card for "REDDIT_comments" ## Dataset Description - **Homepage:** - **Paper: https://arxiv.org/abs/2001.08435** ### Dataset Summary Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (from 2006 to Jan 2023). ### Supported Tasks These comments can be used for text generation and language modeling, as well as dialogue modeling. ## Dataset Structure ### Data Splits Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming" ## Dataset Creation ### Curation Rationale All the information fields have been cast to string, as their format changes over time from one dump to the next. Only a reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received". ### Source Data The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals to extract and keep all its data. #### Initial Data Collection and Normalization See the paper.
#### Who are the source language producers? Redditors are mostly young (65% below 30), male (70%), and American (50% of the site). ### Personal and Sensitive Information The data contains Redditors' usernames associated with their content. ## Considerations for Using the Data This dataset should be anonymized before any processing. Though the subreddits selected are considered to be of higher quality, they can still reflect what you can find on the internet in terms of expressions of bias and toxicity. ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
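As the card notes, the dataset should be anonymized before processing. One possible approach (a sketch, not an official procedure; the key names follow the kept-keys list above, and the salted-hash scheme is our own choice) is to drop the identifying fields and replace the author with an opaque hash:

```python
import hashlib

# Keys from the kept-keys list that identify users directly.
IDENTIFYING = {"author", "author_fullname", "permalink"}

def anonymize(comment: dict, salt: str = "change-me") -> dict:
    """Drop identifying keys; keep a salted author hash so threads remain linkable."""
    out = {k: v for k, v in comment.items() if k not in IDENTIFYING}
    out["author_hash"] = hashlib.sha256(
        (salt + comment["author"]).encode("utf-8")).hexdigest()[:16]
    return out
```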
true
# Dataset Card for Fandom23K *The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.* ## Dataset Description - **Homepage:** (TODO) https://docs.ryokoai.com/docs/training/dataset#Fandom22K - **Repository:** <https://github.com/RyokoAI/BigKnow2022> - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com> ### Dataset Summary Fandom23K is a dataset composed of 15,616,749 articles scraped from approximately 23,665 Fandom.com wikis between March 14 and March 18, 2023. It is a subset of the upcoming BigKnow2022 dataset. ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. * text-classification ### Languages * English * Potentially other languages in much smaller quantities. ## Dataset Structure ### Data Instances ```json { "tag": "fandom.wikia2011", "text": "# Add Your Wiki's Highlights\n\nWrite the text of your article here!-_-\n\n", "title": "Add Your Wiki's Highlights" } { "tag": "fandom.wikia2011", "text": "# Add Your Wiki's Highlights!\n\nWikia wants to hear from you! What significant milestones did your wiki experience in 2011? What cool things did the community try out?\nCreate a page for the wiki you're most active on! Be sure to add it to the Entertainment, Gaming, or Lifestyle categories so it shows up in the right place!\n\n", "title": "Add Your Wiki's Highlights!" } { "tag": "fandom.wikia2011", "text": "# Assassins Creed Wiki 2011\n\nIn 2011, Assassin's Creed Wiki tested new Wikia features such as Message Wall, Chat, and New Layouts.\n\n", "title": "Assassins Creed Wiki 2011" } ``` ### Data Fields * **text**: the actual article text * **title**: the article title * **tag**: text source tag, in the following format: `fandom.<wiki name>` ### Data Splits No splitting of the data was performed. 
## Dataset Creation ### Curation Rationale Fandom23K provides an up-to-date corpus containing pop culture and media information spanning a variety of interests and hobbies. Previous datasets containing such information are either part of a large and harder-to-handle whole, such as Common Crawl, do not provide enough variety, or are simply outdated. ### Source Data #### Initial Data Collection and Normalization *More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.* First, a list of active Fandom wikis was gathered into a text file. Active is defined as "having at least 250 images on the wiki." This list was gathered in early January 2023, despite the actual wiki content being more recent. Second, the `scrape_fandom.py` script was used to generate and download an up to date dump for each of the wikis. Third, `wikiextractor` was used to process these dumps into single XML files containing each article stripped of all formatting besides links. Fourth, `dump2jsonl` was used to convert the XML files into JSONL files with an article per line. Light markdown formatting was applied, converting the HTML links to markdown-formatted links, and automatically making the article's title a header. Finally, the JSONL files were concatenated into the Fandom23K dataset. The version uploaded to this repository, however, is split into multiple files, numbered 00 through 04 inclusive. #### Who are the source language producers? The contributors of each wiki. ### Annotations #### Annotation process Wiki names and article titles were collected alongside the article text. Other than that automated process, no annotation was performed. #### Who are the annotators? There were no human annotators. ### Personal and Sensitive Information The dataset was collected from public wiki data. As a result, we do not believe it should contain any PII and did not inspect it further. 
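The `dump2jsonl` step described above produces one article per line in the format shown in the Data Instances section (title promoted to a markdown header, tag of the form `fandom.<wiki name>`); a sketch of building one such record, assuming that same field layout:

```python
import json

def to_record(wiki: str, title: str, text: str) -> str:
    """One JSONL line in the Fandom23K format: tag, text with a markdown header, title."""
    return json.dumps({
        "tag": f"fandom.{wiki}",
        "text": f"# {title}\n\n{text}\n",
        "title": title,
    })
```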
## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content requiring knowledge of popular culture or a particular niche.

### Discussion of Biases

This dataset contains text from random Internet users and generally should not be used as an authoritative source of information. Additionally, this dataset was not filtered at all. We recommend its usage for research purposes only.

### Other Known Limitations

This dataset is based on a list of active wikis from January 2023, even though the actual wiki content may be more recent. Additionally, smaller yet still active wikis may have been excluded.

## Additional Information

### Dataset Curators

Ronsor Labs

### Licensing Information

CC-BY-SA 3.0, except for any portions which state otherwise.

### Citation Information

```
@misc{ryokoai2023-bigknow2022,
  title = {BigKnow2022: Bringing Language Models Up to Speed},
  author = {Ronsor},
  year = {2023},
  howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```

### Contributions

Thanks to @ronsor for gathering this dataset.
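As a small usage sketch (the helper name is ours, not part of the dataset), the `tag` field described under Data Fields can be split back into its wiki name:

```python
def wiki_name(tag: str) -> str:
    # Tags follow the "fandom.<wiki name>" pattern described in Data Fields.
    prefix, _, name = tag.partition(".")
    if prefix != "fandom":
        raise ValueError(f"unexpected tag: {tag!r}")
    return name

wiki_name("fandom.wikia2011")  # → "wikia2011"
```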
false
## Description

This is a cleaned version of the Portuguese (PtBR) section of the AllenAI mC4 dataset. The original dataset can be found here https://huggingface.co/datasets/allenai/c4

## Cleaning procedure

We applied the same cleaning procedure as explained here: https://gitlab.com/yhavinga/c4nlpreproc.git

The repository offers two strategies. The first one, found in the main.py file, uses pyspark to create a dataframe that can both clean the text and create a pseudo mix on the entire dataset. We found this strategy clever, but it is time- and resource-consuming. To overcome this, we took the second approach, which leverages the singlefile.py script together with GNU parallel. We did the following:

```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs pull --include "multilingual/c4-pt.*.json.gz"

ls c4-pt* | parallel --gnu --jobs 96 --progress python ~/c4nlpreproc/singlefile.py {}
```

Be advised that you should install GNU parallel first if you want to reproduce this dataset, or to create one in a different language.

## Dataset Structure

We kept the same structure as the original, so it is like this:

```
{
  'timestamp': '2020-02-22T22:24:31Z',
  'url': 'https://url here',
  'text': 'the content'
}
```

## Considerations for Using the Data

We did not perform any procedure to remove bad words, vulgarity, or profanity. It must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
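For completeness, here is a minimal sketch of reading one of the shards, assuming they remain gzip-compressed JSONL (one record per line, with the fields shown above, as in the original c4 layout); the helper name is ours:

```python
import gzip
import json

def iter_records(path):
    # Stream records from a gzip-compressed JSONL shard, one JSON object per line.
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

# Each record carries the documented fields:
# record["timestamp"], record["url"], record["text"]
```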
false
Dataset generated from HKR train set using Stackmix =================================================== Number of images: 300000 Sources: * [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset) * [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
false
# SwissNER A multilingual test set for named entity recognition (NER) on Swiss news articles. ## Description SwissNER is a dataset for named entity recognition based on manually annotated news articles in Swiss Standard German, French, Italian, and Romansh Grischun. We have manually annotated a selection of articles that have been published in February 2023 in the categories "Switzerland" or "Regional" on the following online news portals: - Swiss Standard German: [srf.ch](https://www.srf.ch/) - French: [rts.ch](https://www.rts.ch/) - Italian: [rsi.ch](https://www.rsi.ch/) - Romansh Grischun: [rtr.ch](https://www.rtr.ch/) For each article we extracted the first two paragraphs after the lead paragraph. We followed the guidelines of the CoNLL-2002 and 2003 shared tasks and annotated the names of persons, organizations, locations and miscellaneous entities. The annotation was performed by a single annotator. ## License - Text paragraphs: © Swiss Broadcasting Corporation (SRG SSR) - Annotations: Attribution 4.0 International (CC BY 4.0) ## Statistics | | DE | FR | IT | RM | Total | |----------------------|-----:|------:|------:|------:|------:| | Number of paragraphs | 200 | 200 | 200 | 200 | 800 | | Number of tokens | 9498 | 11434 | 12423 | 13356 | 46711 | | Number of entities | 479 | 475 | 556 | 591 | 2101 | | – `PER` | 104 | 92 | 93 | 118 | 407 | | – `ORG` | 193 | 216 | 266 | 227 | 902 | | – `LOC` | 182 | 167 | 197 | 246 | 792 | | – `MISC` | 113 | 79 | 88 | 39 | 319 | ## Citation ```bibtex @article{vamvas-etal-2023-swissbert, title={Swiss{BERT}: The Multilingual Language Model for Switzerland}, author={Jannis Vamvas and Johannes Gra\"en and Rico Sennrich}, year={2023}, eprint={2303.13310}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2303.13310} } ```
true
# Dataset Card for unarXive IMRaD classification ## Dataset Description * **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive) * **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957) ### Dataset Summary The unarXive IMRaD classification dataset contains 530k paragraphs from computer science papers and the IMRaD section they originate from. The paragraphs are derived from [unarXive](https://github.com/IllDepence/unarXive). The dataset can be used as follows. ``` from datasets import load_dataset imrad_data = load_dataset('saier/unarXive_imrad_clf') imrad_data = imrad_data.class_encode_column('label') # assign target label column imrad_data = imrad_data.remove_columns('_id') # remove sample ID column ``` ## Dataset Structure ### Data Instances Each data instance contains the paragraph’s text as well as one of the labels ('i', 'm', 'r', 'd', 'w' — for Introduction, Methods, Results, Discussion and Related Work). An example is shown below. ``` {'_id': '789f68e7-a1cc-4072-b07d-ecffc3e7ca38', 'label': 'm', 'text': 'To link the mentions encoded by BERT to the KGE entities, we define ' 'an entity linking loss as cross-entropy between self-supervised ' 'entity labels and similarities obtained from the linker in KGE ' 'space:\n' '\\(\\mathcal {L}_{EL}=\\sum -\\log \\dfrac{\\exp (h_m^{proj}\\cdot ' '\\textbf {e})}{\\sum _{\\textbf {e}_j\\in \\mathcal {E}} \\exp ' '(h_m^{proj}\\cdot \\textbf {e}_j)}\\) \n'} ``` ### Data Splits The data is split into training, development, and testing data as follows. * Training: 520,053 instances * Development: 5000 instances * Testing: 5001 instances ## Dataset Creation ### Source Data The paragraph texts are extracted from the data set [unarXive](https://github.com/IllDepence/unarXive). #### Who are the source language producers? The paragraphs were written by the authors of the arXiv papers. 
Author and text licensing information for all samples can be found in the file `license_info.jsonl`. An example is shown below.

```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
 'license': 'http://creativecommons.org/licenses/by/4.0/',
 'paper_arxiv_id': '2011.09852',
 'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
                '18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
                '0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
                'd85e46cf-b11d-49b6-801b-089aa2dd037d',
                '92915cea-17ab-4a98-aad2-417f6cdd53d2',
                'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
                '4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
                '59003494-096f-4a7c-ad65-342b74eed561',
                '6a99b3f5-217e-4d3d-a770-693483ef8670']}
```

### Annotations

Class labels were automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/utility_scripts/ml_tasks_prep_data.py)).

## Considerations for Using the Data

### Discussion and Biases

Because only paragraphs unambiguously assignable to one of the IMRaD classes were used, a certain selection bias is to be expected in the data.

### Other Known Limitations

Depending on authors’ writing styles as well as LaTeX processing quirks, paragraphs can vary significantly in length.

## Additional Information

### Licensing information

The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.

### Citation Information

```
@inproceedings{Saier2023unarXive,
  author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
  title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
  booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
  year = {2023},
  series = {JCDL '23}
}
```
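Since each record in `license_info.jsonl` groups several sample IDs under one license, a per-sample lookup table is handy. A minimal sketch (the function name is ours):

```python
import json

def license_index(path):
    # Build a sample_id -> license mapping from license_info.jsonl,
    # which stores one JSON object per line with a `sample_ids` list.
    index = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            for sample_id in record["sample_ids"]:
                index[sample_id] = record["license"]
    return index
```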
false
# LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery

My project based on this dataset can be found on GitHub: https://github.com/MortenTabaka/Semantic-segmentation-of-LandCover.ai-dataset

The dataset used in this project is the [Landcover.ai Dataset](https://landcover.ai.linuxpolska.com/), which was originally published with the [LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery paper](https://arxiv.org/abs/2005.02264), also accessible on [PapersWithCode](https://paperswithcode.com/paper/landcover-ai-dataset-for-automatic-mapping-of).

**Please note that I am not the author or owner of this dataset, and I am using it under the terms of the license specified by the original author. All credits for the dataset go to the original author and contributors.**

License: cc-by-nc-sa-4.0
true
# Mathematics StackExchange Dataset

This dataset contains questions and answers from Mathematics StackExchange (math.stackexchange.com). The data was collected using the Stack Exchange API. A total of 465,295 questions were collected.

## Data Format

The dataset is provided in JSON Lines format, with one JSON object per line. Each object contains the following fields:

- `id`: the unique ID of the question
- `asked_at`: the timestamp when the question was asked
- `author_name`: the name of the author who asked the question
- `author_rep`: the reputation of the author who asked the question
- `score`: the score of the question
- `title`: the title of the question
- `tags`: a list of tags associated with the question
- `body`: the body of the question
- `comments`: a list of comments on the question, where each comment is represented as a dictionary with the following fields:
  - `id`: the unique ID of the comment
  - `body`: the body of the comment
  - `at`: the timestamp when the comment was posted
  - `score`: the score of the comment
  - `author`: the name of the author who posted the comment
  - `author_rep`: the reputation of the author who posted the comment
- `answers`: a list of answers to the question, where each answer is represented as a dictionary with the following fields:
  - `id`: the unique ID of the answer
  - `body`: the body of the answer
  - `score`: the score of the answer
  - `ts`: the timestamp when the answer was posted
  - `author`: the name of the author who posted the answer
  - `author_rep`: the reputation of the author who posted the answer
  - `accepted`: whether the answer has been accepted
  - `comments`: a list of comments on the answer, where each comment is represented as a dictionary with the following fields:
    - `id`: the unique ID of the comment
    - `body`: the body of the comment
    - `at`: the timestamp when the comment was posted
    - `score`: the score of the comment
    - `author`: the name of the author who posted the comment
    - `author_rep`: the reputation of the author who
posted the comment

## Preprocessing

No preprocessing was done; this dataset contains raw, unfiltered data. There may also be issues with redundant line breaks or spacing.

## License

This dataset is released under the [WTFPL](http://www.wtfpl.net/txt/copying/) license.

## Contact

For any questions or comments about the dataset, please contact nurik040404@gmail.com.
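The JSON Lines layout above can be consumed with nothing beyond the standard library. A minimal sketch (function names are ours) that loads the questions and, as an example, pulls out accepted answers:

```python
import json

def load_questions(path):
    # One JSON object (question plus nested comments/answers) per line.
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

def accepted_answers(question):
    # Each answer carries an `accepted` flag, per the field list above.
    return [a for a in question.get("answers", []) if a.get("accepted")]
```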
true
# Dataset Validated from https://huggingface.co/spaces/dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es https://huggingface.co/spaces/dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es
false
This dataset splits the original [Self-instruct dataset](https://huggingface.co/datasets/yizhongw/self_instruct) into training (90%) and test (10%).
false
# Dataset Card for AdvertiseGen

- **formal url:** https://www.luge.ai/#/luge/dataDetail?id=9

## Dataset Description

AdvertiseGen is a dataset for e-commerce advertising copy generation.

AdvertiseGen is built on the correspondence between the tags of product web pages and their advertising copy. It is a typical open-ended generation task; when a model generates open-ended copy from key-value input, factual consistency with the input information deserves particular attention.

- Task description: given the keyword and attribute list (kv-list) of a product, generate advertising copy (adv) suitable for that product;
- Data size: 114k training examples, 1k validation examples, 3k test examples;
- Data source: the CoAI group at Tsinghua University;

### Supported Tasks and Leaderboards

The dataset is designed for generating e-commerce advertising copy.

### Languages

The data in AdvertiseGen is in Chinese.

## Dataset Structure

### Data Instances

An example of "train" looks as follows:

```json
{
  "content": "类型#上衣*材质#牛仔布*颜色#白色*风格#简约*图案#刺绣*衣样式#外套*衣款式#破洞",
  "summary": "简约而不简单的牛仔外套,白色的衣身十分百搭。衣身多处有做旧破洞设计,打破单调乏味,增加一丝造型看点。衣身后背处有趣味刺绣装饰,丰富层次感,彰显别样时尚。"
}
```

### Citation Information

If you use this dataset in an academic paper, please cite it as follows:

```
Shao, Zhihong, et al. "Long and Diverse Text Generation with Planning-based Hierarchical Variational Model." Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.
```
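The `content` field flattens the kv-list into a single string: pairs are separated by `*` and each pair is written as `key#value`. A minimal parsing sketch (the function name is ours):

```python
def parse_kv_list(content: str) -> list:
    # Split "k1#v1*k2#v2*..." into (key, value) pairs.
    return [tuple(pair.split("#", 1)) for pair in content.split("*")]

parse_kv_list("类型#上衣*材质#牛仔布*颜色#白色")
# → [('类型', '上衣'), ('材质', '牛仔布'), ('颜色', '白色')]
```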
false
# Dataset Card for "petfinder-dogs" ## Dataset Description - **Homepage:** https://www.petfinder.com/ - **Paper:** N.A. - **Leaderboard:** N.A. - **Point of Contact:** N.A. ### Dataset Summary Contains 700k+ 300px-wide images of 150k+ distinct dogs extracted from the PetFinder API in March 2023. Only those having at least 4 photos are present: Each subject has between 4 and 12 photos. This dataset aims to simplify AI work based on dogs' images and avoid rescraping thousands of them from the PetFinder API again and again.
true
# Dataset Validated from https://huggingface.co/spaces/dariolopez/argilla-elena-reddit-c-ssrs-suicide-dataset-es https://dariolopez-argilla-elena-reddit-c-ssrs-suic-00dc6af.hf.space
false
# DailyDialog - Source: https://huggingface.co/datasets/daily_dialog - Num examples: - 11,118 (train) - 1,000 (validation) - 1,000 (test) - Language: English ```python from datasets import load_dataset load_dataset("tdtunlp/daily_dialog_en") ```
false
## Introduction

* We build a large-scale dataset called the Theme and Aesthetics Dataset with 66K images (TAD66K), which is specifically designed for image aesthetics assessment (IAA). Specifically, (1) it is a theme-oriented dataset containing 66K images covering 47 popular themes. All images were carefully selected by hand based on the theme. (2) In addition to common aesthetic criteria, we provide 47 criteria for the 47 themes. Images of each theme are annotated independently, and each image contains at least 1200 effective annotations (so far the richest annotations). These high-quality annotations could help to provide deeper insight into the performance of models.

![TAD66K](https://user-images.githubusercontent.com/15050507/164620789-2958fbd6-5e3b-4eba-9697-bcd28d5257f6.png)

<div align="center">

![example3](https://user-images.githubusercontent.com/15050507/164624400-acb365e0-05d9-4de9-bc16-f894904c6d33.png)

</div>

## If you find our work useful, please cite our paper:

```
@article{herethinking,
  title={Rethinking Image Aesthetics Assessment: Models, Datasets and Benchmarks},
  author={He, Shuai and Zhang, Yongchang and Xie, Rui and Jiang, Dongxiang and Ming, Anlong},
  journal={IJCAI},
  year={2022},
}
```
false
# Alpaca-Cleaned - Source: https://huggingface.co/datasets/yahma/alpaca-cleaned - Num examples: 51,848 - Language: English ```python from datasets import load_dataset load_dataset("tdtunlp/alpaca_en") ``` - Format for Instruction task ```python def preprocess(sample): instruction = sample['instruction'] input = sample['input'] output = sample['output'] if input: return {'text': f'<|startoftext|><|instruction|>{instruction}<|input|>{input}<|output|>{output}<|endoftext|>'} else: return {'text': f'<|startoftext|><|instruction|>{instruction}<|output|>{output}<|endoftext|>'} """ <|startoftext|><|instruction|>Give three tips for staying healthy.<|output|>1.Eat a balanced diet and make sure to include plenty of fruits and vegetables. 2. Exercise regularly to keep your body active and strong. 3. Get enough sleep and maintain a consistent sleep schedule.<|endoftext|> """ ```
false
## Dataset Multi30k: English-Ukrainian variation

The Multi30K dataset is designed to support multilingual multimodal research. It originally extended the Flickr30K dataset with German translations. The descriptions were collected from a crowdsourcing platform, while the translations were collected from professionally contracted translators. We present a variation of this dataset manually translated into Ukrainian.

Paper:

```python
@inproceedings{saichyshyna-etal-2023-extension,
    title = "Extension {M}ulti30{K}: Multimodal Dataset for Integrated Vision and Language Research in {U}krainian",
    author = "Saichyshyna, Nataliia and Maksymenko, Daniil and Turuta, Oleksii and Yerokhin, Andriy and Babii, Andrii and Turuta, Olena",
    booktitle = "Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.unlp-1.7",
    pages = "54--61",
    abstract = "We share the results of the project within the well-known Multi30k dataset dedicated to improving machine translation of text from English into Ukrainian. The main task was to manually prepare the dataset and improve the translation of texts. The importance of collecting such datasets for low-resource languages for improving the quality of machine translation has been discussed. We also studied the features of translations of words and sentences with ambiguous meanings. The collection of multimodal datasets is essential for natural language processing tasks because it allows the development of more complex and comprehensive machine learning models that can understand and analyze different types of data. These models can learn from a variety of data types, including images, text, and audio, for more accurate and meaningful results.",
}
```
false
# Self instruct - Source: https://github.com/yizhongw/self-instruct - Num examples: 82,612 - Language: English ```python from datasets import load_dataset load_dataset("tdtunlp/self_instruct_en") ``` - Format for Instruction task ```python def preprocess(sample): instruction = sample['instruction'] input = sample['input'] output = sample['output'] if input: return {'text': f'<|startoftext|><|instruction|>{instruction}<|input|>{input}<|output|>{output}<|endoftext|>'} else: return {'text': f'<|startoftext|><|instruction|>{instruction}<|output|>{output}<|endoftext|>'} """ <|startoftext|><|instruction|>Make a list of 10 ways to help students improve their study skills.<|output|>1. Make a schedule for studying and stick to it. 2. Study in the same place every time. 3. Set goals for yourself. 4. Take breaks when you need them. 5. Don't cram before an exam. 6. Get enough sleep. 7. Eat healthy food. 8. Exercise regularly. 9. Find a study partner. 10. Reward yourself after completing a task.<|endoftext|> """ ```
false
# Dataset Card for cloud-types

** The original COCO dataset is stored at `dataset.tar.gz`**

## Dataset Description

- **Homepage:** https://universe.roboflow.com/object-detection/cloud-types
- **Point of Contact:** francesco.zuppichini@gmail.com

### Dataset Summary

cloud-types

### Supported Tasks and Leaderboards

- `object-detection`: The dataset can be used to train a model for Object Detection.

### Languages

English

## Dataset Structure

### Data Instances

A data point comprises an image and its object annotations.

```
{
  'image_id': 15,
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
  'width': 964043,
  'height': 640,
  'objects': {
    'id': [114, 115, 116, 117],
    'area': [3796, 1596, 152768, 81002],
    'bbox': [
      [302.0, 109.0, 73.0, 52.0],
      [810.0, 100.0, 57.0, 28.0],
      [160.0, 31.0, 248.0, 616.0],
      [741.0, 68.0, 202.0, 401.0]
    ],
    'category': [4, 4, 0, 0]
  }
}
```

### Data Fields

- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
  - `id`: the annotation id
  - `area`: the area of the bounding box
  - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
  - `category`: the object's category.

#### Who are the annotators?
Annotators are Roboflow users

## Additional Information

### Licensing Information

See original homepage https://universe.roboflow.com/object-detection/cloud-types

### Citation Information

```
@misc{ cloud-types,
  title = { cloud types Dataset },
  type = { Open Source Dataset },
  author = { Roboflow 100 },
  howpublished = { \url{ https://universe.roboflow.com/object-detection/cloud-types } },
  url = { https://universe.roboflow.com/object-detection/cloud-types },
  journal = { Roboflow Universe },
  publisher = { Roboflow },
  year = { 2022 },
  month = { nov },
  note = { visited on 2023-03-29 },
}
```

### Contributions

Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
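Since `bbox` uses the COCO `[x_min, y_min, width, height]` convention, converting to corner coordinates (the format many plotting and augmentation tools expect) is a one-liner; the helper name is ours:

```python
def coco_to_corners(bbox):
    # COCO boxes are [x_min, y_min, width, height];
    # return [x_min, y_min, x_max, y_max].
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

coco_to_corners([302.0, 109.0, 73.0, 52.0])  # → [302.0, 109.0, 375.0, 161.0]
```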
false
false
true
# Dataset Description

* Example model using the dataset: https://huggingface.co/hackathon-somos-nlp-2023/roberta-base-bne-finetuned-suicide-es
* Example space using the dataset: https://huggingface.co/spaces/hackathon-somos-nlp-2023/suicide-comments-es
* Language: Spanish

## Dataset Summary

The dataset consists of comments on Reddit, Twitter, and inputs/outputs of the Alpaca dataset translated to Spanish and classified as suicidal ideation/behavior or non-suicidal.

# Dataset Structure

The dataset has 10050 rows (777 considered as Suicidal Ideation/Behavior and 9273 considered Not Suicidal).

## Dataset fields

* `Text`: User comment.
* `Label`: 1 if suicidal ideation/behavior; 0 if not suicidal comment.

# Dataset Creation

## Suicidal Ideation/Behavior

* 90 rows from the Columbia Suicide Severity Rating Scale (C-SSRS) https://zenodo.org/record/2667859#.ZDGnX-xBxYi C-SSRS is a gold-standard dataset for detecting suicidal comments on Reddit. We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We also explode comments into paragraphs, filter messages less than 240 characters, and validate the positive ones against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 519 rows from https://github.com/laxmimerit/twitter-suicidal-intention-dataset/tree/master The dataset contains tweets with and without suicidal intention. We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We validate the positive ones against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 168 rows added manually from public forums and public blogs.

## Non Suicidal

* 5000 rows from instructions of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from outputs of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from the Columbia Suicide Severity Rating Scale (C-SSRS)
* 100 rows from https://huggingface.co/datasets/ziq/depression_advice.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset.

* 100 rows added manually from public forums, blogs and podcasts.

# Considerations for Using the Data

## Social Impact of Dataset

The dataset could contain some patterns to detect suicidal ideation/behavior.

## Discussion of Biases

No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data is collected from Reddit, Twitter, and ChatGPT, so there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide).

# Additional Information

## Team

* [dariolopez](https://huggingface.co/dariolopez)
* [diegogd](https://huggingface.co/diegogd)

## Licensing

This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
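The paragraph-explosion and length-filter steps described under Dataset Creation can be sketched as below. Note that the direction of the 240-character threshold is our assumption (we read "filter messages less than 240 characters" as dropping the shorter ones), and the function name is ours:

```python
def explode_and_filter(comment: str, min_chars: int = 240) -> list:
    # Split a comment into paragraphs and drop those that are too short
    # (assumed threshold direction; mirror of the step described above).
    paragraphs = [p.strip() for p in comment.split("\n\n")]
    return [p for p in paragraphs if len(p) >= min_chars]

# Keeps only paragraphs of at least 240 characters:
explode_and_filter("short intro\n\n" + "a" * 240)
```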
false
false
false
### Dataset Summary First 10k rows of the scientific_papers["pubmed"] dataset. 8:1:1 split (10000:1250:1250). ### Usage ``` from datasets import load_dataset train_dataset = load_dataset("ronitHF/pubmed-10k-8.1.1", split="train") val_dataset = load_dataset("ronitHF/pubmed-10k-8.1.1", split="validation") test_dataset = load_dataset("ronitHF/pubmed-10k-8.1.1", split="test") ```
false
Please see the [repo](https://github.com/niizam/4chan-datasets) to turn the text file into JSON/CSV format.

Some boards were deleted, since they are already archived by https://archive.4plebs.org/
false
# TempoFunk S(mall)Dance 10k samples of metadata and encoded latents & prompts of videos themed around **dance**. ## Data format - Video frame latents - Numpy arrays - 120 frames, 512x512 source size - Encoded shape (120, 4, 64, 64) - CLIP (openai) encoded prompts - Video description (as seen in metadata) - Encoded shape (77,768) - Video metadata as JSON (description, tags, categories, source URLs, etc.)
false
false
A 19K multilingual VQA alignment dataset, in the format of the Mini-GPT4 dataset, with 1.1K resized images from COCO-2017.
false
# Face Mask Detection

Dataset includes 250 000 images, 4 types of mask worn on 28 000 unique faces. All images were collected using the Toloka.ai crowdsourcing service and validated by TrainingData.pro

# File with the extension .csv includes the following information for each media file:

- **WorkerId**: the identifier of the person who provided the media file,
- **Country**: the country of origin of the person,
- **Age**: the age of the person,
- **Sex**: the gender of the person,
- **Type**: the type of media file
- **Link**: the URL to access the media file

# Folder "img" with media files - containing all the photos which correspond to the data in the .csv file

**How it works**: *go to the first folder and you will see that it contains media files taken by a person whose parameters are specified in the first 4 lines of the .csv file.*

In order to get access to more than 250,000 photos or to learn more about our data, please contact our sales team by submitting a request on our website https://trainingdata.pro/data-market?utm_source=huggingface or emailing us at sales@trainingdata.pro

More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**

TrainingData's GitHub: **https://github.com/trainingdata-pro**
false
# The Portrait and 26 Photos (272 people)

Each set includes 27 photos of people. Each person provided two types of photos: one photo in profile (portrait_1), and 26 photos from their life (photo_1, photo_2, …, photo_26).

# The Portrait

The portrait photo is a photo that shows a person in profile. Mandatory conditions for the photo are:

- The person is pictured alone;
- Shoulder-length photo;
- No sunglasses or medical mask on the face;
- The face is calm, with no smiling or gesturing.

# 26 Photos

The rest of the photos are completely varied, with one constant: each shows the person from The Portrait. Other people may appear in them, and they may be taken at different times of life and in different locations. The person may be laughing, wearing a mask, or surrounded by friends.

# File with the extension .csv includes the following information for each media file:

- **WorkerId**: the identifier of the person who provided the media file,
- **Age**: the age of the person,
- **Country**: the country of origin of the person,
- **Gender**: the gender of the person,
- **Type**: a unique identifier of a set of 26 media files,
- **Link**: the URL to access the media file

# Folder "img" with media files - containing all the photos - which correspond to the data in the .csv file

**How it works**: *go to the folder “0ff4d24098b3110ecfc0a7198e080a4b” and you will see that it contains media files taken by a person whose parameters are specified in the first 27 lines of the .csv file.*

In order to get access to 7344 media files from 272 people or to learn more about our data, please contact our sales team by submitting a request on our website https://trainingdata.pro/data-market?utm_source=huggingface or emailing us at sales@trainingdata.pro

More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**

TrainingData's GitHub: **https://github.com/trainingdata-pro**
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# Dataset Card for blognone-20230430

## Dataset Summary

[Blognone](https://www.blognone.com/) posts from January 1, 2020 to April 30, 2023.

## Features

- title: (str)
- author: (str)
- date: (str)
- tags: (list)
- content: (str)

## Licensing Information

Blognone posts are licensed under the [Creative Commons Attribution 3.0 Thailand](https://creativecommons.org/licenses/by/3.0/th/deed.en) (CC BY 3.0 TH).
false
true
false
# License Plates

Over **1.2 million** annotated license plates from vehicles around the world. This dataset is tailored for **License Plate Recognition tasks** and includes images from both YouTube and PlatesMania. Annotation details are provided in the About section below.

# About

## Variables in .csv files:

- **file_name** - filename of the original car photo
- **license_plate.country** - country where the vehicle was captured
- **bbox** - normalized Bounding Box labeling of the car
- **license_plate.visibility** - the visibility type of the license plate
- **license_plate.id** - unique license plate's id
- **license_plate.mask** - normalized coordinates of the license plate
- **license_plate.rows_count** - single-line or double-line number
- **license_plate.number** - recognized text of the license plate
- **license_plate.serial** - only for UAE numbers - license plate series
- **license_plate.region** - only for UAE numbers - license plate subregion
- **license_plate.color** - only for Saudi Arabia - color of the international plate code

**How it works**: *go to the folder of the country; the CSV file contains all labeling information about images located in the subfolder "photos" of the corresponding folder.*

In order to get access to more than 1.2 million photos with license plates of 32 different countries or to learn more about our data, please contact our sales team by submitting a request on our website https://trainingdata.pro/data-market?utm_source=huggingface or emailing us at **sales@trainingdata.pro**

More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**

TrainingData's GitHub: **https://github.com/trainingdata-pro**
true
# Dataset Card for Review Helpfulness Prediction (RHP) Dataset

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** [On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction](https://aclanthology.org/2023.findings-eacl.125/)
- **Leaderboard:**

### Dataset Summary

The success of e-commerce services is largely dependent on helpful reviews that aid customers in making informed purchasing decisions. However, some reviews may be spammy or biased, making it challenging to identify which ones are helpful. Current methods for identifying helpful reviews focus only on the review text, ignoring the importance of who posted the review and when it was posted. Additionally, helpfulness votes may be scarce for less popular products or recently submitted reviews. To address these challenges, we introduce a dataset and task for review helpfulness prediction, incorporating the reviewers' attributes and review date, and build the dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com/).

### Languages

English

## Dataset Structure

### Data Instances

One example from the `test` split of the dataset is given below in JSON format.

```
{
  "user_review_posted": 28,
  "user_total_helpful_votes": 78,
  "expertise": 0.013414038240254,
  "user_cities_visited": 89,
  "review_days": 0.39430449069003204,
  "helpful_class": 4,
  "review_text": "Had to see for myself. Over priced, bloviated, cheap. I am highly sensitive to mold, and it permeated the hotel. Sheets were damp, pipes blew hot air even when turned off. Considering all the hype, that's what this place is, all hype for too much money."
}
```

### Data Fields

- `user_review_posted`: An integer representing the number of reviews posted by the reviewer.
- `user_total_helpful_votes`: An integer representing the cumulative helpful votes received by the reviewer.
- `expertise`: A normalized floating point number representing the mean number of helpful votes received per review.
- `user_cities_visited`: An integer representing the number of cities visited by the reviewer.
- `review_days`: A normalized floating point number representing the relative age of a review in days.
- `helpful_class`: An integer representing the degree of helpfulness of a review.
- `review_text`: A string representing the review text.

### Data Splits

The following table presents a summary of our dataset with train, validation, and test splits.

|                 | Train   | Valid  | Test  |
|:---------------:|---------|--------|-------|
| Total #Samples  | 145,381 | 8,080  | 8,080 |
| Avg. #Sentences | 7.82    | 7.8    | 7.81  |
| Avg. #Words     | 152.37  | 152.25 | 148.9 |

## Dataset Creation

We build our dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com). Out of 225,664 reviews retrieved, close to one third have no helpful votes. We filter out such reviews, which reduces the number of reviews to 161,541. We leverage a logarithmic scale to categorize the reviews based on the number of votes received. Specifically, we map the number of votes into five intervals (i.e., [1, 2), [2, 4), [4, 8), [8, 16), [16, infinity)), each corresponding to a helpfulness score of {1, 2, 3, 4, 5}, where the higher the score, the more helpful the review. More details can be found in our [EACL 2023](https://aclanthology.org/2023.findings-eacl.125/) paper.

### Discussion of Ethics

In our data scraping process, we took ethical considerations into account. We obtained data at an appropriate pace, avoiding any potential DDoS attacks.

### Known Limitations

A limitation of our dataset is that we only work with reviews written in English. As a result, we filter out reviews written in other languages, as well as code-switched reviews in which reviewers alternate between two or more languages within a single review.
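The interval mapping described in the Dataset Creation section can be sketched in a few lines (a minimal illustration, not the authors' code):

```python
import math

def helpfulness_class(helpful_votes: int) -> int:
    # Map the number of helpful votes to the intervals
    # [1, 2), [2, 4), [4, 8), [8, 16), [16, inf),
    # corresponding to helpfulness scores 1..5.
    if helpful_votes < 1:
        raise ValueError("reviews without helpful votes are filtered out")
    return min(int(math.log2(helpful_votes)) + 1, 5)
```

For example, a review with 10 helpful votes falls in [8, 16) and maps to class 4.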
## Additional Information

### Licensing Information

Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

### Citation Information

If you use any of these resources or they are relevant to your work, please cite [the paper](https://aclanthology.org/2023.findings-eacl.125/).

```
@inproceedings{nayeem-rafiei-2023-role,
    title = "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction",
    author = "Nayeem, Mir Tafseer and Rafiei, Davood",
    booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-eacl.125",
    pages = "1684--1692",
    abstract = "Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive appraisal, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately. Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who post the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews.
To address these challenges, we introduce a dataset and develop a model that integrates the reviewer{'}s expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines.", } ```
false
false
# Dataset Card for "Trad_food"

- info: This dataset comes from the ANSES-CIQUAL 2020 Table in English in XML format, found on https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ . I made some minor changes to it in order to make it meet my needs (removed/added words for exact translations, removed repetitions, etc.).
true
> I am not the author of this dataset. [View on GitHub](https://github.com/ye-kyaw-thu/khPOS).

# khPOS (draft released 1.0)

khPOS (Khmer Part-of-Speech) Corpus for Khmer NLP Research and Developments

## License

Creative Commons Attribution-NonCommercial-Share Alike 4.0 International (CC BY-NC-SA 4.0) License
[Details Info of License](https://creativecommons.org/licenses/by-nc-sa/4.0/)

## Introduction

The khPOS Corpus (Khmer POS Corpus) is a 12,000-sentence (25,626-word) manually word-segmented and POS-tagged corpus developed for Khmer language NLP research and development. We collected Khmer sentences from websites covering various areas such as economics, news, and politics. Moreover, it also contains some student lists and voter lists from the National Election Committee of Cambodia. The average number of words per sentence in the whole corpus is 10.75. Here, some symbols such as "។" (Khmer sign Khan), "៖" (Khmer sign Camnuc pii kuuh), "-", "?", "\[", "\]" etc. are also counted as words. The shortest sentence contains only 1 word and the longest sentence contains 169 words, as follows (here, line number : Khmer sentence):

1814 : " ម៉ែ ឥត មាន ស្អប់_ខ្ពើម ឪពុក កូន ឯង ទេ ម៉ែ តែង នឹក មក កូន នឹង ឪពុក ឯង ពុំ មាន ភ្លេច ព្រម_ទាំង អ្នក\~ភូមិ ផង របង ជាមួយ ឯង ទៀត ដែល ម្ដាយ ធ្លាប់ នៅ ជាមួយ គេ ប៉ុន្តែ ម៉ែ ជាតិ ជា ទេព_ធីតា ពុំ អាច នៅ ជាមួយ មនុស្ស_លោក បាន យូរ ទេ រាល់ ថ្ងៃ ម៉ែ តែង ទៅ បំពេញ កិច្ច នៅ ចំពោះ មុខ ព្រះ\~ភក្ត្រ ព្រះ\~ឥន្ទ្រាធិរាជ គឺ សុំ អង្វរ ឲ្យ ព្រះ\~អង្គ ប្រទាន ពរ ដល់ កូន ឯង និង ឪពុក កូន ឯង កុំ បី ខាន មិន តែ ប៉ុណ្ណោះ ម្ដាយ បាន ទាំង ទូល សុំ ព្រះ\~ឥន្ទ្រ ឲ្យ ព្រះ\~អង្គ មេត្តា ផ្សាយ នូវ សុភ_មង្គល ដល់ មនុស្ស នៅ ឋាន នេះ ទូទៅ ផង កូន_ប្រុស ពន្លក ម្ដាយ !
ម្ដាយ ពុំ អាច នៅ ជាមួយ_នឹង កូន បាន ទៀត តែ ម្ដាយ យក កូន ឯង ទៅ លេង ប្រាសាទ ម្ដាយ ឯ ឋាន លើ មួយ ដង ម្ដាយ នឹង នាំ កូន ឯង ទៅ មុជ_ទឹក ក្នុង អាង ក្រអូប នៅ_ក្នុង សួន ព្រះ\~ឥន្ទ្រ ហើយ ទឹក នោះ នឹង ជម្រះ កាយ កូន ឯង ឲ្យ បាត់ ធំ ក្លិន មនុស្ស_លោក បន្ទាប់_ពី នោះ មក ម្ដាយ នឹង នាំ កូន ឯង ចូល ទៅ_ក្នុង ប្រាសាទ រួច នាំ កូន ឯង ទៅ ថ្វាយ_បង្រះ\~ឥន្ទ្រ " ។

## Word Segmentation

In Khmer texts, words composed of single or multiple syllables are usually not separated by white space. Spaces are used for easier reading and are generally put between phrases, but there are no clear rules for using spaces in Khmer. Therefore, word segmentation is a necessary prerequisite for POS tagging. Four classes of segment (word) types were observed during the manual segmentation of the corpus of Khmer text, each representing a different type of word:

- Word Type 1: Single Words
- Word Type 2: Compound Words
- Word Type 3: Compound Words with Prefix
- Word Type 4: Compound Words with Suffix

For detailed information on the word segmentation rules and how we built a Khmer word segmentation model, please refer to our published paper (see Publication Section).

## POS Tags

Part of speech is a category to which a word is assigned in accordance with its syntactic functions. In the Khmer grammatical system, many linguists have defined their own POS tag sets according to their research directions. Even though many books have been published, there is no standard agreement yet, especially on the number and names of POS tags. Compared to English, some English POS categories are not used in Khmer, such as gerunds, comparative and superlative adjectives, particles, etc. The Khmer POS tag set is defined based on the CHOUN NATH dictionary. Some new POS tags that are not defined in the dictionary are added for the word disambiguation task. Unlike English grammar, some Khmer sentences consist of more than one verb. The definitions and descriptions of the POS tags are presented in detail as follows:

1.
Abbreviation (AB): For example, គម or គ.ម for kilometer (km), អសប for United Nations (UN), ពស or ព.ស for ពុទ សក ជ (Buddhism era), នប or ន.ប for នគរ ល (police), អហ or អ.ហ for វុធហត (Police Military) etc.
2. Adjective (JJ): An adjective is a word used to modify or describe a noun. An adjective is usually to the right of the noun; very few adjectives come before the noun. For example, ក្រហម (red), កន្លះ (half), ប្លែក (strange), តូច (small), ល្អ (good), ស្អាត (beautiful) etc.
3. Adverb (RB): An adverb is a word used to modify a verb, an adjective, or another adverb. For example, ណាស់ (very), ពុំ (not), ទើប (just), ពេកក្រៃ (very), ហើយ (already) etc.
4. Auxiliary Verb (AUX): Only three groups of verbs are tagged as auxiliary verbs, used to form tense:
   - Past form: បាន or មាន + Verb
   - Progressive form: កំពុង + Verb
   - Future form: នឹង + Verb
5. Cardinal Number (CD): A cardinal number is a word or number denoting a quantity. For example, បី (three), ១០០ (100), ចតុ (four), ពាន់ (thousand), លាន (million) etc.
6. Conjunction (CC): A conjunction is a word that connects words, phrases, and sentences. For example, ក៏ប៉ុន្តែ (but), ពីព្រោះ (because), ដ្បិត (for, since), ទម្រាំតែ (until), ពុំនោះសោត (otherwise), បើ (if) etc.
7. Currency (CUR): CUR is for currency symbols such as ៛, \$, ₤, € etc.
8. Determiner Pronoun (DT): In Khmer grammar, unlike English, determiners are classified under pronouns. They tell the location or/and uncertainty of a noun, and are equivalent to English words such as this, that, those, these, all, every, each, some. For example, នេះ (this), នោះ (that), ទាំងនេះ (these), ទាំងអស់ (all), នានា (various), ខ្លះ (some), សព្វ (every) etc.
9. Double Sign (DBL): The double sign (ៗ) reminds the reader to read the previous word twice. For example, មនុស្ស/NN (people) គ្រប់/DT (every) ៗ/DBL គ្នា/PRO (person), "everybody" in English.
10. Et Cetera (ETC): ។ល។ is equal to et cetera (etc.) in English.
11.
Full Stop (KAN): There are two full stops in Khmer: ។ for a sentence and ៕ for a paragraph.
12. Interjection (UH): A word representing the sound of an animal, a machine, or surprise. Interjections are always at the beginning of a sentence and are mostly followed by an exclamation mark. For example, អូ (Oh!), ម៉េវ (Meow), អ៊ុះ (uh) etc.
13. Measure Word (M): Measure words describe a quantity of a corresponding class of noun; some of them have no English counterpart. For example: ព្រះសង្គ/NN (monk) ២/CD (2) អង្គ/M (person), សំលៀកបំពាក់/NN (cloth) ១/CD (1) សម្រាប់/M (set), ឆ្កែ/NN (dog) ១/CD (1) ក្បាល/M (head) etc.
14. Noun (NN): A noun is a word or compound word that identifies a person, an animal, an object, an idea, a thing, etc. For example: ឡាន (Car), ការអភិវឌ្ឍន៍ (Development), សកម្មភាព (Action), ខ្មៅដៃ (Pencil), ទឹកកក (Ice) etc.
15. Particle (PA): We consider three types of particles: hesitation, response, and final. The two medial particle words ក៏ ("so, then, but" in English) and នូវ ("of, with" in English) \[1\] are tagged as RB and IN.
    - Hesitation Particle: ខ្ញុំ (I) គិត (think) …អ៊ើ/PA (Er. . .) មិន (not) ឃើញ (see), ("I er… don't think so" in English)
    - Response Particle: អើ/PA (Hm, Ah) ខ្ញុំ (I) ដឹង (know) ហើយ (already), ("Hmm I already know" in English)
    - Final Particle: There are some final particles such as ណា៎, សិន and ចុះ. Example usage of ណា៎: កុំ/RB (don't) ភ្លេច/VB (forget) ណា៎/PA, ("Hmm don't forget!" in English). Example usage of សិន: ចាំ/VB (wait) បន្តិច/RB (a while) សិន/PA. Example usage of ចុះ: ទៅ/VB (go) ចុះ/PA
16. Preposition (IN): A preposition is a word or compound word used to connect two different words or phrases, indicating place, time, possession, relation, etc. For example, ចំពោះ (to), ដល់ (to), ដើម្បី (in order to), ក្នុង (in), លើ (on), រវាង (between, around) etc.
17. Pronoun (PRO): A pronoun is a word that substitutes for a noun or a noun phrase.
These words are equivalent to English words: I, he, she, it, we, they, them, him, her, etc. For example, ខ្ញុំ (I), គាត់ (he or she), យើង (we), ពួកយើង (our group or we), ខ្ញុំបាទ (polite form of I, me), ទូលបង្គំ (I, me for conversation with the royal family) etc.
18. Proper Noun (PN): A proper noun is a noun that represents a unique thing, for example, the name of a person, a place, or a date. For example: សុខា (Sokha), ភ្នំពេញ (Phnom Penh), ថ្ងៃអង្គារ (Tuesday), កាល់តិច (Caltex), មេគង្គ (Mekong) etc.
19. Question Word (QT): In Khmer, តើ is mostly used at the beginning of an interrogative sentence. For example, តើ/QT អ្នក/PRO (you) ឈ្មោះ/NN (name) អ្វី/PRO (what)?, "What is your name?" in English.
20. Relative Pronoun (RPN): In Khmer, there is only one relative pronoun: ដែល, "that, which, where, who" in English.
21. Symbol (SYM): SYM is for other signs or symbols such as +, -, \*, \/, ៖, =, @, \#, \% etc.
22. VB\_JJ: VB\_JJ is a tag for an adjective whose original form is a verb. Currently, there is no proposed POS tag name for this kind of Khmer word. Although we could use the JJ tag, we use VB\_JJ to clarify its function and for semantic purposes. For example:
    - The word សម្រាប់ (for) or ដើម្បី (to) is normally removed in both written and spoken Khmer: កន្លែង/NN (place) សម្រាប់ (for) ធ្វើការ/VB\_JJ (working), office in English; ម៉ាស៊ីន/NN (Machine) សម្រាប់ (for) បោក/VB\_JJ (washing) ខោអាវ/NN (cloth), washing machine in English; ពួកគាត់/PRO (they) អាច/VB (can) មាន/VB (have) ការងារ/NN (work) ធ្វើ/VB\_JJ (to do)
    - When a Khmer relative pronoun is removed, the verb form stays the same, but it must be tagged VB\_JJ since it is no longer a verb in a subordinate clause: សិស្ស (student) ដែល (who) មាន/VB (has) ពិន្ទុ (mark) ខ្ពស់ (high) នឹង (will) ទទួលបាន (get) អាហារូបករណ៍ (scholarship), "a student who has a high mark will get a scholarship" in English; but when ដែល (who) is removed, មាន/VB (has) should become មាន/VB\_JJ (having)
23.
Verb (VB): A verb is a word that shows an action, event, or condition. The verb is the middle part of a phrase; normally a verb needs an object, and sometimes it also needs a complement. For example, ស្តាប់ (listen), មានប្រសាសន៍ (say), ស្រលាញ់ (love), ច្រៀង (sing), បើកបរ (drive) etc.
24. Verb Complement (VCOM): Its original form is a verb, but it turns into VCOM when a second verb in a sentence emphasizes the first verb. In particular, when a compound verb is split by the word មិន (no or not), the first part is a verb and the second part is VCOM. For example, លក់ (sell) ដាច់/VCOM (a lot), ប្រលង (exam) មិន (no) ជាប់/VCOM (pass), ដេក/VB (sleep) មិន/RB (not) លក់/VCOM (sleep well) etc.

## Files/Scripts

Corpus-draft-ver-1.0/ (**_latest version_**)

**Scripts:**

mk-wordtag.pl : Perl script for printing a word-only file, a tag-only file, listing compound words, etc.
mk-pair.pl : Perl script for combining the word file and tag file into word/tag format

**Data:**

data/ : Data preparation folder for incremental POS-tagging models

**Models:**

Two-Hours/ : Incremental training (2,000 to 12,000 sentences) of the 2-hours annotation approach models with the khPOS corpus. Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/2hours/note.txt)

3gHMM/ : Incremental training (2,000 to 12,000 sentences) of 3-gram HMM (Hidden Markov Model) models with the khPOS corpus. Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/3gHMM/note.txt)

crf/ : Incremental training (2,000 to 12,000 sentences) of CRF POS-tagging models with the khPOS corpus. Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/crf/note.txt)

kytea/ : Incremental training (2,000 to 12,000 sentences) of L2-regularized SVM models with the khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/kytea/note.txt)

maxent/ : Incremental training (2,000 to 12,000 sentences) of Maximum Entropy models with the khPOS corpus. Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/maxent/note.txt)

rdr/ : Incremental training (2,000 to 12,000 sentences) of RDR (Ripple Down Rule-based) models with the khPOS corpus. Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/rdr/note.txt)

## Development and Support

Contributors

Vichet Chea
[Ye Kyaw Thu](https://sites.google.com/site/yekyawthunlp/)

## Acknowledgements

We would like to express our gratitude to Mr. Sorn Kea and Miss Leng Greyhuy for their help in POS tagging 12,100 sentences of the Khmer corpus manually.

## Publication

*Please cite the following paper:*

Ye Kyaw Thu, Vichet Chea, Yoshinori Sagisaka, "Comparison of Six POS Tagging Methods on 12K Sentences Khmer Language POS Tagged Corpus", In the first Regional Conference on Optical character recognition and Natural language processing technologies for ASEAN languages (ONA 2017), December 7-8, 2017, Phnom Penh, Cambodia. [paper](https://github.com/ye-kyaw-thu/khPOS/blob/master/khpos.pdf)

## Reference

Vichet Chea, Ye Kyaw Thu, Chenchen Ding, Masao Utiyama, Andrew Finch and Eiichiro Sumita, "Khmer Word Segmentation Using Conditional Random Fields", In Khmer Natural Language Processing 2015 (KNLP2015), December 4, 2015, Phnom Penh, Cambodia. [paper](http://khmernlp.org/2015/wp-content/uploads/2016/09/Paper-Khmer-Word-Segmentation-Using-.pdf)

Madeline Elizabeth Ehrman, Kem Sos, Foreign Service Institute (U.S.), and Defense Language Institute (U.S.). Contemporary Cambodian: grammatical sketch, by Madeline E. Ehrman, with the assistance of Kem Sos. Foreign Service Institute, Dept. of State; \[for sale by the Supt. of Docs., U.S. Govt. Print. O.\] Washington, 1972.
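The `word/tag` format produced by mk-pair.pl can be parsed with a short sketch like the following (`parse_tagged_line` is a hypothetical helper, not one of the released scripts):

```python
def parse_tagged_line(line: str):
    # Split a "word/TAG word/TAG ..." line into (word, tag) pairs.
    # rsplit on the last "/" guards against tokens whose word part
    # itself contains "/".
    return [tuple(token.rsplit("/", 1)) for token in line.split()]

# Example using the Question Word (QT) sentence from the POS Tags section:
pairs = parse_tagged_line("តើ/QT អ្នក/PRO ឈ្មោះ/NN អ្វី/PRO")
```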
false
false
# Dataset Card for AIO Version 2.0 with Japanese Wikipedia This dataset is used for baseline systems of AIO (AI王), a competition to promote research on question answering systems for the Japanese language. Each data point consists of a question, the answers, and positive and negative passages for the question. Please refer to [the original repository](https://github.com/cl-tohoku/quiz-datasets) for further details.
false
# Dataset Card for multilingual tatoeba translations with ~3M entries (llama supported languages only)

### Dataset Summary

~3M entries. Just a more user-friendly version that combines all of the entries of the original dataset in a single file (llama supported languages only): https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt
false
ESLO audio dataset configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw

License: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)

Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`

Example entry:

```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125, 0.02206421, 0.02304077]),
           'path': None,
           'sampling_rate': 16000},
 'end_timestamp': 8.939,
 'file': 'ESLO1_INTPERS_437',
 'overlap': False,
 'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
             'entreprise ici ? exactement',
 'speaker': 'spk1',
 'start_timestamp': 0.954}
```

Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d'Orléans 1968-2012, in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46.

Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1.
false
# Summary

This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [GPTeacher](https://github.com/teknium1/GPTeacher), a collection of modular datasets generated by GPT-4 (General-Instruct & Roleplay-Instruct), comprising around 20,000 examples after deduplication. The model was asked to include reasoning and thought steps in the example responses where appropriate.

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: Thai

Version: 1.0

---
false
# Summary

This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) (**24K** examples in total: 17K reddit_eli5, 4K finance, 1.2K medicine, 1.2K open_qa, and 0.8K wiki_csai).

The first human-ChatGPT comparison corpus, which is introduced in this paper:
- [How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection](https://arxiv.org/abs/2301.07597)

Code, models and analysis are available on GitHub:
- GitHub: [Chatgpt-Comparison-Detection project 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: Thai

Version: 1.0

---
true
# Universal Text Classification Dataset (UTCD)

## Load dataset

```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain')
```

## Description

UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.

UTCD was introduced in the Findings of ACL'23 paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. [Project Homepage](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).

UTCD Datasets & Principles: In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:

- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized such that the labels are descriptive of the text in natural language.
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, Legal, etc., each comprising varied-length sequences (long and short). The datasets are listed below.
- Sentiment
  - GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
  - TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
  - Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
  - Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
  - Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
  - Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
  - Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
  - SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
  - Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
  - Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
  - NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
  - AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open
Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
  - Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
  - BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
  - Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)

## Structure

### Data Samples

Each dataset sample contains the text, the label encoded as an integer, and the dataset name encoded as an integer.

```python
{
    'text': "My favourite food is anything I didn't have to cook myself.",
    'labels': [215],
    'dataset_name': 0
}
```

### Datasets Contained

The UTCD dataset contains 18 datasets, 9 `in-domain`, 9 `out-of-domain`, spanning 3 aspects: `sentiment`, `intent` and `topic`. Below are statistics on the datasets.
**In-Domain Datasets**

| Dataset    | Aspect    | #Samples in Train/Test | #labels | average #token in text in Train/Test |
| ---------- | --------- | ---------------------- | ------- | ------------------------------------ |
| GoEmotions | sentiment | 43K/5.4K               | 28      | 12/12                                |
| TweetEval  | sentiment | 45K/12K                | 3       | 19/14                                |
| Emotion    | sentiment | 16K/2K                 | 6       | 17/17                                |
| SGD        | intent    | 16K/4.2K               | 26      | 8/9                                  |
| Clinc-150  | intent    | 15K/4.5K               | 150     | 8/8                                  |
| SLURP      | intent    | 12K/2.6K               | 75      | 7/7                                  |
| AG News    | topic     | 120K/7.6K              | 4       | 38/37                                |
| DBpedia    | topic     | 560K/70K               | 14      | 45/45                                |
| Yahoo      | topic     | 1.4M/60K               | 10      | 10/10                                |

**Out-of-Domain Datasets**

| Dataset               | Aspect    | #Samples in Train/Test | #labels | average #token in text |
| --------------------- | --------- | ---------------------- | ------- | ---------------------- |
| Amazon Polarity       | sentiment | 3.6M/400K              | 2       | 71/71                  |
| Financial Phrase Bank | sentiment | 1.8K/453               | 3       | 19/19                  |
| Yelp                  | sentiment | 650K/50K               | 3       | 128/128                |
| Banking77             | intent    | 10K/3.1K               | 77      | 11/10                  |
| SNIPS                 | intent    | 14K/697                | 7       | 8/8                    |
| NLU Eval              | intent    | 21K/5.2K               | 68      | 7/7                    |
| MultiEURLEX           | topic     | 55K/5K                 | 21      | 1198/1853              |
| Big Patent            | topic     | 25K/5K                 | 9       | 2872/2892              |
| Consumer Finance      | topic     | 630K/160K              | 18      | 190/189                |

### Configurations

The `in-domain` and `out-of-domain` configurations have 2 splits: `train` and `test`. The aspect-normalized configurations (`aspect-normalized-in-domain`, `aspect-normalized-out-of-domain`) have 3 splits: `train`, `validation` and `test`. Below are statistics on the configuration splits.
**In-Domain Configuration**

| Split | #samples  |
| ----- | --------- |
| Train | 2,192,703 |
| Test  | 168,365   |

**Out-of-Domain Configuration**

| Split | #samples  |
| ----- | --------- |
| Train | 4,996,673 |
| Test  | 625,911   |

**Aspect-Normalized In-Domain Configuration**

| Split      | #samples |
| ---------- | -------- |
| Train      | 115,127  |
| Validation | 12,806   |
| Test       | 168,365  |

**Aspect-Normalized Out-of-Domain Configuration**

| Split      | #samples |
| ---------- | -------- |
| Train      | 119,167  |
| Validation | 13,263   |
| Test       | 625,911  |
false
false
# Gaepago (Gae8J/gaepago_s) ## How to use ### 1. Install dependencies ```bash pip install datasets==2.10.1 pip install soundfile==0.12.1 pip install librosa==0.10.0.post2 ``` ### 2. Load the dataset ```python from datasets import load_dataset dataset = load_dataset("Gae8J/gaepago_s") ``` Outputs ``` DatasetDict({ train: Dataset({ features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'], num_rows: 12 }) validation: Dataset({ features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'], num_rows: 12 }) test: Dataset({ features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'], num_rows: 12 }) }) ``` ### 3. Check a sample ```python dataset['train'][0] ``` Outputs ``` {'file': 'bark/1_Q80fDGLRM.wav', 'audio': {'path': 'bark/1_Q80fDGLRM.wav', 'array': array([-9.15838356e-08, 6.80501699e-08, 1.97052145e-07, ..., 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]), 'sampling_rate': 16000}, 'label': 0, 'is_unknown': False, 'youtube_id': '1_Q80fDGLRM'} ```
false
false
# Dataset Card for CIFAKE_autotrain_compatible

## Dataset Description

- **Homepage:** [Kaggle data card](https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images?resource=download)
- **Paper:** Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.

### Dataset Summary

This is a copy of the CIFAKE dataset created by Dr Jordan J. Bird and Professor Ahmad Lotfi. See more information on the original data card on [Kaggle](https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images?resource=download). The real images used are from CIFAR-10. The fake images were created by the authors using Stable Diffusion v1.4. This dataset removes the train/test structure of the original dataset to allow compatibility with HuggingFace's AutoTrain. It removes the test split images from the original dataset in both categories. All training images remain.

## Dataset Structure

### Data Fields

Contains 100k total images across the splits below.

### Data Splits

Real: 50k real images
Fake: 50k AI-generated images

## Additional Information

### Dataset Curators

Dr Jordan J. Bird
Professor Ahmad Lotfi

### Licensing Information

This dataset is published under the [same MIT license as CIFAR-10](https://github.com/wichtounet/cifar-10/blob/master/LICENSE):

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### Citation Information If you use this dataset, you must cite the following sources: [Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) [Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.](https://arxiv.org/abs/2303.14126) Real images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). The Bird & Lotfi study is a preprint currently available on ArXiv and this description will be updated when the paper is published.
false
# AutoTrain Dataset for project: cilantroperejil ## Dataset Description This dataset has been automatically processed by AutoTrain for project cilantroperejil. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<474x410 RGB PIL image>", "target": 0 }, { "image": "<474x575 RGB PIL image>", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(names=['cilantro', 'perejil'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 160 | | valid | 40 |
false
# Breast Histopathology Image dataset - This dataset is just a rearrangement of the Original dataset at Kaggle: https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images - Data Citation: https://www.ncbi.nlm.nih.gov/pubmed/27563488 , http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872 - The original dataset has the structure: <pre> |-- patient_id |-- class(0 and 1) </pre> - The present dataset has the following structure: <pre> |-- train |-- class(0 and 1) |-- valid |-- class(0 and 1) |-- test |-- class(0 and 1) </pre>
true
false
# Dataset Card for Acapella Evaluation Dataset ## Dataset Description - **Homepage:** <https://ccmusic-database.github.io> - **Repository:** <https://huggingface.co/datasets/CCMUSIC/acapella_evaluation> - **Paper:** <https://doi.org/10.5281/zenodo.5676893> - **Leaderboard:** <https://ccmusic-database.github.io/team.html> - **Point of Contact:** N/A ### Dataset Summary This database contains 6 Mandarin song segments sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance on a 10-point scale. The scores are recorded in a sheet. ### Supported Tasks and Leaderboards Acapella evaluation/scoring ### Languages Chinese, English ## Dataset Structure ### Data Instances .wav & .csv ### Data Fields song, singer id, pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance ### Data Splits song1-6 ## Dataset Creation ### Curation Rationale Lack of a training dataset for acapella scoring system ### Source Data #### Initial Data Collection and Normalization Zhaorui Liu, Monan Zhou #### Who are the source language producers? Students and judges from CCMUSIC ### Annotations #### Annotation process 6 Mandarin song segments were sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance on a 10-point scale. The scores are recorded in a sheet. #### Who are the annotators? 
Judges from CCMUSIC ### Personal and Sensitive Information Singers' and judges' names are hidden ## Considerations for Using the Data ### Social Impact of Dataset Providing a training dataset for an acapella scoring system may improve the development of related apps ### Discussion of Biases Only for Mandarin songs ### Other Known Limitations No starting point has been marked for the vocal ## Additional Information ### Dataset Curators Zijin Li ### Licensing Information ``` MIT License Copyright (c) 2023 CCMUSIC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### Citation Information ``` @dataset{zhaorui_liu_2021_5676893, author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li}, title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}}, month = nov, year = 2021, publisher = {Zenodo}, version = {1.1}, doi = {10.5281/zenodo.5676893}, url = {https://doi.org/10.5281/zenodo.5676893} } ``` ### Contributions Provides a training dataset for an acapella scoring system
false
# Summary This is a 🇹🇭 Thai-translated (GCP) dataset based on English 74K [Alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT) instruction dataset. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Thai Version: 1.0 ---
true
true
# Dataset Card for "pubmed-rct-200k_indexed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
true
# Dataset Card for d0rj/conv_ai_3_ru ## Dataset Description - **Homepage:** https://github.com/aliannejadi/ClariQ - **Repository:** https://github.com/aliannejadi/ClariQ - **Paper:** https://arxiv.org/abs/2009.11352 ### Dataset Summary This is a translated version of the [conv_ai_3](https://huggingface.co/datasets/conv_ai_3) dataset into Russian. ### Languages Russian (translated from English). ## Dataset Structure ### Data Fields - `topic_id`: the ID of the topic (`initial_request`). - `initial_request`: the query (text) that initiates the conversation. - `topic_desc`: a full description of the topic as it appears in the TREC Web Track data. - `clarification_need`: a label from 1 to 4, indicating how much it is needed to clarify a topic. If an `initial_request` is self-contained and would not need any clarification, the label would be 1, while if an `initial_request` is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4. - `facet_id`: the ID of the facet. - `facet_desc`: a full description of the facet (information need) as it appears in the TREC Web Track data. - `question_id`: the ID of the question. - `question`: a clarifying question that the system can pose to the user for the current topic and facet. - `answer`: an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is `initial_request`, their information need is `facet_desc`, and `question` has been posed to the user). ### Citation Information @misc{aliannejadi2020convai3, title={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)}, author={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev}, year={2020}, eprint={2009.11352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
false
# Dataset Card for "piqa_ru" This is a translated version of the [piqa dataset](https://huggingface.co/datasets/piqa) into Russian.
false
false
# Dataset Card for multi-figqa ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Multi-FigQA](https://github.com/simran-khanuja/Multilingual-Fig-QA) - **Paper:** [Multi-lingual and Multi-cultural Figurative Language Understanding](https://arxiv.org/abs/2305.16171) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Emmy Liu](mailto:emmy@cmu.edu) ### Dataset Summary A multilingual dataset of human-written creative figurative expressions in many languages (mostly metaphors and similes). The English version (with the same format) can be found [here](https://huggingface.co/datasets/nightingal3/fig-qa) ### Languages Languages included are Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili, and Yoruba. The language codes are respectively `hi`, `id`, `jv`, `kn`, `su`, `sw`, and `yo`. 
## Dataset Structure ### Data Instances ``` { 'startphrase': the phrase, 'ending1': one possible answer, 'ending2': another possible answer, 'labels': 0 if ending1 is correct else 1 } ``` ### Data Splits All data in each language is originally intended to be used as a test set for that language. ## Dataset Creation ### Curation Rationale Figurative language permeates human communication, but at the same time is relatively understudied in NLP. Datasets have been created in English to accelerate progress towards measuring and improving figurative language processing in language models (LMs). However, the use of figurative language is an expression of our cultural and societal experiences, making it difficult for these phrases to be universally applicable. We created this dataset as part of an effort to introduce more culturally relevant training data for different languages and cultures. ### Source Data #### Who are the source language producers? The language producers were hired to write creative sentences in their native languages. ## Additional Information ### Citation Information Please use this citation if you found this helpful: ``` @misc{kabra2023multilingual, title={Multi-lingual and Multi-cultural Figurative Language Understanding}, author={Anubha Kabra and Emmy Liu and Simran Khanuja and Alham Fikri Aji and Genta Indra Winata and Samuel Cahyawijaya and Anuoluwapo Aremu and Perez Ogayo and Graham Neubig}, year={2023}, eprint={2305.16171}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
false
# Audio Dataset This dataset consists of audio data for the following categories: * Coughing * Running water * Toilet flush * Other sounds Although this data is unbalanced, data augmentations can be added to process the data for audio classification. The file structure looks as follows: \- audio/ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \- coughing/ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \- toilet_flush/ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \- running_water/ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \- other_1/ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \- other_2/
false
false
# Race - Source: https://huggingface.co/datasets/race - Num examples: - 87,866 (train) - 4,887 (validation) - 4,934 (test) - Language: English ```python from datasets import load_dataset load_dataset("vietgpt/race_en") ``` - Format for QA task ```python def preprocess_qa(sample): article = sample['article'] question = sample['question'] answer = sample['answer'] options = sample['options'] if answer == 'A': answer = options[0] elif answer == 'B': answer = options[1] elif answer == 'C': answer = options[2] else: answer = options[3] text = f"<|startoftext|><|article|> {article} <|question|> {question} <|answer|> {answer} <|endoftext|>" return {'text': text} """ <|startoftext|><|article|> Last week I talked with some of my students about what they wanted to do after they graduated, and what kind of job prospects they thought they had. Given that I teach students who are training to be doctors, I was surprised do find that most thought that they would not be able to get the jobs they wanted without "outside help". "What kind of help is that?" I asked, expecting them to tell me that they would need a or family friend to help them out. "Surgery ," one replied. I was pretty alarmed by that response. It seems that the graduates of today are increasingly willing to go under the knife to get ahead of others when it comes to getting a job . One girl told me that she was considering surgery to increase her height. "They break your legs, put in special extending screws, and slowly expand the gap between the two ends of the bone as it re-grows, you can get at least 5 cm taller!" At that point, I was shocked. I am short, I can't deny that, but I don't think I would put myself through months of agony just to be a few centimetres taller. I don't even bother to wear shoes with thick soles, as I'm not trying to hide the fact that I am just not tall! It seems to me that there is a trend towards wanting "perfection" , and that is an ideal that just does not exist in reality. 
No one is born perfect, yet magazines, TV shows and movies present images of thin, tall, beautiful people as being the norm. Advertisements for slimming aids, beauty treatments and cosmetic surgery clinics fill the pages of newspapers, further creating an idea that "perfection" is a requirement, and that it must be purchased, no matter what the cost. In my opinion, skills, rather than appearance, should determine how successful a person is in his/her chosen career. <|question|> We can know from the passage that the author works as a_. <|answer|> teacher <|endoftext|> """ ``` - Format for Multichoices task ```python def preprocess_multichoices(sample): article = sample['article'] question = sample['question'] answer = sample['answer'] options = sample['options'] options = f"A. {options[0]}\nB. {options[1]}\nC. {options[2]}\nD. {options[3]}" text = f"<|startoftext|><|article|> {article} <|question|> {question}\n{options} <|answer|> {answer} <|endoftext|>" return {'text': text} """ <|startoftext|><|article|> Last week I talked with some of my students about what they wanted to do after they graduated, and what kind of job prospects they thought they had. Given that I teach students who are training to be doctors, I was surprised do find that most thought that they would not be able to get the jobs they wanted without "outside help". "What kind of help is that?" I asked, expecting them to tell me that they would need a or family friend to help them out. "Surgery ," one replied. I was pretty alarmed by that response. It seems that the graduates of today are increasingly willing to go under the knife to get ahead of others when it comes to getting a job . One girl told me that she was considering surgery to increase her height. "They break your legs, put in special extending screws, and slowly expand the gap between the two ends of the bone as it re-grows, you can get at least 5 cm taller!" At that point, I was shocked. 
I am short, I can't deny that, but I don't think I would put myself through months of agony just to be a few centimetres taller. I don't even bother to wear shoes with thick soles, as I'm not trying to hide the fact that I am just not tall! It seems to me that there is a trend towards wanting "perfection" , and that is an ideal that just does not exist in reality. No one is born perfect, yet magazines, TV shows and movies present images of thin, tall, beautiful people as being the norm. Advertisements for slimming aids, beauty treatments and cosmetic surgery clinics fill the pages of newspapers, further creating an idea that "perfection" is a requirement, and that it must be purchased, no matter what the cost. In my opinion, skills, rather than appearance, should determine how successful a person is in his/her chosen career. <|question|> We can know from the passage that the author works as a_. A. doctor B. model C. teacher D. reporter <|answer|> C <|endoftext|> """ ``` - Format for GPT-3 ```python def preprocess_gpt3(sample): article = sample['article'] question = sample['question'] options = sample['options'] answer = sample['answer'] if answer == 'A': output = f'\n<|correct|> {options[0]}\n<|incorrect|> {options[1]}\n<|incorrect|> {options[2]}\n<|incorrect|> {options[3]}' elif answer == 'B': output = f'\n<|correct|> {options[1]}\n<|incorrect|> {options[0]}\n<|incorrect|> {options[2]}\n<|incorrect|> {options[3]}' elif answer == 'C': output = f'\n<|correct|> {options[2]}\n<|incorrect|> {options[0]}\n<|incorrect|> {options[1]}\n<|incorrect|> {options[3]}' else: output = f'\n<|correct|> {options[3]}\n<|incorrect|> {options[0]}\n<|incorrect|> {options[1]}\n<|incorrect|> {options[2]}' return {'text': f'<|startoftext|><|article|> {article} <|question|> {question} <|answer|> {output} <|endoftext|>'} """ <|startoftext|><|article|> Last week I talked with some of my students about what they wanted to do after they graduated, and what kind of job prospects they thought 
they had. Given that I teach students who are training to be doctors, I was surprised do find that most thought that they would not be able to get the jobs they wanted without "outside help". "What kind of help is that?" I asked, expecting them to tell me that they would need a or family friend to help them out. "Surgery ," one replied. I was pretty alarmed by that response. It seems that the graduates of today are increasingly willing to go under the knife to get ahead of others when it comes to getting a job . One girl told me that she was considering surgery to increase her height. "They break your legs, put in special extending screws, and slowly expand the gap between the two ends of the bone as it re-grows, you can get at least 5 cm taller!" At that point, I was shocked. I am short, I can't deny that, but I don't think I would put myself through months of agony just to be a few centimetres taller. I don't even bother to wear shoes with thick soles, as I'm not trying to hide the fact that I am just not tall! It seems to me that there is a trend towards wanting "perfection" , and that is an ideal that just does not exist in reality. No one is born perfect, yet magazines, TV shows and movies present images of thin, tall, beautiful people as being the norm. Advertisements for slimming aids, beauty treatments and cosmetic surgery clinics fill the pages of newspapers, further creating an idea that "perfection" is a requirement, and that it must be purchased, no matter what the cost. In my opinion, skills, rather than appearance, should determine how successful a person is in his/her chosen career. <|question|> We can know from the passage that the author works as a_. <|answer|> <|correct|> teacher <|incorrect|> doctor <|incorrect|> model <|incorrect|> reporter <|endoftext|> """ ```
false
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** https://github.com/kaistAI/CoT-Collection - **Repository:** https://github.com/kaistAI/CoT-Collection - **Paper:** https://arxiv.org/abs/2305.14045 - **Point of Contact:** sejune@lklab.io ### Dataset Summary The CoT Collection is a dataset of 1,837,928 training instances augmented with chain-of-thought rationales, built to improve the zero-shot and few-shot learning abilities of language models via chain-of-thought fine-tuning (see the [paper](https://arxiv.org/abs/2305.14045) for details). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name | train | |-------------------|------:| |CoT-Collection|1837928| ## Additional Information ### Citation Information ``` @article{kim2023cot, title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning}, author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon}, journal={arXiv preprint arXiv:2305.14045}, year={2023} } ```
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** https://github.com/kaistAI/CoT-Collection - **Repository:** https://github.com/kaistAI/CoT-Collection - **Paper:** https://arxiv.org/abs/2305.14045 - **Point of Contact:** sejune@lklab.io ### Dataset Summary The CoT Collection is a dataset of 1,837,928 training instances augmented with chain-of-thought rationales, built to improve the zero-shot and few-shot learning abilities of language models via chain-of-thought fine-tuning (see the [paper](https://arxiv.org/abs/2305.14045) for details). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name | train | |-------------------|------:| |CoT-Collection|1837928| ## Additional Information ### Citation Information ``` @article{kim2023cot, title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning}, author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon}, journal={arXiv preprint arXiv:2305.14045}, year={2023} } ```
true
# Dataset Card for "HC3-ru" This is a translated version of the [Hello-SimpleAI/HC3 dataset](https://huggingface.co/datasets/Hello-SimpleAI/HC3) into Russian. ## Citation Check out the paper [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597) ``` @article{guo-etal-2023-hc3, title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection", author = "Guo, Biyang and Zhang, Xin and Wang, Ziyuan and Jiang, Minqi and Nie, Jinran and Ding, Yuxuan and Yue, Jianwei and Wu, Yupeng", journal = {arXiv preprint arXiv:2301.07597}, year = "2023", } ```
false
Includes several common time-series forecasting datasets: * ETTsmall - ETTh1 - ETTh2 - ETTm1 - ETTm2 * traffic * electricity * illness * exchange_rate
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
false
false
# curation-corpus ## Dataset Description - **Homepage:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus) - **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus) ## Source Data From [this official repo](https://github.com/CurationCorp/curation-corpus), with the news article contents downloaded. ## Citation ``` @misc{curationcorpusbase:2020, title={Curation Corpus Base}, author={Curation}, year={2020} } ```
false
# FETV **FETV** is a benchmark for **F**ine-grained **E**valuation of open-domain **T**ext-to-**V**ideo generation ## Overview FETV consists of a diverse set of text prompts, categorized based on three orthogonal aspects: major content, attribute control, and prompt complexity. ![caption](https://github.com/llyx97/FETV/raw/main/Figures/categorization.png) ## Dataset Structure ### Data Instances All FETV data are available in the file `fetv_data.json`. Each line is a data instance, which is formatted as: ``` { "video_id": "1006807024", "prompt": "A mountain stream", "major content": { "spatial": ["scenery & natural objects"], "temporal": ["fluid motions"] }, "attribute control": { "spatial": null, "temporal": null }, "prompt complexity": ["simple"], "source": "WebVid", "video_url": "https://ak.picdn.net/shutterstock/videos/1006807024/preview/stock-footage-a-mountain-stream.mp4", "unusual type": null } ``` ### Data Fields * "video_id": The video identifier in the original dataset where the prompt comes from. * "prompt": The text prompt for text-to-video generation. * "major content": The major content described in the prompt. * "attribute control": The attribute that the prompt aims to control. * "prompt complexity": The complexity of the prompt. * "source": The original dataset where the prompt comes from, which can be "WebVid", "MSRVTT" or "ours". * "video_url": The url link of the reference video. * "unusual type": The type of unusual combination the prompt involves. Only available for data instances with `"source": "ours"`. ### Dataset Statistics FETV contains 619 text prompts. The data distributions over different categories are as follows (the numbers over categories do not sum up to 619 because a data instance can belong to multiple categories) ![caption](https://github.com/llyx97/FETV/raw/main/Figures/content_attribute_statistics.png) ![caption](https://github.com/llyx97/FETV/raw/main/Figures/complexity_statistics.png)
false
# Visual Novel Dataset This dataset contains parsed Visual Novel scripts for training language models. The dataset consists of approximately 60 million tokens of parsed scripts. ## Dataset Structure The dataset follows a general structure for visual novel scripts: - Dialogue lines: Dialogue lines are formatted with the speaker's name followed by a colon, and the dialogue itself enclosed in quotes. For example: ``` John: "Hello, how are you?" ``` - Actions and narration: Actions and narration within the Visual Novel scripts are often enclosed in asterisks, but it's important to note that not all visual novels follow this convention. Actions and narration provide descriptions of character movements, background settings, or other narrative elements. ``` *John looked around the room, searching for answers.* ``` ## Contents - `visual-novels.txt`: This file contains all the parsed VNs concatenated within a single plaintext file. Each entry is separated with this string: ``` [ - title - {visual-novel-title-1.txt} ] ``` - `VNDB/`: This directory contains `.json` files that contain VNDB IDs for the corresponding VN's characters. Does not include unparsed VNs. - `Archives/visual-novels-parsed.tar.zst`: This archive contains the parsed VNs but with each script in a separate text file (i.e. not concatenated). - `Archives/visual-novels-unparsed.tar.zst`: This archive contains all the unparsed VNs along with the original script for the currently parsed VNs. ## Usage You can utilize this dataset to train language models, particularly for tasks related to natural language processing and text generation. By leveraging the parsed visual novel scripts, you can train models to understand dialogue structures and generate coherent responses. Additionally, the inclusion of the unparsed scripts allows for further analysis and processing. ## Contribution This dataset was gathered and parsed by the [PygmalionAI](https://huggingface.co/PygmalionAI) Data Processing Team. 
Listed below are the team members, sorted by contribution amount: - **Suikamelon**: [HuggingFace](https://huggingface.co/lemonilia) - (2,787,704 ++ 672,473 --) - **Alpin**: [HuggingFace](https://huggingface.co/alpindale) - [GitHub](https://github.com/AlpinDale) (1,170,985 ++ 345,120 --) - **Spartan**: [GitHub](https://github.com/Spartan9772) (901,046 ++ 467,915 --) - **Unlucky-AI** [GitHub](https://github.com/Unlucky-AI) (253,316 ++ 256 --) ## Citation If you use this dataset in your research or projects, please cite it appropriately. ## Acknowledgements This dataset is compiled and shared for research and educational purposes. The dataset includes parsed visual novel scripts from various sources, which are predominantly copyrighted and owned by their respective publishers and creators. The inclusion of these scripts in this dataset does not imply any endorsement or authorization from the copyright holders. We would like to express our sincere gratitude to the original copyright holders and creators of the visual novels for their valuable contributions to the art and storytelling. We respect and acknowledge their intellectual property rights. We strongly encourage users of this dataset to adhere to copyright laws and any applicable licensing restrictions when using or analyzing the provided content. It is the responsibility of the users to ensure that any use of the dataset complies with the legal requirements governing intellectual property and fair use. Please be aware that the creators and distributors of this dataset disclaim any liability or responsibility for any unauthorized or illegal use of the dataset by third parties. If you are a copyright holder or have any concerns about the content included in this dataset, please contact us at [this email address](mailto:alpin@alpindale.dev) to discuss the matter further and address any potential issues.
true
Code/data source: https://github.com/sileod/tasksource Language models can learn in context without supervision, but learning to learn in context (meta-icl) leads to better results. With symbol tuning, labels are replaced with arbitrary symbols (e.g. foo/bar), which makes learning in context a key condition for learning the instructions. We implement symbol tuning for in-context learning, as presented in the [Symbol tuning improves in-context learning](https://arxiv.org/pdf/2305.08298.pdf) paper, with tasksource classification datasets. An input is a shuffled sequence of 5 positive and 5 negative examples for the identification of a particular label (replaced with a symbol), followed by an example to label. This is the largest symbol-tuning dataset to date, with 279 datasets. Symbol tuning improves in-context learning. ``` @article{sileo2023tasksource, title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation}, author={Sileo, Damien}, url= {https://arxiv.org/abs/2301.05948}, journal={arXiv preprint arXiv:2301.05948}, year={2023} } @article{wei2023symbol, title={Symbol tuning improves in-context learning in language models}, author={Wei, Jerry and Hou, Le and Lampinen, Andrew and Chen, Xiangning and Huang, Da and Tay, Yi and Chen, Xinyun and Lu, Yifeng and Zhou, Denny and Ma, Tengyu and others}, journal={arXiv preprint arXiv:2305.08298}, year={2023} } ```
true
M3KE, or Massive Multi-Level Multi-Subject Knowledge Evaluation, is a benchmark developed to assess the knowledge acquired by large Chinese language models by evaluating their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information about M3KE, please consult our [paper](https://arxiv.org/abs/2305.10263) or visit our [GitHub](https://github.com/tjunlp-lab/M3KE) page.

## Load the data

```python
from datasets import load_dataset

ds = load_dataset(
    path="TJUNLP/M3KE",
    name="Computer Programming Language-Natural Sciences-Other"
)
print(ds)
"""
DatasetDict({
    test: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 236
    })
    dev: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 5
    })
})
"""

print(ds["test"][0])
"""
{'id': 0, 'question': '下面判断正确的是?', 'A': 'char str[10]={"china"}; 等价于 char str[10];str[]="china";', 'B': 'char *s="china"; 等价于 char *s;s="china"; ', 'C': 'char *a="china"; 等价于 char *a;*a="china";', 'D': 'char c[6]="china",d[6]="china"; 等价于 char c[6]=d[6]="china"; ', 'answer': ''}
"""
```

```
@misc{liu2023m3ke,
  title={M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models},
  author={Chuang Liu and Renren Jin and Yuqi Ren and Linhao Yu and Tianyu Dong and Xiaohan Peng and Shuting Zhang and Jianxiang Peng and Peiyi Zhang and Qingqing Lyu and Xiaowen Su and Qun Liu and Deyi Xiong},
  year={2023},
  eprint={2305.10263},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
true
### Dataset Description
- **Homepage:** https://github.com/sunnweiwei/user-satisfaction-simulation
- **Repository:** https://github.com/sunnweiwei/user-satisfaction-simulation
- **Paper:** https://arxiv.org/pdf/2105.03748.pdf
- **View records using Datasette:** [datasette-link](https://lite.datasette.io/?parquet=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fakomma%2Fuss-ratings-dataset%2Fresolve%2Fmain%2Fuss-ratings-dataset-datasette.parquet#/data/uss-ratings-dataset-datasette)

### Dataset Summary
- Dialog quality dataset, with both turn-level and dialog-level ratings provided on a scale of 1 to 5 by human annotators.
- Each task has been annotated by multiple annotators.
- Contains annotated dialogs from 4 different datasets (SGD, MultiWOZ, ReDial, CCPE).
- Total: 34,358 turns from 3,500 dialogs.

|Dataset|Dialogs|Turns |
|-------|------:|-----:|
|SGD    | 1000  | 11833|
|MWOZ   | 1000  | 10553|
|Redial | 1000  | 6792 |
|CCPE   | 500   | 5180 |

### Column Definitions

|Column             |Type   |Example Value            |Description                                    |
|-------------------|-------|-------------------------|-----------------------------------------------|
|split              | str   | CCPE;MWOZ;SGD;Redial    | dataset name                                  |
|session_idx        | int   | 1                       | dialog identifier                             |
|turn_idx           | int   | 1                       | turn identifier within a dialog               |
|tree_idx           | int   | 1                       | tree identifier within a turn (is all 1s here)|
|system             | str   | Do you like movies      | system message                                |
|user               | str   | No I don't like         | user message                                  |
|turn_scores        | list  | [3; 2; 2]               | list of turn-level quality scores from different human annotators|
|mean_turn_rating   | float | 2.33                    | mean of turn-level annotator scores           |
|mode_turn_rating   | int   | 2                       | mode of turn-level annotator scores           |
|dialog_scores      | list  | [3; 3; 3]               | list of dialog-level quality scores from different human annotators|
|mean_dialog_rating | float | 3.00                    | mean of dialog-level annotator scores         |
|mode_dialog_rating | int   | 3                       | mode of dialog-level annotator scores         |
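As a quick sketch of how the aggregate rating columns relate to the score lists (the `[3; 2; 2]` string format follows the example values above; the parsing helper is hypothetical):

```python
from statistics import mean, mode

def parse_scores(raw: str) -> list:
    """Parse a semicolon-separated score list such as "[3; 2; 2]" into ints."""
    return [int(part) for part in raw.strip("[]").split(";")]

scores = parse_scores("[3; 2; 2]")
print(round(mean(scores), 2))  # 2.33 -> mean_turn_rating
print(mode(scores))            # 2    -> mode_turn_rating
```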
true
# French Grammatical Errors

This dataset contains pairs of sentences and an explanation:

- "phrase1" is a French sentence containing a grammatical error
- "phrase2" is the same sentence without any error (please reach out if you think an error is present -- I could not see any)
- "explication" is some text explaining the grammatical error

## Release Notes

`0.1.0`

- No error category is present; you would have to infer it from the `explication` column
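Since no category column exists, one hypothetical way to infer a coarse category from the `explication` text is simple keyword matching (the keywords and category names below are assumptions for illustration, not part of the dataset):

```python
# Hypothetical keyword -> category mapping; extend as needed for your use case.
CATEGORIES = {
    "accord": "agreement",
    "conjugaison": "conjugation",
    "orthographe": "spelling",
}

def infer_category(explication: str) -> str:
    """Return a coarse error category based on keywords in the explanation."""
    text = explication.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    return "other"

print(infer_category("Erreur d'accord entre le sujet et le verbe"))  # agreement
```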
false