| dataset_name | description | prompt |
|---|---|---|
TArC | A morpho-syntactically annotated Tunisian Arabish Corpus (TArC). | Provide a detailed description of the following dataset: TArC |
Taskmaster-2 | The **Taskmaster-2** dataset consists of 17,289 dialogs in seven domains: restaurants (3276), food ordering (1050), movies (3047), hotels (2355), flights (2481), music (1602), and sports (3478). | Provide a detailed description of the following dataset: Taskmaster-2 |
Tasty Videos | A collection of 2511 recipes for zero-shot learning, recognition and anticipation. | Provide a detailed description of the following dataset: Tasty Videos |
TB-Places | TB-Places is a dataset of garden images for testing visual place recognition algorithms. It contains images with ground-truth camera poses recorded in two real gardens at different times, across four recording sessions with varying lighting conditions. | Provide a detailed description of the following dataset: TB-Places |
TCG | The **TCG** dataset is used to evaluate **Traffic Control Gesture** recognition for autonomous driving. The dataset is based on 3D body skeleton input to perform traffic control gesture classification on every time step. The dataset consists of 250 sequences from several actors, ranging from 16 to 90 seconds per sequence.
Source: [https://arxiv.org/pdf/2007.16072.pdf](https://arxiv.org/pdf/2007.16072.pdf)
Image Source: [https://github.com/againerju/tcg_recognition](https://github.com/againerju/tcg_recognition) | Provide a detailed description of the following dataset: TCG |
TCIA Test & Validation Radiotherapy CT Planning Scan | A dataset of 663 deidentified computed tomography (CT) scans acquired in routine clinical practice, together with segmentations taken from clinical practice. | Provide a detailed description of the following dataset: TCIA Test & Validation Radiotherapy CT Planning Scan |
TE141K | A new text effects dataset with 141,081 text effect/glyph pairs in total. The dataset consists of 152 professionally designed text effects rendered on glyphs, including English letters, Chinese characters, and Arabic numerals. | Provide a detailed description of the following dataset: TE141K |
Tencent ML-Images | Tencent ML-Images is a large open-source multi-label image database, including 17,609,752 training and 88,739 validation image URLs, which are annotated with up to 11,166 categories. | Provide a detailed description of the following dataset: Tencent ML-Images |
TextCaps | Contains 145k captions for 28k images. The dataset challenges a model to recognize text, relate it to its visual context, and decide what part of the text to copy or paraphrase, requiring spatial, semantic, and visual reasoning between multiple text tokens and visual entities, such as objects. | Provide a detailed description of the following dataset: TextCaps |
TextSeg | **TextSeg** is a large-scale, finely annotated, multi-purpose text detection and segmentation dataset, containing scene and design text with six types of annotations: word- and character-level bounding polygons, masks, and transcriptions.
Source: [https://github.com/SHI-Labs/Rethinking-Text-Segmentation](https://github.com/SHI-Labs/Rethinking-Text-Segmentation)
Image Source: [https://github.com/SHI-Labs/Rethinking-Text-Segmentation](https://github.com/SHI-Labs/Rethinking-Text-Segmentation) | Provide a detailed description of the following dataset: TextSeg |
Textual Visual Semantic Dataset | Extends COCO-Text [Veit et al. 2016] with information about the scene (such as objects and places appearing in the image), enabling researchers to include semantic relations between text and scene in their text spotting systems and offering a common framework for such approaches. | Provide a detailed description of the following dataset: Textual Visual Semantic Dataset |
TextVQA | TextVQA is a dataset to benchmark visual reasoning based on text in images.
TextVQA requires models to read and reason about text in images to answer questions about them. Specifically, models need to incorporate a new modality of text present in the images and reason over it to answer TextVQA questions.
Statistics
* 28,408 images from OpenImages
* 45,336 questions
* 453,360 ground truth answers | Provide a detailed description of the following dataset: TextVQA |
TextWorld KG | **TextWorld KG** is a dynamic Knowledge Graph (KG) extraction dataset. It is based on a set of text-based games generated with the TextWorld framework, which makes it possible to extract the underlying partial KG for every state, i.e., the subgraph that represents the agent’s partial knowledge of the world – what it has observed so far. All games share the same overarching theme: the agent finds itself hungry in a simple modern house with the goal of gathering ingredients and cooking a meal.
Source: [https://arxiv.org/abs/1910.09532](https://arxiv.org/abs/1910.09532) | Provide a detailed description of the following dataset: TextWorld KG |
TextZoom | **TextZoom** is a super-resolution dataset that consists of paired low-resolution – high-resolution scene text images. The images are captured in the wild by cameras with different focal lengths.
Source: [https://github.com/JasonBoy1/TextZoom](https://github.com/JasonBoy1/TextZoom)
Image Source: [https://github.com/JasonBoy1/TextZoom](https://github.com/JasonBoy1/TextZoom) | Provide a detailed description of the following dataset: TextZoom |
Texygen Platform | Texygen is a benchmarking platform to support research on open-domain text generation models. Texygen not only implements a majority of text generation models, but also covers a set of metrics that evaluate the diversity, quality, and consistency of the generated texts. The Texygen platform could help standardize research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers for their work. As a consequence, this would help improve the reproducibility and reliability of future research work in text generation. | Provide a detailed description of the following dataset: Texygen Platform |
TG-ReDial | **TG-ReDial** is a topic-guided conversational recommendation dataset for research on conversational/interactive recommender systems.
Source: [https://github.com/RUCAIBox/TG-ReDial](https://github.com/RUCAIBox/TG-ReDial)
Image Source: [https://github.com/RUCAIBox/TG-ReDial](https://github.com/RUCAIBox/TG-ReDial) | Provide a detailed description of the following dataset: TG-ReDial |
THCHS-30 | THCHS-30 is a free Chinese speech database that can be used to build a full-fledged Chinese speech recognition system. | Provide a detailed description of the following dataset: THCHS-30 |
The RobotriX | A photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems. The RobotriX consists of hyperrealistic indoor scenes explored by robot agents that also interact with objects in a visually realistic manner in the simulated world. | Provide a detailed description of the following dataset: The RobotriX |
VLOG Dataset | A large collection of interaction-rich video data which are annotated and analyzed. | Provide a detailed description of the following dataset: VLOG Dataset |
ThirdToFirst | Two datasets (synthetic and natural/real) containing simultaneously recorded egocentric and exocentric videos. | Provide a detailed description of the following dataset: ThirdToFirst |
TicketTalk | A movie ticketing dialog dataset with 23,789 annotated conversations. The movie ticketing conversations range from completely open-ended and unrestricted to more structured, in terms of their knowledge base, discourse features, and number of turns. In qualitative human evaluations, model-generated responses trained on just 10,000 TicketTalk dialogs were rated to "make sense" 86.5 percent of the time, almost the same as human responses in the same contexts. | Provide a detailed description of the following dataset: TicketTalk |
TikTok Comments | TikTok Comments is a domain-specific lexicon based on a dataset of TikTok comments. | Provide a detailed description of the following dataset: TikTok Comments |
Tilde MODEL Corpus | The Tilde MODEL Corpus is a collection of multilingual corpora for European languages, particularly focused on the smaller languages. The collected resources have been cleaned, aligned, and formatted into the standard TMX corpus format, usable for developing new language technology products and services.
It contains over 10M segments of multilingual open data.
The data has been collected from sites allowing free use and reuse of its content, as well as from Public Sector web sites. | Provide a detailed description of the following dataset: Tilde MODEL Corpus |
Tilt-RGBD | Includes considerable roll and pitch camera motion. | Provide a detailed description of the following dataset: Tilt-RGBD |
Time-Lapse Hyperspectral Radiance Images | These sequences of hyperspectral radiance images have been taken from scenes undergoing natural illumination changes. In each scene, hyperspectral images were acquired at about 1-hour intervals. | Provide a detailed description of the following dataset: Time-Lapse Hyperspectral Radiance Images |
TimeTravel | TimeTravel contains 29,849 counterfactual rewritings, each with the original story, a counterfactual event, and a human-generated revision of the original story that is compatible with the counterfactual event. | Provide a detailed description of the following dataset: TimeTravel |
TIM-Tremor | Contains static tasks as well as a multitude of more dynamic tasks, involving larger motion of the hands. The dataset has 55 tremor patient recordings together with: associated ground truth accelerometer data from the most affected hand, RGB video data, and aligned depth data. | Provide a detailed description of the following dataset: TIM-Tremor |
HAKE | HAKE is built upon existing activity datasets and provides human body part level atomic action labels (Part States). | Provide a detailed description of the following dataset: HAKE |
TinyPerson | **TinyPerson** is a benchmark for tiny object detection in a long distance and with massive backgrounds. The images in TinyPerson are collected from the Internet. First, videos with a high resolution are collected from different websites. Second, images from the video are sampled every 50 frames. Then images with a certain repetition (homogeneity) are deleted, and the resulting images are annotated with 72,651 objects with bounding boxes by hand. | Provide a detailed description of the following dataset: TinyPerson |
TinySocial | TinySocial is a dataset to enable research on Social Visual Question Answering. | Provide a detailed description of the following dataset: TinySocial |
TinyVIRAT | TinyVIRAT contains natural low-resolution activities. The actions in TinyVIRAT videos have multiple labels and they are extracted from surveillance videos which makes them realistic and more challenging. | Provide a detailed description of the following dataset: TinyVIRAT |
TITAN | TITAN consists of 700 labeled video-clips (with odometry) captured from a moving vehicle on highly interactive urban traffic scenes in Tokyo. The dataset includes 50 labels including vehicle states and actions, pedestrian age groups, and targeted pedestrian action attributes that are organized hierarchically corresponding to atomic, simple/complex-contextual, transportive, and communicative actions. | Provide a detailed description of the following dataset: TITAN |
TJU-DHD | **TJU-DHD** is a high-resolution dataset for object detection and pedestrian detection. The dataset contains 115,354 high-resolution images (52% of the images have a resolution of 1624×1200 pixels and 48% have a resolution of at least 2,560×1,440 pixels) and 709,330 labelled objects in total, with a large variance in scale and appearance.
Source: [https://github.com/tjubiit/TJU-DHD](https://github.com/tjubiit/TJU-DHD)
Image Source: [https://github.com/tjubiit/TJU-DHD](https://github.com/tjubiit/TJU-DHD) | Provide a detailed description of the following dataset: TJU-DHD |
TLL | Contains 6016 image-pairs from the wild, shedding light upon a rich and diverse set of criteria employed by human beings. | Provide a detailed description of the following dataset: TLL |
TLP | A new long video dataset and benchmark for single object tracking. The dataset consists of 50 HD videos from real world scenarios, encompassing a duration of over 400 minutes (676K frames), making it more than 20 folds larger in average duration per sequence and more than 8 folds larger in terms of total covered duration, as compared to existing generic datasets for visual tracking. | Provide a detailed description of the following dataset: TLP |
TME Motorway Dataset | The “Toyota Motor Europe (TME) Motorway Dataset” is composed of 28 clips totalling approximately 27 minutes (30,000+ frames) with vehicle annotations. Annotations were semi-automatically generated using laser-scanner data. Image sequences were selected from acquisitions made on North Italian motorways in December 2011. This selection includes variable traffic situations, numbers of lanes, road curvature, and lighting, covering most of the conditions present in the complete acquisition.
The dataset comprises:
- Image acquisition: stereo, 20 Hz frequency, 1024x768 grayscale losslessly compressed images, 32° horizontal field of view, Bayer-coded color information (in OpenCV use the CV_BayerGB2GRAY and CV_BayerGB2BGR color conversion codes; note that the left camera was mounted upside down, so convert to color/grayscale BEFORE flipping the image; see the sketch after this entry). A checkerboard calibration sequence is also made available.
- Laser-scanner generated vehicle annotation and classification (car/truck).
- A software evaluation toolkit (C++ source code). | Provide a detailed description of the following dataset: TME Motorway Dataset |
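The note above implies a specific processing order for the raw left-camera frames. Below is a minimal Python/OpenCV sketch of that order, assuming raw Bayer-patterned frames on disk; the file name is a hypothetical placeholder, and this is not part of the dataset's evaluation toolkit.

```python
import cv2

# Hypothetical path to a raw Bayer-patterned left-camera frame.
raw = cv2.imread("left_000001.pgm", cv2.IMREAD_GRAYSCALE)

# Demosaic BEFORE flipping: flipping first would scramble the Bayer mosaic.
gray = cv2.cvtColor(raw, cv2.COLOR_BayerGB2GRAY)  # grayscale output
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerGB2BGR)    # color output

# The left camera was mounted upside down, so flip both axes after conversion.
gray = cv2.flip(gray, -1)
bgr = cv2.flip(bgr, -1)
```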
Topical-Chat | A knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles. | Provide a detailed description of the following dataset: Topical-Chat |
TopLogo-10 | Collected from the logos of the 10 most popular clothing/wearable brands, captured in rich visual context. | Provide a detailed description of the following dataset: TopLogo-10 |
Topology Optimization Dataset | TOP is a synthetic dataset for topology optimization generated using ToPy. The dataset has 10,000 objects, each consisting of 100 iterations of the optimization process for a problem defined on a regular 40 × 40 grid.
Source: [https://arxiv.org/pdf/1709.09578.pdf](https://arxiv.org/pdf/1709.09578.pdf)
Image Source: [https://github.com/ISosnovik/top](https://github.com/ISosnovik/top) | Provide a detailed description of the following dataset: Topology Optimization Dataset |
Toronto-3D | **Toronto-3D** is a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada for semantic segmentation. The dataset covers approximately 1 km of road and consists of about 78.3 million points. Each point has 10 attributes and is classified into one of 8 labelled object classes.
Source: [https://github.com/WeikaiTan/Toronto-3D](https://github.com/WeikaiTan/Toronto-3D)
Image Source: [https://github.com/WeikaiTan/Toronto-3D](https://github.com/WeikaiTan/Toronto-3D) | Provide a detailed description of the following dataset: Toronto-3D |
Torque | Torque is an English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships. | Provide a detailed description of the following dataset: Torque |
Touchdown Dataset | Touchdown is a corpus for executing navigation instructions and resolving spatial descriptions in visual real-world environments. The task is to follow instructions to a goal position and there find a hidden object, Touchdown the bear.
Source: [https://github.com/lil-lab/touchdown](https://github.com/lil-lab/touchdown)
Image Source: [https://github.com/lil-lab/touchdown](https://github.com/lil-lab/touchdown) | Provide a detailed description of the following dataset: Touchdown Dataset |
Toulouse Vanishing Points Dataset | The Toulouse Vanishing Points Dataset is a public database of photographs of Manhattan scenes taken with an iPad Air 1. The purpose of this dataset is the evaluation of vanishing point estimation algorithms. Its originality is the addition of Inertial Measurement Unit (IMU) data synchronized with the camera in the form of rotation matrices. Moreover, contrary to existing works, which provide reference vanishing points as single points, it provides computed uncertainty regions. | Provide a detailed description of the following dataset: Toulouse Vanishing Points Dataset |
Tour20 | Contains 140 videos with multiple human created summaries, which were acquired in a controlled experiment. | Provide a detailed description of the following dataset: Tour20 |
ToyADMOS | **ToyADMOS** dataset is a machine operating sounds dataset of approximately 540 hours of normal machine operating sounds and over 12,000 samples of anomalous sounds collected with four microphones at a 48kHz sampling rate, prepared by Yuma Koizumi and members in NTT Media Intelligence Laboratories. The ToyADMOS dataset is designed for anomaly detection in machine operating sounds (ADMOS) research. It is designed for three tasks of ADMOS: product inspection (toy car), fault diagnosis for fixed machine (toy conveyor), and fault diagnosis for moving machine (toy train).
Source: [https://github.com/YumaKoizumi/ToyADMOS-dataset](https://github.com/YumaKoizumi/ToyADMOS-dataset) | Provide a detailed description of the following dataset: ToyADMOS |
Toyota Smarthome Dataset | A large scale dataset with daily-living activities performed in a natural manner. | Provide a detailed description of the following dataset: Toyota Smarthome Dataset |
TPIC17 | Image dataset with about 600K Flickr photos. | Provide a detailed description of the following dataset: TPIC17 |
TRACT | TRACT is a small scale manually annotated corpus for abuse classification problem. | Provide a detailed description of the following dataset: TRACT |
Traditional Chinese Landscape Painting Dataset | This dataset consists of 2,192 high-quality traditional Chinese landscape paintings (中国山水画). All paintings are sized 512x512, from the following sources:
* Princeton University Art Museum, 362 paintings
* Harvard University Art Museum, 101 paintings
* Metropolitan Museum of Art, 428 paintings
* Smithsonian's Freer Gallery of Art, 1,301 paintings
Source: [https://github.com/alicex2020/Chinese-Landscape-Painting-Dataset](https://github.com/alicex2020/Chinese-Landscape-Painting-Dataset)
Image Source: [https://github.com/alicex2020/Chinese-Landscape-Painting-Dataset](https://github.com/alicex2020/Chinese-Landscape-Painting-Dataset) | Provide a detailed description of the following dataset: Traditional Chinese Landscape Painting Dataset |
CVL Traffic Signs Dataset | A video dataset for recognising traffic signs hosted with the first IEEE Video and Image Processing (VIP) Cup within the IEEE Signal Processing Society. | Provide a detailed description of the following dataset: CVL Traffic Signs Dataset |
Trans10K | A large-scale dataset for transparent object segmentation, named Trans10K, consisting of 10,428 images of real scenarios with careful manual annotations, which is 10 times larger than existing datasets. | Provide a detailed description of the following dataset: Trans10K |
Transient Biometrics Nails Dataset | An extended version of an experimental dataset, called the **Transient Biometrics Nails Dataset** (TBND), was created. TBND is composed of images of the right index finger. During acquisition the subject was instructed to lay their finger over a flat white surface, and a simple point-and-shoot camera was used to acquire an image without the use of a flash. No explicit instructions with respect to the force applied were given, and thus the results incorporate arbitrary force differences between users and capture sessions. Acquisition was thus done in a semi-controlled environment; apart from the white background and indirect lighting, the images present variation with respect to scale, focal plane, and illumination. The dataset consists of three subsets, each one comprising the same 93 subjects but varying in acquisition date. The first subset, D01, consists of images acquired on the first acquisition day. The second subset, D02, is composed of images acquired one day later. The third subset, D30, was acquired one month after the first acquisition date. Given acquisition restrictions, the acquisitions of D30 have up to two days’ tolerance. This represents a massive expansion of the originally collected dataset, TBND V01. | Provide a detailed description of the following dataset: Transient Biometrics Nails Dataset |
TREK-100 | The dataset is composed of 100 video sequences densely annotated with 60K bounding boxes, 17 sequence attributes, 13 action verb attributes and 29 target object attributes. | Provide a detailed description of the following dataset: TREK-100 |
T-REx | A dataset of large scale alignments between Wikipedia abstracts and Wikidata triples. T-REx consists of 11 million triples aligned with 3.09 million Wikipedia abstracts (6.2 million sentences). | Provide a detailed description of the following dataset: T-REx |
TriageSQL | **TriageSQL** is a cross-domain text-to-SQL question intention classification benchmark that requires models to distinguish four types of unanswerable questions from answerable questions.
Source: [https://github.com/chatc/TriageSQL](https://github.com/chatc/TriageSQL) | Provide a detailed description of the following dataset: TriageSQL |
TrMor2018 | A new high accuracy Turkish morphology dataset. | Provide a detailed description of the following dataset: TrMor2018 |
TSAC | Tunisian Sentiment Analysis Corpus (TSAC) is a Tunisian Dialect corpus of 17,000 comments from Facebook. | Provide a detailed description of the following dataset: TSAC |
TTPLA | **TTPLA** is a public dataset consisting of aerial images of Transmission Towers (TTs) and Power Lines (PLs). It can be used for detection and segmentation of transmission towers and power lines. It consists of 1,100 images with a resolution of 3,840×2,160 pixels, as well as 8,987 manually labelled instances of TTs and PLs.
Source: [https://github.com/r3ab/ttpla_dataset](https://github.com/r3ab/ttpla_dataset)
Image Source: [https://github.com/r3ab/ttpla_dataset](https://github.com/r3ab/ttpla_dataset) | Provide a detailed description of the following dataset: TTPLA |
TTS-Portuguese Corpus | The dataset has 10.5 hours of speech from a single speaker. | Provide a detailed description of the following dataset: TTS-Portuguese Corpus |
TUM Visual-Inertial Dataset | A novel dataset with a diverse set of sequences in different scenes for evaluating VI odometry. It provides camera images with 1024x1024 resolution at 20 Hz, high dynamic range and photometric calibration. | Provide a detailed description of the following dataset: TUM Visual-Inertial Dataset |
TUNIZI | A sentiment analysis Tunisian Arabizi Dataset, collected from social networks, preprocessed for analytical studies and annotated manually by Tunisian native speakers. | Provide a detailed description of the following dataset: TUNIZI |
TURBID Dataset | TURBID is an open image dataset generated to contribute to the underwater research area. It consists of a collection of five different subsets of degraded images with their respective ground truth. | Provide a detailed description of the following dataset: TURBID Dataset |
Turing Change Point Dataset | Specifically designed for the evaluation of change point detection algorithms, consisting of 37 time series from various domains. | Provide a detailed description of the following dataset: Turing Change Point Dataset |
TuSimple Lane | TuSimple Lane is an extension of the [TuSimple](/dataset/tusimple) dataset with 14,336 lane boundaries annotations. Each lane boundary in the dataset is annotated using 7 different classes such as “Single Dashed”, “Double Dashed” or “Single White Continuous”. | Provide a detailed description of the following dataset: TuSimple Lane |
TutorialBank | TutorialBank is a publicly available dataset which aims to facilitate NLP education and research. The dataset consists of links to over 6,300 high-quality resources on NLP and related fields. The corpus’s magnitude, manual collection, and focus on annotation for education in addition to research differentiate it from other corpora. | Provide a detailed description of the following dataset: TutorialBank |
TVC | TV show Caption (**TVC**) is a large-scale multimodal captioning dataset, containing 261,490 caption descriptions paired with 108,965 short video moments. TVC is unique in that its captions may also describe dialogues/subtitles, while captions in other datasets describe only the visual content.
Source: [https://tvr.cs.unc.edu/tvc.html](https://tvr.cs.unc.edu/tvc.html)
Image Source: [https://github.com/jayleicn/TVCaption](https://github.com/jayleicn/TVCaption) | Provide a detailed description of the following dataset: TVC |
TVPR | The TVPR (Top View Person Re-identification) dataset stores depth frames (640x480) collected using an Asus Xtion Pro Live in a top-view configuration. This setup choice is primarily due to the reduction of occlusions, and it also has the advantage of being privacy preserving, because faces are not recorded by the camera. The use of an RGB-D camera makes it possible to extract anthropometric features for the recognition of people passing under the camera. | Provide a detailed description of the following dataset: TVPR |
TVSeries | A realistic dataset composed of 27 episodes from 6 popular TV series. The dataset spans over 16 hours of footage annotated with 30 action classes, totaling 6,231 action instances. | Provide a detailed description of the following dataset: TVSeries |
TweetEval | TweetEval introduces an evaluation framework consisting of seven heterogeneous Twitter-specific classification tasks. | Provide a detailed description of the following dataset: TweetEval |
Twitch-FIFA | **Twitch-FIFA** is a video-context, many-speaker dialogue dataset based on live-broadcast soccer game videos and chats from Twitch.tv. This dataset can be used to train visually-grounded dialogue models that generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history.
Source: [https://github.com/ramakanth-pasunuru/video-dialogue](https://github.com/ramakanth-pasunuru/video-dialogue) | Provide a detailed description of the following dataset: Twitch-FIFA |
Twitter100k | Twitter100k is a large-scale dataset for weakly supervised cross-media retrieval. | Provide a detailed description of the following dataset: Twitter100k |
Twitter Conversations Dataset | This dataset is used for the task of conversational document prediction. The dataset includes conversations that occurred between users and customer care agents in 25 organizations on the Twitter platform. Each conversation ends with a customer care agent providing a URL to a document to resolve the issue the user is facing. The task is to predict the document given a dialog context.
The train, dev and test datasets include 10000, 525 and 500 conversations respectively. | Provide a detailed description of the following dataset: Twitter Conversations Dataset |
Twitter Cyberthreat Detection Dataset | Twitter Cyberthreat Detection Dataset is a dataset that contains tweets from two sets of accounts related to cybersecurity. The tweets are annotated with different information such as whether they contain security-related information and named entities. | Provide a detailed description of the following dataset: Twitter Cyberthreat Detection Dataset |
Twitter Flood | This dataset contains two subsets of flood images from Twitter: The Harz17 dataset comprises images from tweets containing flood-related keywords during the occurrence of a flood in the Harz region in Germany in July 2017. Similarly, the Rhine18 dataset comprises images related to a flood of the river Rhine in January 2018.
Source: [https://github.com/cvjena/twitter-flood-dataset](https://github.com/cvjena/twitter-flood-dataset) | Provide a detailed description of the following dataset: Twitter Flood |
TURL | Twitter News URL Corpus is the largest human-labeled paraphrase corpus to date, with 51,524 sentence pairs, and the first cross-domain benchmark for automatic paraphrase identification. | Provide a detailed description of the following dataset: TURL |
TWT-16 | The TWT16 dataset contains ~30k conversations on Twitter, collected from January to June 2016.
Source: [https://arxiv.org/pdf/1903.07319.pdf](https://arxiv.org/pdf/1903.07319.pdf)
Image Source: [https://github.com/zengjichuan/Topic_Disc](https://github.com/zengjichuan/Topic_Disc) | Provide a detailed description of the following dataset: TWT-16 |
UAV-GESTURE | UAV-GESTURE is a dataset for UAV control and gesture recognition. It is an outdoor-recorded video dataset of UAV commanding signals, with 13 gestures suitable for basic UAV navigation and command, drawn from general aircraft handling and helicopter handling signals. It contains 119 high-definition video clips consisting of 37,151 frames. | Provide a detailed description of the following dataset: UAV-GESTURE |
UAVid | UAVid is a high-resolution UAV semantic segmentation dataset that brings new challenges, including large scale variation, moving object recognition, and temporal consistency preservation. The dataset consists of 30 video sequences capturing 4K high-resolution images in slanted views. In total, 300 images have been densely labeled with 8 classes for the semantic labeling task. | Provide a detailed description of the following dataset: UAVid |
UBC3V Dataset | ~6 million synthetic depth frames for pose estimation from multiple cameras. | Provide a detailed description of the following dataset: UBC3V Dataset |
UDC | **Ubuntu Dialogue Corpus** (**UDC**) is a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. | Provide a detailed description of the following dataset: UDC |
UCFRep | The **UCFRep** dataset contains 526 annotated repetitive action videos. This dataset is built from the action recognition dataset UCF101.
Source: [https://github.com/Xiaodomgdomg/Deep-Temporal-Repetition-Counting](https://github.com/Xiaodomgdomg/Deep-Temporal-Repetition-Counting) | Provide a detailed description of the following dataset: UCFRep |
UCLA Protest Image | 40,764 images (11,659 protest images and hard negatives) with various annotations of visual attributes and sentiments. | Provide a detailed description of the following dataset: UCLA Protest Image |
UC Merced Land Use Dataset | This is a 21 class land use image dataset meant for research purposes.
There are 100 images for each of the following classes:
- agricultural
- airplane
- baseballdiamond
- beach
- buildings
- chaparral
- denseresidential
- forest
- freeway
- golfcourse
- harbor
- intersection
- mediumresidential
- mobilehomepark
- overpass
- parkinglot
- river
- runway
- sparseresidential
- storagetanks
- tenniscourt
Each image measures 256x256 pixels.
The images were manually extracted from large images from the USGS National Map Urban Area Imagery collection for various urban areas around the country. The pixel resolution of this public domain imagery is 1 foot. | Provide a detailed description of the following dataset: UC Merced Land Use Dataset |
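As a quick illustration of the layout described above, here is a minimal Python sketch that indexes the dataset, assuming the common distribution layout of one subdirectory per class (named as in the list above) with 100 images each; the root path is a hypothetical placeholder.

```python
from collections import Counter
from pathlib import Path

# Hypothetical root containing one subdirectory per land-use class.
root = Path("UCMerced_LandUse/Images")

# Collect (image_path, class_label) pairs, taking the label from the parent folder name.
samples = [(p, p.parent.name) for p in sorted(root.glob("*/*")) if p.is_file()]
counts = Counter(label for _, label in samples)

print(f"{len(counts)} classes, {len(samples)} images")  # expected: 21 classes, 2100 images
```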
UFDD | Unconstrained Face Detection Dataset (UFDD) aims to fuel further research in unconstrained face detection. | Provide a detailed description of the following dataset: UFDD |
UFPR-AMR | This dataset contains 2,000 images taken from inside a warehouse of the Energy Company of Paraná (Copel), which directly serves more than 4 million consuming units in the Brazilian state of Paraná.
The images were acquired with three different cameras and are available in the JPG format with a resolution between 2,340 × 4,160 and 3,120 × 4,160 pixels. The dataset is split into three sets: training (800 images), validation (400 images) and testing (800 images).
Every image has the following annotations available in a text file: the camera in which the image was taken, the counter’s position (x,y,w,h) and reading, as well as the position of each digit. All counters of the dataset (regardless of meter type) have 5 digits, and thus 10,000 digits were manually annotated. | Provide a detailed description of the following dataset: UFPR-AMR |
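To make the counter annotation concrete, here is a minimal Python/OpenCV sketch that crops the counter region from a meter image, assuming (x, y, w, h) denotes the top-left corner plus width and height in pixels (the coordinate convention, file names, and function name are assumptions, not the dataset's official tooling).

```python
import cv2

def crop_counter(image_path: str, x: int, y: int, w: int, h: int):
    """Return the counter region of a meter image, given its (x, y, w, h) annotation."""
    img = cv2.imread(image_path)   # load the full meter image as a NumPy array
    return img[y:y + h, x:x + w]   # crop rows (y) then columns (x)

# Hypothetical usage with placeholder file name and annotation values.
counter = crop_counter("meter_0001.jpg", 100, 200, 400, 120)
```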
UG^2 | Contains three difficult real-world scenarios: uncontrolled videos taken by UAVs and manned gliders, as well as controlled videos taken on the ground. Over 160,000 annotated frames for hundreds of ImageNet classes are available, which are used for baseline experiments that assess the impact of known and unknown image artifacts and other conditions on common deep learning-based object classification approaches. | Provide a detailed description of the following dataset: UG^2 |
UIT-ViNames | This dataset comprises over 26,000 full names annotated with genders. | Provide a detailed description of the following dataset: UIT-ViNames |
UIT-ViNewsQA | UIT-ViNewsQA is a new corpus for the Vietnamese language to evaluate healthcare reading comprehension models. The corpus comprises 22,057 human-generated question-answer pairs. Crowd-workers create the questions and their answers based on a collection of over 4,416 online Vietnamese healthcare news articles, where the answers comprise spans extracted from the corresponding articles. | Provide a detailed description of the following dataset: UIT-ViNewsQA |
UIT-ViQuAD | A new dataset for the low-resource language Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. | Provide a detailed description of the following dataset: UIT-ViQuAD |
UMC005 English-Urdu | UMC005 English-Urdu is a parallel corpus of texts in English and Urdu language with sentence alignments. The corpus can be used for experiments with statistical machine translation.
The texts come from four different sources:
- Quran
- Bible
- Penn Treebank (Wall Street Journal)
- Emille corpus | Provide a detailed description of the following dataset: UMC005 English-Urdu |
UMDFaces | UMDFaces is a face dataset divided into two parts:
* Still Images - 367,888 face annotations for 8,277 subjects.
* Video Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.
*Part 1 - Still Images*
The dataset contains 367,888 face annotations for 8,277 subjects divided into 3 batches. The annotations contain human curated bounding boxes for faces and estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
*Part 2 - Video Frames*
The second part contains 3,735,476 annotated video frames extracted from a total of 22,075 videos of 3,107 subjects. The annotations contain the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network. | Provide a detailed description of the following dataset: UMDFaces |
UIEB | Includes 950 real-world underwater images, 890 of which have the corresponding reference images. | Provide a detailed description of the following dataset: UIEB |
UniMiB SHAR | Includes 11,771 samples of both human activities and falls performed by 30 subjects aged 18 to 60 years. Samples are divided into 17 fine-grained classes grouped into two coarse-grained classes: one containing samples of 9 types of activities of daily living (ADL) and the other containing samples of 8 types of falls. The dataset is stored with all the information needed to select samples according to different criteria, such as type of ADL, age, gender, and so on. | Provide a detailed description of the following dataset: UniMiB SHAR |
United Nations Parallel Corpus | The first parallel corpus composed from United Nations documents published by the original data creator. The parallel corpus presented consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish. | Provide a detailed description of the following dataset: United Nations Parallel Corpus |
MultiUN | The MultiUN parallel corpus is extracted from the United Nations website, and then cleaned and converted to XML at the Language Technology Lab of DFKI GmbH (LT-DFKI), Germany. The documents were published by the UN from 2000 to 2009. | Provide a detailed description of the following dataset: MultiUN |
Unite the People | Unite The People is a dataset for 3D body estimation. The images come from the Leeds Sports Pose dataset and its extended version, as well as single-person tagged people from the MPII Human Pose Dataset. The images are labeled with different types of annotations, such as segmentation labels, pose, or 3D. | Provide a detailed description of the following dataset: Unite the People |
UPIQ | Contains over 4,000 images created by realigning and merging existing HDR and standard-dynamic-range (SDR) datasets. | Provide a detailed description of the following dataset: UPIQ |
Urban Dict spelling variant | **Urban Dict spelling variant** is a variant spelling dataset for use in NLP research in the informal domain. It consists of around 25k variant spelling pairs from UrbanDictionary. | Provide a detailed description of the following dataset: Urban Dict spelling variant |
Urban Environments | The Urban Environments dataset is a dataset of 20 land use classes across 300 European cities paired with satellite imagery data. | Provide a detailed description of the following dataset: Urban Environments |
UrbanLoco | UrbanLoco is a mapping/localization dataset collected in highly-urbanized environments with a full sensor-suite. The dataset includes 13 trajectories collected in San Francisco and Hong Kong, covering a total length of over 40 kilometers. | Provide a detailed description of the following dataset: UrbanLoco |