NCI
Contains a wide range of texts in Irish, including fiction, news reports, informative texts and official documents. Kilgarriff, A., Rundell, M., and Uí Dhonnchadha, E. (2006). Efficient corpus development for lexicography: building the New Corpus for Ireland. Language Resources and Evaluation, 40:127–152. https://link.springer.com/article/10.1007/s10579-006-9011-7
Provide a detailed description of the following dataset: NCI
Irish Wikipedia
Text from Irish Wikipedia, an online encyclopedia.
Provide a detailed description of the following dataset: Irish Wikipedia
Hearthstone
This dataset contains card descriptions of the card game Hearthstone and the code that implements them. These are obtained from the open-source implementation Hearthbreaker (https://github.com/danielyule/hearthbreaker).
Provide a detailed description of the following dataset: Hearthstone
Unsplash_1k
Inpainting networks are typically benchmarked on samples from the Places2 dataset. However, that dataset does not have high-resolution images for evaluation purposes. Instead, we use images from the Unsplash-Lite Dataset, which contains 25k high-resolution nature-themed photos. We randomly sampled 1,000 images from the dataset. Each image is resized and cropped to 1024x1024 (as sketched below), and a set of masks is generated with thin, medium, and thick brush strokes, using the methodology described in LaMa. The purpose of this dataset is to serve as a test set for evaluating inpainting performance on high-resolution natural images.
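A minimal sketch of the resize-and-crop step, assuming PIL (the mask-generation step follows LaMa and is omitted here; the helper name is hypothetical, not the dataset authors' own code):

```python
from PIL import Image

def resize_center_crop(path, size=1024):
    """Resize the shorter side to `size`, then center-crop to size x size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))
```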
Provide a detailed description of the following dataset: Unsplash_1k
Snips-SmartLights
The SmartLights benchmark from Snips tests the capability of controlling lights in different rooms. It consists of 1,660 requests which are split into five partitions for a 5-fold evaluation. A sample command could be: “please change the [bedroom] lights to [red]” or “i’d like the [living room] lights to be at [twelve] percent”
Provide a detailed description of the following dataset: Snips-SmartLights
Snips-SmartSpeaker
The SmartSpeaker benchmark tests the performance of reacting to music player commands in English as well as in French. Its difficulty lies in the many artists and music tracks with uncommon names in the commands, like “play music by [a boogie wit da hoodie]” or “I’d like to listen to [Kinokoteikoku]”.
Provide a detailed description of the following dataset: Snips-SmartSpeaker
SVRT
The Synthetic Visual Reasoning Test (SVRT) is a series of 23 classification problems involving images of randomly generated shapes. See “Comparing machines and humans on a visual categorization test”: https://www.pnas.org/doi/10.1073/pnas.1109168108
Provide a detailed description of the following dataset: SVRT
AOM-CTC
This is the current video sequence set from the Alliance for Open Media Common Test Conditions (AOM-CTC).
Provide a detailed description of the following dataset: AOM-CTC
AnoShift
AnoShift is a large-scale anomaly detection benchmark which focuses on splitting the test data based on its temporal distance to the training set, introducing three testing splits: IID, NEAR, and FAR. This testing scenario captures the performance degradation over time of anomaly detection methods, from classical models to masked language models. The AnoShift benchmark aims to enable a better estimate of an anomaly detection model’s performance under the natural distribution shifts that occur in the input over time, closer to real-world performance, leading to more robust anomaly detection algorithms. The benchmark is based on the Kyoto-2016 dataset (https://www.takakura.com/Kyoto_data/).
Provide a detailed description of the following dataset: AnoShift
DaNewsroom
The first large-scale non-English language dataset specifically curated for automatic summarisation. The document-summary pairs are news articles and manually written summaries in the Danish language.
Provide a detailed description of the following dataset: DaNewsroom
Mechanical MNIST – Distribution Shift
The Mechanical MNIST – Distribution Shift dataset contains the results of finite element simulations of heterogeneous material subject to large deformation due to equibiaxial extension at a fixed boundary displacement of d = 7.0. The result provided in this dataset is the change in strain energy after this equibiaxial extension. The Mechanical MNIST dataset is generated by converting the MNIST bitmap images (28x28 pixels, range 0–255) to 2D heterogeneous blocks of material (28x28 unit squares) with modulus varying in the range 1–s. The original bitmap images are sourced from the MNIST Digits dataset (http://www.pymvpa.org/datadb/mnist.html), which corresponds to Mechanical MNIST – MNIST, and from the EMNIST Letters dataset (https://www.nist.gov/itl/products-and-services/emnist-dataset), which corresponds to Mechanical MNIST – EMNIST Letters.

The Mechanical MNIST – Distribution Shift dataset is specifically designed to demonstrate three types of data distribution shift: (1) covariate shift, (2) mechanism shift, and (3) sampling bias, in all of which the training and testing environments are drawn from different distributions. For each type of distribution shift, we have one dataset generated from the Mechanical MNIST bitmaps and one from the Mechanical MNIST – EMNIST Letters bitmaps. For the covariate shift dataset, the training data are collected from two environments (2500 samples from s = 100, and 2500 samples from s = 90), and the test data are collected from two additional environments (2000 samples from s = 75, and 2000 samples from s = 50). For the mechanism shift dataset, the training data are identical to the training data in the covariate shift dataset (i.e., 2500 samples from s = 100, and 2500 samples from s = 90), and the test datasets are from two additional environments (2000 samples from s = 25, and 2000 samples from s = 10). For the sampling bias dataset, each datapoint is selected from the broader MNIST and EMNIST input bitmaps with a probability controlled by a parameter r. The training data are collected from two environments (9800 from r = 15, and 200 from r = -2), and the test data are collected from three different environments (2000 from r = -5, 2000 from r = -10, and 2000 from r = 1). Thus, in the end we have 6 benchmark datasets with multiple training and testing environments in each; these environments are summarized in the sketch below.

The enclosed document “folder_description.pdf” shows the organization of each zipped folder provided on this page. The code to reproduce these simulations is available on GitHub (https://github.com/elejeune11/Mechanical-MNIST/blob/master/generate_dataset/Equibiaxial_Extension_FEA_test_FEniCS.py).
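A compact summary of the environments above as a Python mapping (a hedged sketch; the key names are illustrative, not the dataset's own):

```python
# Sample counts per environment, transcribed from the description above.
# Keys under "train"/"test" are the environment parameter (modulus upper
# bound s, or sampling-bias parameter r); values are sample counts.
SPLITS = {
    "covariate_shift": {"train": {100: 2500, 90: 2500},
                        "test": {75: 2000, 50: 2000}},
    "mechanism_shift": {"train": {100: 2500, 90: 2500},
                        "test": {25: 2000, 10: 2000}},
    "sampling_bias":   {"train": {15: 9800, -2: 200},
                        "test": {-5: 2000, -10: 2000, 1: 2000}},
}
```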
Provide a detailed description of the following dataset: Mechanical MNIST – Distribution Shift
GPI corpus
The GPI Corpus is a collection of 1,043 privacy laws, regulations, and guidelines ("GPIs") covering 182 jurisdictions around the world. These documents are provided in two file formats (i.e., PDF showing the original formatting on the source website and TXT containing just the text of the GPI) and, in some cases, in multiple languages (i.e., the original language(s) and an English translation).
Provide a detailed description of the following dataset: GPI corpus
TFH_Annotated_Dataset
## **Dataset Introduction**

TFH_Annotated_Dataset is an annotated patent dataset pertaining to *thin film head* technology in hard disks. To the best of our knowledge, this is the second labeled patent dataset publicly available in the technology management domain that annotates both entities and the semantic relations between entities; the first one is [1]. The well-crafted information schema used for patent annotation contains 17 types of entities and 15 types of semantic relations, as shown below.

**Table 1** The specification of entity types

| Type | Comment | Example |
| ------------------ | ------------------------------------------- | ------------------------------------------------------------ |
| physical flow | substance that flows freely | The **etchant solution** has a suitable solvent additive such as glycerol or methyl cellulose |
| information flow | information data | A camera using a film having a magnetic surface for recording **magnetic data** thereon |
| energy flow | entity relevant to energy | Conductor is utilized for producing **writing flux** in magnetic yoke |
| measurement | method of measuring something | The curing step takes place at the substrate **temperature** less than 200.degree |
| value | numerical amount | The curing step takes place at the substrate temperature less than **200.degree** |
| location | place or position | The legs are thinner near the pole tip than in the **back gap region** |
| state | particular condition at a specific time | The MR elements are biased to operate in a **magnetically unsaturated mode** |
| effect | change caused by an innovation | Magnetic disk system permits **accurate alignment** of magnetic head with spaced tracks |
| function | manufacturing technique or activity | A magnetic head having **highly efficient write and read functions** is thereby obtained |
| shape | the external form or outline of something | **Recess** is filled with non-magnetic material such as glass |
| component | a part or element of a machine | A pole face of **yoke** is adjacent edge of element remote from surface |
| attribution | a quality or feature of something | A **pole face** of yoke is adjacent edge of element remote from surface |
| consequence | the result caused by something or some activity | This prevents the slider substrate from **electrostatic damage** |
| system | a set of things working together as a whole | A **digital recording system** utilizing a magnetoresistive transducer in a magnetic recording head |
| material | the matter from which a thing is made | Interlayer may comprise material such as **Ta** |
| scientific concept | terminology used in scientific theory | **Peak intensity ratio** represents an amount hydrophilic radical |
| other | does not belong to the above entity types | **Pressure distribution** across air bearing surface is substantially symmetrical side |

**Table 2** The specification of relation types

| Type | Comment | Example |
| ------------------ | ----------------------------------------------------------- | ------------------------------------------------------------ |
| spatial relation | specifies how one entity is located in relation to others | **Gap spacer material** is then deposited on the **film knife-edge** |
| part-of | the ownership between two entities | A **magnetic head** has a **magnetoresistive element** |
| causative relation | one entity operates as a cause of the other entity | **Pressure pad** carried another **arm** of spring urges film into contact with head |
| operation | specifies the relation between an activity and its object | **Heat treatment** improves the (100) **orientation** |
| made-of | one entity is the material for making the other entity | The thin film head includes a **substrate** of **electrically insulative material** |
| instance-of | the relation between a class and its instance | At least one of the **magnetic layer** is a **free layer** |
| attribution | one entity is an attribution of the other entity | The **thin film** has very high **heat resistance** of remaining stable at 700.degree |
| generating | one entity generates another entity | **Buffer layer resistor** create **impedance** that noise introduced to head from disk of drive |
| purpose | relation between reason/result | **Conductor** is utilized for producing **writing flux** in magnetic yoke |
| in-manner-of | doing something in a certain way | The **linear array** is angled at a **skew angle** |
| alias | one entity is also known under another entity’s name | The bias structure includes an **antiferromagnetic layer** **AFM** |
| formation | an entity acts as a role of the other entity | **Windings** are joined at end to form **center tapped winding** |
| comparison | compares one entity to the other | **First end** is closer to recording media use than **second end** |
| measurement | one entity acts as a way to measure the other entity | This provides a relative **permeance** of at least **1000** |
| other | does not belong to the above types | Then, **MR resistance estimate** during polishing step is calculated from **S value** and K value |

There are 1,010 patent abstracts with 3,986 sentences in this corpus. We use a web-based annotation tool named *Brat* [2] for data labeling, and the annotated data is saved in the '.ann' format. The benefit of '.ann' is that you can display and manipulate the annotated data once TFH_Annotated_Dataset.zip is unzipped under the corresponding repository of Brat (a minimal parsing sketch is given after the references). TFH_Annotated_Dataset contains 22,833 entity mentions and 17,412 semantic relation mentions.

With TFH_Annotated_Dataset, we run two information extraction tasks: named entity recognition with BiLSTM-CRF [3] and semantic relation extraction with BiGRU-2ATTENTION [4]. To improve the semantic representation of patent language, the word embeddings are trained on the abstracts of 46,302 patents regarding magnetic heads in hard disk drives, which turns out to improve the performance of named entity recognition by 0.3% and of semantic relation extraction by about 2% in weighted-average F1, compared to GloVe and the patent word embeddings provided by Risch et al. [5]. For named entity recognition, the weighted-average precision, recall, and F1 of BiLSTM-CRF at the entity level on the test set are 78.5%, 78.0%, and 78.2%, respectively. Although such performance is acceptable, it is still more than 10% lower in F1 than on general-purpose datasets, mainly because of the limited amount of labeled data. The precision, recall, and F1 for each entity type are reported in the accompanying paper (Fig. 4). As to relation extraction, the weighted-average precision, recall, and F1 of BiGRU-2ATTENTION on the test set are 89.7%, 87.9%, and 88.6% with no_edge relations, and 32.3%, 41.5%, and 36.3% without no_edge relations.

## **Academic citing**

Chen, L., Xu, S., Zhu, L. et al. A deep learning based method for extracting semantic information from patent documents. Scientometrics 125, 289–312 (2020). https://doi.org/10.1007/s11192-020-03634-y

## **Paper link**

https://link.springer.com/article/10.1007/s11192-020-03634-y

## **REFERENCE**

[1] Pérez-Pérez, M., Pérez-Rodríguez, G., Vazquez, M., Fdez-Riverola, F., Oyarzabal, J., Valencia, A., Lourenço, A., & Krallinger, M. (2017). Evaluation of chemical and gene/protein entity recognition systems at BioCreative V.5: The CEMP and GPRO patents tracks. In Proceedings of the BioCreative V.5 challenge evaluation workshop, pp. 11–18.

[2] Stenetorp, P., Pyysalo, S., Topić, G., Ohta, T., Ananiadou, S., & Tsujii, J. (2012). BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 102–107).

[3] Huang, Z., Xu, W., & Yu, K. (2015). Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.

[4] Han, X., Gao, T., Yao, Y., Ye, D., Liu, Z., & Sun, M. (2019). OpenNRE: An open and extensible toolkit for neural relation extraction. arXiv preprint arXiv:1909.13078.

[5] Risch, J., & Krestel, R. (2019). Domain-specific word embeddings for patent classification. Data Technologies and Applications, 53(1), 108–122.
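As referenced above, a minimal sketch of reading the brat standoff '.ann' files (this follows the publicly documented brat format, https://brat.nlplab.org/standoff.html, not the authors' own tooling; only the first fragment of a discontinuous span is kept):

```python
def read_brat_ann(path):
    """Parse one brat .ann file into entity and relation mentions."""
    entities, relations = {}, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields[0].startswith("T"):
                # Entity line: "T1<TAB>Type start end<TAB>surface text"
                etype, offsets = fields[1].split(" ", 1)
                start, end = offsets.split(";")[0].split(" ")[:2]
                entities[fields[0]] = (etype, int(start), int(end), fields[2])
            elif fields[0].startswith("R"):
                # Relation line: "R1<TAB>Type Arg1:Tx Arg2:Ty"
                rtype, arg1, arg2 = fields[1].split(" ")
                relations.append((rtype, arg1.split(":")[1], arg2.split(":")[1]))
    return entities, relations
```

Calling `read_brat_ann` on one of the unzipped files yields the entity mentions keyed by their T-identifiers and a list of typed relations between them.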
Provide a detailed description of the following dataset: TFH_Annotated_Dataset
Astock
Astock (1) provides financial news for each specific stock, and (2) provides various technical and fundamental factors for each stock.
Provide a detailed description of the following dataset: Astock
VNDS
A single-document Vietnamese summarization dataset
Provide a detailed description of the following dataset: VNDS
Mapping Topics in 100,000 Real-Life Moral Dilemmas
This dataset accompanies the ICWSM 2022 paper "Mapping Topics in 100,000 Real-Life Moral Dilemmas".
Provide a detailed description of the following dataset: Mapping Topics in 100,000 Real-Life Moral Dilemmas
ScribbleKITTI
ScribbleKITTI is a scribble-annotated dataset for LiDAR semantic segmentation.
Provide a detailed description of the following dataset: ScribbleKITTI
DailyTalk
DailyTalk is a high-quality conversational speech dataset designed for text-to-speech. We sampled, modified, and recorded 2,541 dialogues from the open-domain dialogue dataset DailyDialog, chosen to be long enough to represent the context of each dialogue.
Provide a detailed description of the following dataset: DailyTalk
Short Stories, Adjudicator Scores and Written Reflections
In this Adjudicator Scores_Short Stories and Written Reflections folder: four files from four student participants of the contest. Each file contains:
1.1 the short story version that was scored by third-party reviewers,
1.2 the scores given by the reviewers,
1.3 the planning of the short story,
1.4 the short story version that highlights a student's own words and text-generator words and that was submitted to the contest,
1.5 the answers to reflection questions that students completed after attending a pre-contest workshop.
Provide a detailed description of the following dataset: Short Stories, Adjudicator Scores and Written Reflections
BRIND
BRIND is the short name of BSDS-RIND, the first public benchmark dedicated to studying four edge types simultaneously: Reflectance Edge (RE), Illumination Edge (IE), Normal Edge (NE), and Depth Edge (DE).
Provide a detailed description of the following dataset: BRIND
Pre-Contest Workshop Video Recordings
In this Pre-Contest Workshop Video Recordings folder: Seven screen and audio recordings of seven pre-contest workshops
Provide a detailed description of the following dataset: Pre-Contest Workshop Video Recordings
Student Reflections, Coding and Coding Scheme
In this Coding and Coding Scheme spreadsheet: student answers to reflection questions from pre-contest workshops; the coding scheme for student reflections; and the coding of student reflections.
Provide a detailed description of the following dataset: Student Reflections, Coding and Coding Scheme
Pre-Contest Workshop Slidedeck
In this Pre-Contest Workshop Slidedeck.pdf: Instructional materials delivered for the seven pre-contest workshops
Provide a detailed description of the following dataset: Pre-Contest Workshop Slidedeck
STARSS22
The Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22) dataset consists of recordings of real scenes captured with a high-channel-count spherical microphone array (SMA). The recordings were conducted by two different teams at two different sites: Tampere University in Tampere, Finland, and Sony facilities in Tokyo, Japan. Recordings at both sites share the same capturing and annotation process, and a similar organization. They are organized in sessions, corresponding to distinct rooms, human participants, and sound-making props, with a few exceptions.
Provide a detailed description of the following dataset: STARSS22
ARAUS
Choosing optimal maskers for existing soundscapes to effect a desired perceptual change via soundscape augmentation is non-trivial due to extensive varieties of maskers and a dearth of benchmark datasets with which to compare and develop soundscape augmentation models. To address this problem, we make publicly available the ARAUS (Affective Responses to Augmented Urban Soundscapes) dataset, which comprises a five-fold cross-validation set and independent test set totaling 25,440 unique subjective perceptual responses to augmented soundscapes presented as audio-visual stimuli. Each augmented soundscape is made by digitally adding "maskers" (bird, water, wind, traffic, construction, or silence) to urban soundscape recordings at fixed soundscape-to-masker ratios. Responses were then collected by asking participants to rate how pleasant, annoying, eventful, uneventful, vibrant, monotonous, chaotic, calm, and appropriate each augmented soundscape was, in accordance with ISO 12913-2:2018. Participants also provided relevant demographic information and completed standard psychological questionnaires. We perform exploratory and statistical analysis of the responses obtained to verify internal consistency and agreement with known results in the literature. Finally, we demonstrate the benchmarking capability of the dataset by training and comparing four baseline models for urban soundscape pleasantness: a low-parameter regression model, a high-parameter convolutional neural network, and two attention-based networks in the literature.
Provide a detailed description of the following dataset: ARAUS
Urban Soundscapes of the World
A main goal of the Urban Soundscapes of the World project is to create a reference database of examples of urban acoustic environments, consisting of high-quality immersive audiovisual recordings (360-degree video and spatial audio), in adherence to ISO 12913-2. Ultimately, this database may set the scope for immersive recording and reproducing urban acoustic environments with soundscape in mind.
Provide a detailed description of the following dataset: Urban Soundscapes of the World
PyMigBench
A benchmark for Python library migration.
Provide a detailed description of the following dataset: PyMigBench
Names pairs dataset
Includes co-referent name string pairs along with their similarities.
Provide a detailed description of the following dataset: Names pairs dataset
SKIPP'D
Large-scale integration of photovoltaics (PV) into electricity grids is challenged by the intermittent nature of solar power. Sky-image-based solar forecasting using deep learning has been recognized as a promising approach to predicting the short-term fluctuations. However, there are few publicly available standardized benchmark datasets for image-based solar forecasting, which limits the comparison of different forecasting models and the exploration of forecasting methods. To fill these gaps, we introduce SKIPP'D -- a SKy Images and Photovoltaic Power Generation Dataset. The dataset contains three years (2017-2019) of quality-controlled down-sampled sky images and PV power generation data that is ready-to-use for short-term solar forecasting using deep learning. In addition, to support the flexibility in research, we provide the high resolution, high frequency sky images and PV power generation data as well as the concurrent sky video footage. We also include a code base containing data processing scripts and baseline model implementations for researchers to reproduce our previous work and accelerate their research in solar forecasting.
Provide a detailed description of the following dataset: SKIPP'D
Sample EEG dataset and looking times data for NEAR (Newborn EEG Artifact Removal)
The sample EEG dataset consists of the newborn EEG data recorded for the work published as: Buiatti M. et al. "Cortical route for facelike pattern processing in human newborns." Proceedings of the National Academy of Sciences 116.10 (2019): 4625-4630. Please refer to the original article for the description of how data were collected. EEG datasets are in EEGLAB format. The reference electrode is at the vertex (Cz). Events are coded as follows: DIN=Onset of cycle of visual presentation (upright, inverted or scrambled facelike pattern). DI50=Onset of visual attractor (a spiral dynamically converging to the center of the screen). The Looking Times dataset consists of looking time intervals (in ms) for all the subjects. For further details, please contact Marco Buiatti at marco.buiatti@unitn.it
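A minimal loading sketch, assuming the MNE-Python library for the EEGLAB-format files (the filename is hypothetical):

```python
import mne

# Load one recording and recover the event markers described above
# (DIN = onset of a visual-presentation cycle, DI50 = onset of the attractor).
raw = mne.io.read_raw_eeglab("newborn_subject01.set", preload=True)
events, event_id = mne.events_from_annotations(raw)
print(event_id)  # e.g. {"DIN": 1, "DI50": 2}; reference electrode is Cz
```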
Provide a detailed description of the following dataset: Sample EEG dataset and looking times data for NEAR (Newborn EEG Artifact Removal)
UMLS-43
UMLS-43 is a variant of the UMLS knowledge graph that is robust to data leakage through inverse relations. It has been derived by removing three edge types that should be considered problematic by Dettmers' definition: 'degree_of', 'precedes', and 'derivative_of'. It is presented as a .tsv edgelist at https://github.com/oliver-lloyd/kge_param_sens, such that each line represents one edge in the (head, relation, tail) format.
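A minimal sketch of reading the edgelist (the filename is an assumption):

```python
import csv

# One (head, relation, tail) triple per line, tab-separated, as described above.
with open("umls43.tsv", newline="") as f:
    triples = [tuple(row) for row in csv.reader(f, delimiter="\t")]

relations = {r for _, r, _ in triples}
print(f"{len(triples)} edges across {len(relations)} relation types")
```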
Provide a detailed description of the following dataset: UMLS-43
SMAC-Exp
The StarCraft Multi-Agent Challenges+ require agents to learn both the completion of multi-stage tasks and the use of environmental factors without precise reward functions. The previous challenge (SMAC), recognized as a standard benchmark for multi-agent reinforcement learning, is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries through fine manipulation under obvious reward functions. This challenge, on the other hand, is interested in the exploration capability of MARL algorithms to efficiently learn implicit multi-stage tasks and environmental factors, as well as micro-control. The benchmark covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find opponents and then eliminate them. The defensive scenarios require agents to use topographic features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack.
Provide a detailed description of the following dataset: SMAC-Exp
Pile of Law
Pile of Law is a ∼256GB (and growing) dataset of legal and administrative data which can be used for assessing norms on data sanitization across legal and administrative settings.
Provide a detailed description of the following dataset: Pile of Law
FLORES-200
FLORES-200 doubles the existing language coverage of FLORES-101. Given the nature of the new languages, which have less standardization and require more specialized professional translations, the verification process became more complex. This required modifications to the translation workflow. FLORES-200 has several languages which were not translated from English. Specifically, several languages were translated from Spanish, French, Russian and Modern Standard Arabic.
Provide a detailed description of the following dataset: FLORES-200
OmniBenchmark
The Omni-Realm Benchmark (OmniBenchmark) is a diverse (21 semantic realm-wise datasets) and concise (no concept overlap between realm-wise datasets) benchmark for evaluating pre-trained model generalization across semantic super-concepts/realms, e.g., from mammals to aircraft. [**ECCV 2022**]
Provide a detailed description of the following dataset: OmniBenchmark
Off_Near_parallel
SMAC+ offensive near scenario with 20 parallel episodic buffers.
Provide a detailed description of the following dataset: Off_Near_parallel
Off_Distant_parallel
SMAC+ offensive distant scenario with parallel episodic buffer.
Provide a detailed description of the following dataset: Off_Distant_parallel
SMAC+ Def_infantry_episodic
SMAC+ defensive infantry scenario with sequential episodic buffer
Provide a detailed description of the following dataset: SMAC+ Def_infantry_episodic
Def_Infantry_parallel
SMAC+ defensive infantry scenario with parallel episodic buffer.
Provide a detailed description of the following dataset: Def_Infantry_parallel
Def_Infantry_sequential
SMAC+ defensive infantry scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Def_Infantry_sequential
Off_Complicated_parallel
SMAC+ offensive complicated scenario with 20 parallel episodic buffers.
Provide a detailed description of the following dataset: Off_Complicated_parallel
Def_Armored_parallel
SMAC+ defensive armored scenario with parallel episodic buffer.
Provide a detailed description of the following dataset: Def_Armored_parallel
Def_Outnumbered_parallel
SMAC+ defensive outnumbered scenario with parallel episodic buffer.
Provide a detailed description of the following dataset: Def_Outnumbered_parallel
Off_Hard_parallel
SMAC+ offensive hard scenario with 20 parallel episodic buffers.
Provide a detailed description of the following dataset: Off_Hard_parallel
Off_Superhard_parallel
SMAC+ offensive superhard scenario with 20 parallel episodic buffers.
Provide a detailed description of the following dataset: Off_Superhard_parallel
Def_Armored_sequential
SMAC+ defensive armored scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Def_Armored_sequential
Def_Outnumbered_sequential
SMAC+ defensive outnumbered scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Def_Outnumbered_sequential
Off_Near_sequential
SMAC+ offensive near scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Off_Near_sequential
Off_Distant_sequential
SMAC+ offensive distant scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Off_Distant_sequential
Off_Complicated_sequential
SMAC+ offensive complicated scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Off_Complicated_sequential
Off_Hard_sequential
SMAC+ offensive hard scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Off_Hard_sequential
Off_Superhard_sequential
SMAC+ offensive superhard scenario with sequential episodic buffer
Provide a detailed description of the following dataset: Off_Superhard_sequential
MOTFront
MOTFront provides photo-realistic RGB-D images with their corresponding instance segmentation masks, class labels, 2D & 3D bounding boxes, 3D geometry, 3D poses and camera parameters. The MOTFront dataset comprises 2,381 unique indoor sequences with a total of 60,000 images and is based on the 3D-FRONT dataset.
Provide a detailed description of the following dataset: MOTFront
Korea Composite Stock Price Index
The data contains the following attributes for the Korea Composite Stock Price Index (KOSPI) for January 2000–December 2016:
1. Date (YYYY.M(M).D(D))
2. Opening price for the date, PX_OPEN
3. Highest price for the date, PX_HIGH
4. Lowest price for the date, PX_LOW
5. Closing price for the date, PX_LAST
6. Total volume traded on the date, PX_VOLUME
The total number of cases comprises 4,203 trading days, and the historical data were obtained from Bloomberg.
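A hedged loading sketch with pandas, assuming the series is exported as a CSV with the attributes above (the filename and exact layout are assumptions):

```python
import pandas as pd

cols = ["Date", "PX_OPEN", "PX_HIGH", "PX_LOW", "PX_LAST", "PX_VOLUME"]
df = pd.read_csv("kospi_2000_2016.csv", header=0, names=cols)

# The date field is formatted YYYY.M(M).D(D); strptime accepts the
# non-zero-padded month and day.
df["Date"] = pd.to_datetime(df["Date"], format="%Y.%m.%d")
print(len(df), "trading days")  # 4203, per the description
```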
Provide a detailed description of the following dataset: Korea Composite Stock Price Index
FewSOL
The **Few**-**S**hot **O**bject **L**earning (FewSOL) dataset can be used for object recognition with a few images per object. It contains 336 real-world objects with 9 RGB-D images per object from different views. Object segmentation masks, object poses and object attributes are provided. In addition, synthetic images generated using 330 3D object models are used to augment the dataset. FewSOL dataset can be used to study a set of few-shot object recognition problems such as classification, detection and segmentation, shape reconstruction, pose estimation, keypoint correspondences and attribute recognition. **Motivation**: If robots can recognize objects from a few exemplar images, it is possible to scale up the number of objects a robot can recognize because collecting a few images per object is a much easier process compared to building a 3D model of an object. In addition, models trained in the meta-learning setting can generalize to new objects without re-training.
Provide a detailed description of the following dataset: FewSOL
QPT
Quantum process tomography (QPT) is a method for experimentally reconstructing the quantum channel from measurement data. A QPT experiment prepares multiple input states, evolves them by the circuit, then performs multiple measurements in different measurement bases.
Provide a detailed description of the following dataset: QPT
cTDaR
Tables are a compact and efficient form for summarizing and presenting correlative information in handwritten and printed archival documents, scientific journals, reports, financial statements, and so on. Table recognition is fundamental for extracting information from structured documents. The ICDAR 2019 cTDaR competition evaluates two aspects of table analysis: table detection and recognition. Participating methods are evaluated on a modern dataset and on archival documents with printed and handwritten tables.
Provide a detailed description of the following dataset: cTDaR
wifi_data
Wi-Fi dataset: the dataset may be downloaded from this link. If you use this dataset, please cite the following reference: Anisa Allahdadi, Ricardo Morla, and Jaime S. Cardoso. "802.11 wireless simulation and anomaly detection using HMM and UBM". CoRR, abs/1707.02933, 2017. URL http://arxiv.org/abs/1707.02933. Human3.6M dataset: preprocessed data can be downloaded from this link (third party provider). Please do not forget to check the dataset license agreement, available at the Human3.6M dataset website.
Provide a detailed description of the following dataset: wifi_data
SC2ReSet: StarCraft II Esport Replaypack Set
Raw StarCraft II data is subject to processing under the Blizzard end user license agreement (EULA), and in special cases Blizzard AI and Machine Learning License may be applied. Please refer to the materials listed below. The dataset contains data in MPQ format (.SC2Replay) that can be processed with multiple open-source libraries. 1. https://www.blizzard.com/en-us/legal/fba4d00f-c7e4-4883-b8b9-1b4500a402ea/blizzard-end-user-license-agreement 2. https://blzdistsc2-a.akamaihd.net/AI_AND_MACHINE_LEARNING_LICENSE.html
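A hedged sketch of inspecting one .SC2Replay (MPQ) file with the open-source sc2reader library, one of several libraries able to parse this format (the filename is a placeholder):

```python
import sc2reader

# Parse a single replay and print basic metadata.
replay = sc2reader.load_replay("example.SC2Replay", load_level=2)
print(replay.map_name, replay.game_length)
for player in replay.players:
    print(player.name, player.play_race, player.result)
```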
Provide a detailed description of the following dataset: SC2ReSet: StarCraft II Esport Replaypack Set
SC2EGSet: StarCraft II Esport Game State Dataset
SC2EGSet: StarCraft II Esport Game State Dataset contains pre-processed data generated from SC2ReSet: StarCraft II Esports Replaypack Set.

Data Modeling: our application programming interface (API) implementation supports downloading, unpacking, loading, and data access features. Please refer to: https://github.com/Kaszanas/SC2EGSet_Dataset

License Information: this dataset is licensed under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
Provide a detailed description of the following dataset: SC2EGSet: StarCraft II Esport Game State Dataset
TGRDB
1. Our TGRDB dataset was collected with a 180° fisheye RGB camera on board a moving tour-guide robot. 2. It is the first dataset in a tour-guide scenario; statistical comparisons between TGRDB and existing datasets are given in https://arxiv.org/abs/2207.03726. 3. We hope this dataset will drive the progress of research in service robotics, long-term multi-person tracking, and fine-grained or clothes-inconsistent person re-identification.
Provide a detailed description of the following dataset: TGRDB
Simulacra Aesthetic Captions
**Simulacra Aesthetic Captions** is a dataset of over 238,000 synthetic images generated with AI models such as CompVis latent GLIDE and Stable Diffusion from over forty thousand user-submitted prompts. The images are rated on their aesthetic value from 1 to 10 by users to create caption, image, and rating triplets. In addition, each user agreed to release all of their work with the bot (prompts, outputs, ratings) completely into the public domain under the CC0 1.0 Universal Public Domain Dedication. The result is a high-quality, royalty-free dataset with over 176,000 ratings that can be used for projects such as: - Filtering datasets - Guiding generative models - Training a prompt generator - Extracting vitamin phrases ("trending on artstation", etc.) - Alignment research. Description from: [https://github.com/JD-P/simulacra-aesthetic-captions](https://github.com/JD-P/simulacra-aesthetic-captions)
Provide a detailed description of the following dataset: Simulacra Aesthetic Captions
OBJ-MDA
The dataset contains images of 16 artworks included in the cultural site “Galleria Regionale di Palazzo Bellomo”. The collection covers different types of artworks, including books, sculptures, and paintings. The dataset spans three domains: i) synthetic images generated from a 3D model of the cultural site and automatically labeled during the generation process; ii) real images collected by 10 visitors with a HoloLens device and manually labeled; iii) real images collected by the same visitors with a GoPro and manually labeled.
Provide a detailed description of the following dataset: OBJ-MDA
CareerCoach 2022
The CareerCoach 2022 gold standard is available for download in the NIF and JSON formats, and draws upon documents from a corpus of over 99,000 education courses which have been retrieved from 488 different education providers. The corpus contains two partitions:
* **Partition (P1)** supports the **content extraction** (i.e., text segmentation and text segment classification) tasks and comprises 169 documents and gold standard annotations for page segments.
* **Partition (P2)** contains 75 documents with a significantly richer set of annotations that cover content extraction, entities, and slots. It supports benchmarking knowledge extraction tasks such as **entity recognition**, **entity classification**, **entity linking**, and **slot filling** on top of the content extraction task.
Provide a detailed description of the following dataset: CareerCoach 2022
CERBERUS DARPA Subterranean Challenge Datasets
Dataset link: https://github.com/leggedrobotics/cerberus_darpa_subt_datasets
Provide a detailed description of the following dataset: CERBERUS DARPA Subterranean Challenge Datasets
MSU HDR Video Reconstruction Benchmark
This is a dataset for the video inverse tone mapping task. The dataset contains diverse content for restoring HDR video: fireworks, flowers, football, night city, and scenes with reflections. The videos have different brightness ranges and contain different types of lighting. The camera used to shoot the dataset captures 14 stops of dynamic range.
Provide a detailed description of the following dataset: MSU HDR Video Reconstruction Benchmark
Multilingual Persuasion Detection
This dataset contains dialogue lines from the games Knights of the Old Republic 1 & 2 and Neverwinter Nights 1. Some of the dialogue lines are marked as persuasive (i.e., when the player character attempts a Persuade skill check). If you use this data, please cite: Pöyhönen, T., Hämäläinen, M., Alnajjar, K. (2022). "Multilingual Persuasion Detection: Video Games as an Invaluable Data Source for NLP". DiGRA '22 - Proceedings of the 2022 DiGRA International Conference.
Provide a detailed description of the following dataset: Multilingual Persuasion Detection
Taskography
PDDL dataset of [Rearrangement](https://arxiv.org/abs/2011.01975) tasks in large-scale 3D scene graphs.
Provide a detailed description of the following dataset: Taskography
V2XSet
A large-scale V2X perception dataset using CARLA and OpenCDA
Provide a detailed description of the following dataset: V2XSet
Online retail dataset
This is a transnational data set which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail business. The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers. https://archive.ics.uci.edu/ml/datasets/online+retail
Provide a detailed description of the following dataset: Online retail dataset
CAP
The Consented Activities of People (CAP) dataset is a fine grained activity dataset for visual AI research curated using the Visym Collector platform. The CAP dataset contains annotated videos of fine-grained activity classes of consented people. Videos are recorded from mobile devices around the world from a third person viewpoint looking down on the scene from above, containing subjects performing every day activities. Videos are annotated with bounding box tracks around the primary actor along with temporal start/end frames for each activity instance, and distributed in vipy json format. An interactive visualization and video summary is available for review in the dataset distribution site.
Provide a detailed description of the following dataset: CAP
N-Caltech 101
The Neuromorphic-Caltech101 (N-Caltech101) dataset is a spiking version of the original frame-based Caltech101 dataset. The original dataset contained both a "Faces" and a "Faces Easy" class, each consisting of different versions of the same images. The "Faces" class has been removed from N-Caltech101 to avoid confusion, leaving 100 object classes plus a background class. The N-Caltech101 dataset was captured by mounting the ATIS sensor on a motorized pan-tilt unit and having the sensor move while it views Caltech101 examples on an LCD monitor, as shown in an accompanying video. A full description of the dataset and how it was created can be found in the accompanying paper; please cite this paper if you make use of the dataset.
Provide a detailed description of the following dataset: N-Caltech 101
CoCaHis
Highlights:
• Publicly available dataset with 82 H&E-stained images of frozen sections.
• Images were acquired from 19 patients with metastatic colon cancer in the liver.
• The originally stained and two stain-normalized sets of images are included.
• Pixel-wise ground truths provided by seven domain experts.
Provide a detailed description of the following dataset: CoCaHis
VALSE
We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena. VALSE offers a suite of six tests covering various linguistic constructs. Solving these requires models to ground linguistic phenomena in the visual modality, allowing more fine-grained evaluations than hitherto possible. We expect VALSE to serve as an important benchmark to measure future progress of pretrained V&L models from a linguistic perspective, complementing the canonical task-centred V&L evaluations.
Provide a detailed description of the following dataset: VALSE
MFR
During the COVID-19 coronavirus epidemic, almost everyone wears a facial mask, which poses a huge challenge to face recognition. Traditional face recognition systems may not effectively recognize masked faces, but removing the mask for authentication increases the risk of virus infection. Inspired by the COVID-19 pandemic response, the widespread requirement that people wear protective face masks in public places has driven a need to understand how face recognition technology deals with occluded faces, often with just the periocular area and above visible. To cope with the challenge arising from wearing masks, it is crucial to improve existing face recognition approaches. Recently, some commercial providers have announced the availability of face recognition algorithms capable of handling face masks, and an increasing number of research publications have surfaced on the topic of face recognition on people wearing masks. However, due to the sudden outbreak of the epidemic, there is as yet no publicly available masked face recognition benchmark. In this workshop, we organise the Masked Face Recognition (MFR) challenge and focus on benchmarking deep face recognition methods in the presence of facial masks. The challenge evaluates accuracy on the following test sets: accuracy between masked and non-masked faces; accuracy among children (2–16 years old); and accuracy on globalised multi-racial benchmarks. We ensure that there is no overlap between these test sets and publicly available training datasets, as they are not collected from online celebrities. The globalised multi-racial test set contains 242,143 identities and 1,624,305 images. The Mask test set contains 6,964 identities, 6,964 masked images, and 13,928 non-masked images; in total there are 13,928 positive pairs and 96,983,824 negative pairs. The Children test set contains 14,344 identities and 157,280 images; in total there are 1,773,428 positive pairs and 24,735,067,692 negative pairs. For the Mask set, TAR is measured on a mask-to-nonmask 1:1 protocol, at FAR less than 0.0001 (1e-4). For the Children set, TAR is measured on an all-to-all 1:1 protocol, at FAR less than 0.0001 (1e-4). For other sets, TAR is measured on an all-to-all 1:1 protocol, at FAR less than 0.000001 (1e-6). Participants are ranked by the highest combined score across two datasets, TAR@Mask and TAR@MR-All, using the formula 0.25 * TAR@Mask + 0.75 * TAR@MR-All (spelled out in the sketch below).
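The ranking formula as a small helper, directly transcribing the weights given above:

```python
def mfr_final_score(tar_mask: float, tar_mr_all: float) -> float:
    """Challenge ranking score: 0.25 * TAR@Mask + 0.75 * TAR@MR-All."""
    return 0.25 * tar_mask + 0.75 * tar_mr_all
```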
Provide a detailed description of the following dataset: MFR
MatriVasha:
MatriVasha is the largest dataset of handwritten Bangla compound characters for research on handwritten Bangla compound character recognition. The dataset contains 120 different types of compound characters across 306,464 images, of which 152,950 were written by males and 153,514 by females. The dataset can also be used for other problems such as gender-, age-, or district-based handwriting research, because the samples were collected with district, age group, and gender recorded, with an equal number of men and women.
Provide a detailed description of the following dataset: MatriVasha:
MS-CXR
The MS-CXR dataset provides 1162 image–sentence pairs of bounding boxes and corresponding phrases, collected across eight different cardiopulmonary radiological findings, with an approximately equal number of pairs for each finding. This dataset complements the existing [MIMIC-CXR](/dataset/mimic-cxr) v.2 dataset and comprises: 1. Reviewed and edited bounding boxes and phrases (1026 bounding box/sentence pairs); and 2. Manual bounding box labels created from scratch (136 bounding box/sentence pairs).
Provide a detailed description of the following dataset: MS-CXR
The Mafia Dataset
The Mafia Dataset was created to model the behavior of deceptive actors in the context of the Mafia game, as described in the paper “Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia”. We hope that this dataset will be of use to others studying the effects of deception on language use.
Provide a detailed description of the following dataset: The Mafia Dataset
ANTILLES
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb). Originally, the corpus consisted of 400,399 words (16,341 sentences) with 17 different classes. After applying our tag-augmentation script `transform.py`, we obtain 60 different classes that add semantic information such as the gender, number, mood, person, tense, or verb form given in the different CoNLL-03 fields of the original corpus. We based our tags on the level of detail given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001. This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
Provide a detailed description of the following dataset: ANTILLES
UniMorph 4.0
The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages. The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning, typically carried by the lemma, and by a rendering of its inflectional form in terms of a bundle of morphological features from our schema. The specification of the schema is described here and in Sylak-Glassman (2016).
Provide a detailed description of the following dataset: UniMorph 4.0
Long Video Dataset
We randomly selected three videos from the Internet that are longer than 1.5K frames and whose main objects appear continuously. Each video has 20 uniformly sampled frames manually annotated for evaluation.
Provide a detailed description of the following dataset: Long Video Dataset
Long Video Dataset (3X)
We randomly selected three videos from the Internet that are longer than 1.5K frames and whose main objects appear continuously. Each video has 20 uniformly sampled frames manually annotated for evaluation. Each video has been played back and forth to generate videos that are three times as long.
Provide a detailed description of the following dataset: Long Video Dataset (3X)
WinoNB
This dataset consists of Winograd schemas that test coreference resolution systems' ability to differentiate singular vs plural they/them pronouns. It consists of 4077 templates, each with a group of people, a singular person (which can be filled with a name or a generic "someone") and a single they/them pronoun to resolve.
Provide a detailed description of the following dataset: WinoNB
WorldStrat
**Nearly 10,000 km² of free high-resolution and paired multi-temporal low-resolution satellite imagery** of unique locations which ensure stratified representation of all types of land use across the world: from agriculture to ice caps, from forests to multiple urbanization densities. Those locations are also enriched with locations typically under-represented in ML datasets: sites of humanitarian interest, illegal mining sites, and settlements of persons at risk. Each high-resolution image (Airbus SPOT at up to 1.5 m/pixel) comes with multiple temporally-matched low-resolution images from the freely accessible lower-resolution Sentinel-2 satellites (up to 10 m/pixel, 12 spectral bands). The dataset is accompanied by a paper, a datasheet for datasets, and an open-source Python package to rebuild or extend the WorldStrat dataset, train and infer baseline algorithms, and learn with abundant tutorials, all compatible with the popular EO-learn toolbox. The hope is to foster broad-spectrum applications of ML to satellite imagery, and possibly to develop the same power of analysis allowed by costly private high-resolution imagery from free public low-resolution Sentinel-2 imagery. We illustrate this specific point by training and releasing several highly compute-efficient baselines on the task of Multi-Frame Super-Resolution.
Provide a detailed description of the following dataset: WorldStrat
iSarcasmEval
iSarcasmEval is the first shared task to target intended sarcasm detection: the data for this task was provided and labelled by the authors of the texts themselves. Such an approach minimises the downfalls of other methods to collect sarcasm data, which rely on distant supervision or third-party annotations. The shared task contains two languages, English and Arabic, and three subtasks: sarcasm detection, sarcasm category classification, and pairwise sarcasm identification given a sarcastic sentence and its non-sarcastic rephrase. The task received submissions from 60 different teams, with the sarcasm detection task being the most popular. Most of the participating teams utilised pre-trained language models. In this paper, we provide an overview of the task, data, and participating teams.
Provide a detailed description of the following dataset: iSarcasmEval
ArSarcasm
ArSarcasm is a new Arabic sarcasm detection dataset. The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.
Provide a detailed description of the following dataset: ArSarcasm
ArSarcasm-v2
ArSarcasm-v2 is an extension of the original ArSarcasm dataset, published along with the paper "From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset". ArSarcasm-v2 consists of ArSarcasm along with portions of the DAICT corpus and some new tweets. Each tweet was annotated for sarcasm, sentiment, and dialect. The final dataset consists of 15,548 tweets divided into 12,548 training tweets and 3,000 testing tweets. ArSarcasm-v2 was used and released as a part of the shared task on sarcasm detection and sentiment analysis in Arabic.
Provide a detailed description of the following dataset: ArSarcasm-v2
Wind speed and power potential for Switzerland
Summary: This dataset contains an estimation of the average yearly wind speed and of the wind power potential for Switzerland, at a spatial resolution of 250 x 250 meters, over the period from 2008 to 2017. Wind speed data are obtained by modelling data collected at an hourly frequency on a set of up to 208 monitoring stations over the country. The data are then interpolated using a spatio-temporal machine learning model, allowing the estimation of wind speed and its uncertainty at unsampled locations. The modelled spatio-temporal wind speed field is then used to estimate the wind power, computed from the characteristic parameters of an Enercon E-101 wind turbine at a 100-meter hub height. The hub height is the distance from the turbine platform to the rotor of an installed wind turbine, i.e., how high the turbine stands above the ground, excluding the length of the turbine blades. The hourly estimations of wind speed are averaged over each of the ten years studied for each 250 x 250 m location, while wind power data are summed over each year for each spatial unit. Advantages and limitations of the proposed method are discussed in Amato et al. (2021).

Data description: The hourly estimations of wind speed and power for Switzerland from 2008 to 2017 are available upon request; here we share the annual values. For both wind speed and power, the data are available over 660697 spatial units of 250 x 250 meters each, covering the entire Swiss territory. Details are given in the file Data_description.pdf. Data are provided in the pickle format (see https://docs.python.org/3/library/pickle.html#module-pickle); a minimal loading sketch is given below.
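A minimal sketch of loading one of the shared pickle files (the filename is hypothetical; see Data_description.pdf for the actual names and structure):

```python
import pickle

with open("wind_speed_annual.pkl", "rb") as f:
    wind = pickle.load(f)

# Inspect the object before use; the description reports 660697 spatial
# units of 250 x 250 m covering Switzerland.
print(type(wind))
```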
Provide a detailed description of the following dataset: Wind speed and power potential for Switzerland
TCIA 4D-Lung
This data collection consists of images acquired during chemoradiotherapy of 20 locally-advanced, non-small cell lung cancer patients. The images include four-dimensional (4D) fan-beam CT (4D-FBCT) and 4D cone-beam CT (4D-CBCT). All patients underwent concurrent radiochemotherapy to a total dose of 64.8–70 Gy using daily 1.8 or 2 Gy fractions.
Provide a detailed description of the following dataset: TCIA 4D-Lung
Figment
A dataset for fine-grained entity typing of knowledge graph entities built from Freebase. It can be used to evaluate entity representations and also mention-level entity typing.
Provide a detailed description of the following dataset: Figment
Article Bias Prediction
# Article-Bias-Prediction

## Dataset

The articles crawled from www.allsides.com are available in the ```./data``` folder, along with the different evaluation splits. The dataset consists of a total of 37,554 articles. Each article is stored as a ```JSON``` object in the ```./data/jsons``` directory, and contains the following fields:
1. **ID**: an alphanumeric identifier.
2. **topic**: the topic being discussed in the article.
3. **source**: the name of the article's source *(example: New York Times)*
4. **source_url**: the URL to the source's homepage *(example: www.nytimes.com)*
5. **url**: the link to the actual article.
6. **date**: the publication date of the article.
7. **authors**: a comma-separated list of the article's authors.
8. **title**: the article's title.
9. **content_original**: the original body of the article, as returned by the ```newspaper3k``` Python library.
10. **content**: the processed and tokenized content, which is used as input to the different models.
11. **bias_text**: the label of the political bias annotation of the article (left, center, or right).
12. **bias**: the numeric encoding of the political bias of the article (0, 1, or 2).

The ```./data/splits``` directory contains the two types of splits, as discussed in the paper: **random** and **media-based**. For each of these types, we provide the train, validation and test files that contain the articles' IDs belonging to each set, along with their numeric bias label. A minimal loading sketch is given after the citation below.

## Code

Under maintenance. To be available soon.

## Citation

```
@inproceedings{baly2020we,
  author    = {Baly, Ramy and Da San Martino, Giovanni and Glass, James and Nakov, Preslav},
  title     = {We Can Detect Your Bias: Predicting the Political Ideology of News Articles},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  series    = {EMNLP~'20},
  NOmonth   = {November},
  year      = {2020},
  pages     = {4982--4991},
  NOpublisher = {Association for Computational Linguistics}
}
```
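As referenced above, a hedged sketch of loading the articles and tallying the bias labels using the documented fields (assumes the README's `./data` layout):

```python
import json
from pathlib import Path

# Load every article JSON and count articles per political-bias label.
articles = [json.loads(p.read_text()) for p in Path("data/jsons").glob("*.json")]
counts = {}
for article in articles:
    counts[article["bias_text"]] = counts.get(article["bias_text"], 0) + 1
print(counts)  # {"left": ..., "center": ..., "right": ...}
```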
Provide a detailed description of the following dataset: Article Bias Prediction
Crowd Activity Dataset
This dataset, named the Crowd Activity dataset, focuses on crowd activities for a fine-grained image classification task, since automatically understanding crowd activity is valuable for public safety. The dataset is newly collected: the images were mainly retrieved from the Internet or captured on streets with mobile phones. All images in this dataset contain at least one text instance. The categories are drawn from activities of daily living and from demonstrations prompted by prominent events of recent years. Specifically, the dataset consists of **21 categories** and **8785 images** in total. The 21 categories broadly fall into two types: **activities of daily living** (i.e., celebrating Christmas, holding a sports meeting, holding a concert, celebrating a birthday party, celebrity speech, teaching, graduation ceremony, picnic, press briefing, shopping, celebrating Thanksgiving Day) and **demonstrations** (i.e., protecting animals, protecting the environment, appealing for peace, Brexit, COVID-19, election, immigrant, respecting women, racial equality, mouvement des gilets jaunes, i.e., the yellow vests movement).
Provide a detailed description of the following dataset: Crowd Activity Dataset
DOLPHINS
Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising remedy for the fundamental limitations of stand-alone intelligence, such as blind zones and limited long-range perception. However, the lack of datasets has severely hindered the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving, a new simulated, large-scale, multi-scenario, multi-view, multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally aligned images and point clouds from both vehicles and Road Side Units (RSUs), enabling both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) collaborative perception; six typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints that provide full coverage of the key areas and every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, composing the largest dataset for collaborative perception; Full-HD images and 64-line LiDARs, yielding high-resolution data with sufficient detail; and well-organized APIs and open-source code that ensure the extensibility of DOLPHINS. We also construct a benchmark of 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. The experimental results show that raw-level fusion through V2X communication can improve precision and reduce the need for expensive LiDAR equipment on vehicles when RSUs exist, which may accelerate the adoption of interconnected self-driving vehicles.
Provide a detailed description of the following dataset: DOLPHINS
AiTLAS: Benchmark Arena
**AiTLAS: Benchmark Arena** is an open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO).
Provide a detailed description of the following dataset: AiTLAS: Benchmark Arena
Nocturne
Nocturne is a 2D, partially observed driving simulator, built in C++ for speed and exported as a Python library. It is currently designed to handle traffic scenarios from the Waymo Open Dataset and, with some work, could be extended to support other driving datasets. Using the nocturne Python library, one can train controllers for autonomous vehicles (AVs) on various tasks from the Waymo dataset, which we provide as a benchmark, and then evaluate the designed controllers with the tools we offer. Built on this rich data source, Nocturne contains a wide range of scenarios whose solution requires complex coordination, theory of mind, and handling of partial observability. Description from: [Nocturne](https://github.com/facebookresearch/nocturne)
Provide a detailed description of the following dataset: Nocturne
BC7 NLM-Chem
Full-text chemical identification and indexing in PubMed articles. Identifying named entities is an important building block for many complex knowledge-extraction tasks. Errors in identifying relevant biomedical entities are a key impediment to accurate article retrieval, classification, and further understanding of textual semantics, such as relation extraction. Chemical entities appear throughout the biomedical research literature and are one of the entity types most frequently searched in PubMed. Accurate automated identification of the chemicals mentioned in journal publications has the potential to improve many downstream NLP tasks and biomedical fields; in the near term, specifically the retrieval of relevant articles, greatly assisting researchers, indexers, and curators. The NLM-CHEM track consists of two tasks; participants can choose to take part in either one or both:

1. **Chemical identification in full text**: predicting all chemicals mentioned in recently published full-text articles, both the span (i.e., named entity recognition) and the normalization (i.e., entity linking) to MeSH.
2. **Chemical indexing prediction**: predicting which chemicals mentioned in recently published full-text articles should be indexed, i.e., should appear in the list of MeSH terms for the document.
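To make the two outputs concrete, here is a minimal sketch of what one predicted record could look like; the field names and values below are illustrative assumptions, not the track's official submission schema:

```python
# Identification task: one recognised chemical mention, with its span
# offsets (NER) and its MeSH normalisation (entity linking). All field
# names and values here are hypothetical.
mention = {
    "pmcid": "PMC1234567",      # hypothetical article identifier
    "offset_start": 412,         # character offset where the span begins
    "offset_end": 419,
    "text": "aspirin",           # the recognised chemical span
    "mesh_id": "MESH:D001241",   # MeSH identifier from entity linking
}

# Indexing task: per article, the subset of MeSH chemical identifiers
# that should appear among the document's MeSH terms.
indexing_prediction = {"pmcid": "PMC1234567", "mesh_ids": ["MESH:D001241"]}
```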
Provide a detailed description of the following dataset: BC7 NLM-Chem
LineCap
LineCap is a dataset of line charts scraped from scientific papers, each accompanied by crowd-sourced captions describing the trends of the individual lines in the figure and of the figure as a whole.
Provide a detailed description of the following dataset: LineCap
HTDM
Hypertension Disease Medication dataset.
Provide a detailed description of the following dataset: HTDM
DEVIL
The Diagnostic Evaluation of Video Inpainting on Landscapes (DEVIL) benchmark is composed of a curated video/occlusion-mask dataset and a comprehensive evaluation scheme.
Provide a detailed description of the following dataset: DEVIL
PosePrior
Accurate modeling of priors over 3D human pose is fundamental to many problems in computer vision. Most previous priors are either not general enough to cover the diverse nature of human poses or not restrictive enough to rule out invalid 3D poses. We propose a physically motivated prior that allows only anthropometrically valid poses and rejects invalid ones. One can use joint-angle limits to evaluate whether two connected bones form a valid configuration. However, it is established in biomechanics that joint-angle limits for certain pairs of bones are mutually dependent: for example, how much one can flex one's arm depends on whether it is in front of, or behind, the back. Medical textbooks provide joint-angle limits only for a few positions, and the complete configuration of pose-dependent joint-angle limits for the full body is unknown.
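To illustrate the idea of pose-dependent joint-angle limits, here is a toy sketch in Python; the limit values and the conditioning rule are invented for illustration and are not the limits learned or used by PosePrior:

```python
# Toy sketch of pose-dependent joint-angle limits. The numbers below are
# made up; PosePrior derives the real limits from motion-capture data.

def elbow_flexion_limits(shoulder_flexion_deg: float) -> tuple[float, float]:
    """Return assumed (min, max) elbow flexion limits, conditioned on the
    parent joint's pose -- here, whether the arm is in front of the body
    or extended behind the back."""
    if shoulder_flexion_deg >= 0:   # arm in front of the body
        return (0.0, 150.0)         # assumed wider flexion range
    return (0.0, 110.0)             # assumed tighter range behind the back

def is_valid(shoulder_flexion_deg: float, elbow_flexion_deg: float) -> bool:
    # A pose is accepted only if the child joint angle falls inside the
    # limits implied by the parent joint's current configuration.
    lo, hi = elbow_flexion_limits(shoulder_flexion_deg)
    return lo <= elbow_flexion_deg <= hi

print(is_valid(45.0, 130.0))   # True under these assumed limits
print(is_valid(-30.0, 130.0))  # False: same elbow angle, arm behind back
```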
Provide a detailed description of the following dataset: PosePrior