Dataset schema (one row per paper):

  id                 string (length 9 to 16; arXiv identifier)
  title              string (length 4 to 278)
  abstract           string (length 3 to 4.08k)
  cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR  cs.LG  cs.RO  cs.CL
  cs.IT  cs.SY  cs.CV  cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                     bool (2 classes each; true when the paper carries that subject label)
  __index_level_0__  int64 (range 0 to 541k)

Each record below lists the id, title, and abstract, followed by the label columns that are true for that paper and its __index_level_0__ value.
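To make the flattened rows easier to work with, here is a minimal sketch of how the dump could be parsed back into structured records. It assumes the dataset has been exported to a CSV file; the file name arxiv_labels.csv and the pandas-based loading are illustrative assumptions, not part of the dataset card.

```python
import pandas as pd

# The 18 boolean label columns, in the order given by the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

# Hypothetical export of the dataset; adjust the path to your local copy.
df = pd.read_csv("arxiv_labels.csv")

# Collapse the boolean columns into a list of active subject labels per paper.
df["labels"] = df[LABEL_COLUMNS].apply(
    lambda row: [name for name, flag in row.items() if flag], axis=1
)

print(df[["id", "title", "labels"]].head())
```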
1906.08333
Spatial Pyramid Encoding with Convex Length Normalization for Text-Independent Speaker Verification
In this paper, we propose a new pooling method called spatial pyramid encoding (SPE) to generate speaker embeddings for text-independent speaker verification. We first partition the output feature maps from a deep residual network (ResNet) into increasingly fine sub-regions and extract speaker embeddings from each sub-region through a learnable dictionary encoding layer. These embeddings are concatenated to obtain the final speaker representation. The SPE layer not only generates a fixed-dimensional speaker embedding for a variable-length speech segment, but also aggregates the information of feature distribution from multi-level temporal bins. Furthermore, we apply deep length normalization by augmenting the loss function with ring loss. By applying ring loss, the network gradually learns to normalize the speaker embeddings using model weights themselves while preserving convexity, leading to more robust speaker embeddings. Experiments on the VoxCeleb1 dataset show that the proposed system using the SPE layer and ring loss-based deep length normalization outperforms both i-vector and d-vector baselines.
Labels: cs.SD, cs.LG, cs.CL; __index_level_0__: 135,837
2105.03844
Reinforcement Learning with Expert Trajectory For Quantitative Trading
In recent years, quantitative investment methods combined with artificial intelligence have attracted increasing attention from investors and researchers. Existing methods based on supervised learning are not well suited to learning problems with long-term goals and delayed rewards in real futures trading. In this paper, we therefore model the price prediction problem as a Markov decision process (MDP) and optimize it by reinforcement learning with expert trajectory. In the proposed method, we employ more than 100 short-term alpha factors, instead of the price, volume, and handful of technical factors used in existing methods, to describe the states of the MDP. Furthermore, unlike DQN (deep Q-learning) and BC (behavior cloning) in related methods, we introduce expert experience in the training stage, and consider both the expert-environment and agent-environment interactions when designing the temporal difference error, so that the agents are more robust to the inevitable noise in financial data. Experimental results on share price index futures in China, including IF (CSI 300) and IC (CSI 500), show the advantages of the proposed method over three typical technical analysis methods and two deep learning based methods.
Labels: cs.AI, cs.LG; __index_level_0__: 234,293
1804.08611
Faster Response in Bounded-Update-Rate, Discrete-time Networks using Delayed Self-Reinforcement
The response speed of a network impacts the efficacy of its actions to external stimuli. However, for a given bound on the update rate, the network-response speed is limited by the need to maintain stability. This work increases the network-response speed without having to increase the update rate by using delayed self-reinforcement (DSR), where each agent uses its already available information from the network to strengthen its individual update law. Example simulation results are presented that show more than an order of magnitude improvement in the response speed (quantified using the settling time) with the proposed DSR approach.
Labels: cs.SY; __index_level_0__: 95,799
1607.01133
Learning when to trust distant supervision: An application to low-resource POS tagging using cross-lingual projection
Cross-lingual projection of linguistic annotation suffers from many sources of bias and noise, leading to unreliable annotations that cannot be used directly. In this paper, we introduce a novel approach to sequence tagging that learns to correct the errors from cross-lingual projection using an explicit debiasing layer. This is framed as joint learning over two corpora, one tagged with gold-standard tags and the other with projected tags. We evaluate with only 1,000 gold-standard-tagged tokens, along with more plentiful parallel data. Our system equals or exceeds the state of the art on eight simulated low-resource settings, as well as on two real low-resource languages, Malagasy and Kinyarwanda.
Labels: cs.CL; __index_level_0__: 58,188
1811.06165
Improving Skin Condition Classification with a Question Answering Model
We present a skin condition classification methodology based on a sequential pipeline of a pre-trained Convolutional Neural Network (CNN) and a Question Answering (QA) model. This method not only increases the classification confidence and accuracy of the deployed CNN system, but also emulates the conventional approach of doctors who ask relevant questions to refine the ultimate diagnosis and differential. By feeding the CNN output, in the form of classification probabilities, to the QA model as a prior alongside the image's textual description, we greedily ask about the symptom that maximizes the information gain. We demonstrate that combining the QA model with the CNN increases the accuracy by up to 10% as compared to the CNN alone, and by more than 30% as compared to the QA model alone.
Labels: cs.CV; __index_level_0__: 113,468
2204.08970
Rendering Nighttime Image Via Cascaded Color and Brightness Compensation
Image signal processing (ISP) is crucial for camera imaging, and neural network (NN) solutions are extensively deployed for daytime scenes. The lack of a sufficient nighttime image dataset and of insights into nighttime illumination characteristics poses a great challenge for high-quality rendering using existing NN ISPs. To tackle this, we first built a high-resolution nighttime RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert professionals. Meanwhile, to best capture the characteristics of nighttime illumination light sources, we develop CBUnet, a two-stage NN ISP that cascades the compensation of color and brightness attributes. Experiments show that our method has better visual quality compared to the traditional ISP pipeline, and it ranked second in the NTIRE 2022 Night Photography Rendering Challenge for two tracks by the People's and Professional Photographer's choices, respectively. The code and relevant materials are available on our website: https://njuvision.github.io/CBUnet.
Labels: cs.CV; __index_level_0__: 292,270
1302.6288
Super-resolution via superset selection and pruning
We present a pursuit-like algorithm that we call the "superset method" for recovery of sparse vectors from consecutive Fourier measurements in the super-resolution regime. The algorithm has a subspace identification step that hinges on the translation invariance of the Fourier transform, followed by a removal step to estimate the solution's support. The superset method is always successful in the noiseless regime (unlike L1-minimization) and generalizes to higher dimensions (unlike the matrix pencil method). Relative robustness to noise is demonstrated numerically.
Labels: cs.IT; __index_level_0__: 22,364
2206.04678
ReCo: A Dataset for Residential Community Layout Planning
Layout planning is centrally important in the field of architecture and urban design. Among the various basic units carrying urban functions, the residential community plays a vital part in supporting human life. Therefore, the layout planning of residential communities has always been of concern, and has attracted particular attention since the advent of deep learning, which facilitates automated layout generation and spatial pattern recognition. However, the research community generally suffers from a shortage of residential community layout benchmarks and high-quality datasets, which hampers future exploration of data-driven methods for residential community layout planning. The lack of datasets is largely due to the difficulty of large-scale real-world residential data acquisition and long-term expert screening. In order to address these issues and advance a benchmark dataset for various intelligent spatial design and analysis applications in the development of smart cities, we introduce the Residential Community Layout Planning (ReCo) Dataset, which is the first and largest open-source vector dataset related to real-world communities to date. The ReCo Dataset is presented in multiple data formats with 37,646 residential community layout plans, covering 598,728 residential buildings with height information. ReCo can be conveniently adapted for residential community layout related urban design tasks, e.g., generative layout design, morphological pattern recognition, and spatial evaluation. To validate the utility of ReCo in automated residential community layout planning, two Generative Adversarial Network (GAN) based generative models are further applied to the dataset. We expect the ReCo Dataset to inspire more creative and practical work in intelligent design and beyond. The ReCo Dataset is published at: https://www.kaggle.com/fdudsde/reco-dataset.
Labels: cs.AI, cs.LG; __index_level_0__: 301,721
2211.09919
Patch-Craft Self-Supervised Training for Correlated Image Denoising
Supervised neural networks are known to achieve excellent results in various image restoration tasks. However, such training requires datasets composed of pairs of corrupted images and their corresponding ground truth targets. Unfortunately, such data is not available in many applications. For the task of image denoising in which the noise statistics are unknown, several self-supervised training methods have been proposed to overcome this difficulty. Some of these require knowledge of the noise model, while others assume that the contaminating noise is uncorrelated; both assumptions are too limiting for many practical needs. This work proposes a novel self-supervised training technique suitable for the removal of unknown correlated noise. The proposed approach requires neither knowledge of the noise model nor access to ground truth targets. The input to our algorithm consists of easily captured bursts of noisy shots. Our algorithm constructs artificial patch-craft images from these bursts by patch matching and stitching, and the obtained crafted images are used as targets for the training. Our method does not require registration of the images within the burst. We evaluate the proposed framework through extensive experiments with synthetic and real image noise.
Labels: cs.CV; __index_level_0__: 331,141
2203.08619
An Independently Learnable Hierarchical Model for Bilateral Control-Based Imitation Learning Applications
Recently, motion generation by machine learning has been actively researched to automate various tasks. Imitation learning is one such method that learns motions from data collected in advance. However, executing long-term tasks remains challenging. Therefore, a novel framework for imitation learning is proposed to solve this problem. The proposed framework comprises upper and lower layers, where the upper layer model, whose timescale is long, and lower layer model, whose timescale is short, can be independently trained. In this model, the upper layer learns long-term task planning, and the lower layer learns motion primitives. The proposed method was experimentally compared to hierarchical RNN-based methods to validate its effectiveness. Consequently, the proposed method showed a success rate equal to or greater than that of conventional methods. In addition, the proposed method required less than 1/20 of the training time compared to conventional methods. Moreover, it succeeded in executing unlearned tasks by reusing the trained lower layer.
Labels: cs.RO; __index_level_0__: 285,860
2109.11399
A Skeleton-Driven Neural Occupancy Representation for Articulated Hands
We present Hand ArticuLated Occupancy (HALO), a novel representation of articulated hands that bridges the advantages of 3D keypoints and neural implicit surfaces and can be used in end-to-end trainable architectures. Unlike existing statistical parametric hand models (e.g., MANO), HALO directly takes a 3D joint skeleton as input and produces a neural occupancy volume representing the posed hand surface. The key benefits of HALO are (1) it is driven by 3D keypoints, which have benefits in terms of accuracy and are easier to learn for neural networks than the latent hand-model parameters; (2) it provides a differentiable volumetric occupancy representation of the posed hand; (3) it can be trained end-to-end, allowing the formulation of losses on the hand surface that benefit the learning of 3D keypoints. We demonstrate the applicability of HALO to the task of conditional generation of hands that grasp 3D objects. The differentiable nature of HALO is shown to improve the quality of the synthesized hands both in terms of physical plausibility and user preference.
Labels: cs.CV; __index_level_0__: 256,935
2411.09772
Beyond Static Tools: Evaluating Large Language Models for Cryptographic Misuse Detection
The use of Large Language Models (LLMs) in software development is rapidly growing, with developers increasingly relying on these models for coding assistance, including security-critical tasks. Our work presents a comprehensive comparison between traditional static analysis tools for cryptographic API misuse detection (CryptoGuard, CogniCrypt, and Snyk Code) and the LLMs GPT and Gemini. Using benchmark datasets (OWASP, CryptoAPI, and MASC), we evaluate the effectiveness of each tool in identifying cryptographic misuses. Our findings show that GPT 4-o-mini surpasses current state-of-the-art static analysis tools on the CryptoAPI and MASC datasets, though it lags on the OWASP dataset. Additionally, we assess the quality of LLM responses to determine which models provide actionable and accurate advice, giving developers insights into their practical utility for secure coding. This study highlights the comparative strengths and limitations of static analysis versus LLM-driven approaches, offering valuable insights into the evolving role of AI in advancing software security practices.
Labels: cs.LG, cs.CR; __index_level_0__: 508,360
0709.1699
Efficient Tabling Mechanisms for Transaction Logic Programs
In this paper we present efficient evaluation algorithms for Horn Transaction Logic (a generalization of regular Horn logic programs with state updates). We present two complementary methods for optimizing the implementation of Transaction Logic. The first method is based on tabling: we modify the proof theory to table calls and answers on states (practically equivalent to dynamic programming). The call-answer table is indexed on the call and a signature of the state in which the call was made. The answer columns contain the answer unification and a signature of the state after the call was executed. The states are signed efficiently using a technique based on tries and counting. The second method is based on incremental evaluation and applies when the data oracle contains derived relations. The deletions and insertions (executed in the transaction oracle) change the state of the database. Using the heuristic of inertia (only a part of the state changes in response to elementary updates), it is usually cheaper to compute only the changes in the state than to recompute the entire state from scratch. The two methods are complementary in that the first optimizes the evaluation when a call is repeated in the same state, while the second optimizes the evaluation of a new state when a call-state pair is not found by the tabling mechanism. The proof theory of Transaction Logic with tabling and incremental evaluation is sound and complete with respect to its model theory.
Labels: cs.AI, Other; __index_level_0__: 653
1202.1779
Finding the Graph of Epidemic Cascades
We consider the problem of finding the graph on which an epidemic cascade spreads, given only the times when each node gets infected. While this is a problem of importance in several contexts -- offline and online social networks, e-commerce, epidemiology, vulnerabilities in infrastructure networks -- there has been very little work, analytical or empirical, on finding the graph. Clearly, it is impossible to do so from just one cascade; our interest is in learning the graph from a small number of cascades. For the classic and popular "independent cascade" SIR epidemics, we analytically establish the number of cascades required by both the global maximum-likelihood (ML) estimator, and a natural greedy algorithm. Both results are based on a key observation: the global graph learning problem decouples into $n$ local problems -- one for each node. For a node of degree $d$, we show that its neighborhood can be reliably found once it has been infected $O(d^2 \log n)$ times (for ML on general graphs) or $O(d\log n)$ times (for greedy on trees). We also provide a corresponding information-theoretic lower bound of $\Omega(d\log n)$; thus our bounds are essentially tight. Furthermore, if we are given side-information in the form of a super-graph of the actual graph (as is often the case), then the number of cascade samples required -- in all cases -- becomes independent of the network size $n$. Finally, we show that for a very general SIR epidemic cascade model, the Markov graph of infection times is obtained via the moralization of the network graph.
Labels: cs.SI; __index_level_0__: 14,227
2004.05363
WES: Agent-based User Interaction Simulation on Real Infrastructure
We introduce the Web-Enabled Simulation (WES) research agenda, and describe Facebook's WW system. We describe the application of WW to reliability, integrity and privacy at Facebook, where it is used to simulate social media interactions on an infrastructure consisting of hundreds of millions of lines of code. The WES agenda draws on research from many areas of study, including Search Based Software Engineering, Machine Learning, Programming Languages, Multi Agent Systems, Graph Theory, Game AI, and AI Assisted Game Play. We conclude with a set of open problems and research challenges to motivate wider investigation.
Labels: cs.HC, cs.SI, cs.LG, Other; __index_level_0__: 172,169
2008.08290
Attribute Prototype Network for Zero-Shot Learning
From the beginning of zero-shot learning research, visual attributes have been shown to play an important role. In order to better transfer attribute-based knowledge from known to unknown classes, we argue that an image representation with integrated attribute localization ability would be beneficial for zero-shot learning. To this end, we propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features using only class-level attributes. While a visual-semantic embedding layer learns global features, local features are learned through an attribute prototype network that simultaneously regresses and decorrelates attributes from intermediate features. We show that our locality augmented image representations achieve a new state-of-the-art on three zero-shot learning benchmarks. As an additional benefit, our model points to the visual evidence of the attributes in an image, e.g. for the CUB dataset, confirming the improved attribute localization ability of our image representation.
Labels: cs.LG, cs.CV; __index_level_0__: 192,379
2110.11460
MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion
We introduce MUGL, a novel deep neural model for large-scale, diverse generation of single and multi-person pose-based action sequences with locomotion. Our controllable approach enables variable-length generations customizable by action category, across more than 100 categories. To enable intra/inter-category diversity, we model the latent generative space using a Conditional Gaussian Mixture Variational Autoencoder. To enable realistic generation of actions involving locomotion, we decouple local pose and global trajectory components of the action sequence. We incorporate duration-aware feature representations to enable variable-length sequence generation. We use a hybrid pose sequence representation with 3D pose sequences sourced from videos and 3D Kinect-based sequences of NTU-RGBD-120. To enable principled comparison of generation quality, we employ suitably modified strong baselines during evaluation. Although smaller and simpler compared to baselines, MUGL provides better quality generations, paving the way for practical and controllable large-scale human action generation.
Labels: cs.CV, Other; __index_level_0__: 262,488
1301.3641
Training Neural Networks with Stochastic Hessian-Free Optimization
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with gradient and curvature mini-batches independent of the dataset size. We modify Martens' HF for these settings and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. Stochastic Hessian-free optimization gives an intermediary between SGD and HF that achieves competitive performance on both classification and deep autoencoder experiments.
Labels: cs.LG, cs.NE; __index_level_0__: 21,126
1901.00621
Edge-Semantic Learning Strategy for Layout Estimation in Indoor Environment
Visual cognition of the indoor environment can benefit from the spatial layout estimation, which is to represent an indoor scene with a 2D box on a monocular image. In this paper, we propose to fully exploit the edge and semantic information of a room image for layout estimation. More specifically, we present an encoder-decoder network with shared encoder and two separate decoders, which are composed of multiple deconvolution (transposed convolution) layers, to jointly learn the edge maps and semantic labels of a room image. We combine these two network predictions in a scoring function to evaluate the quality of the layouts, which are generated by ray sampling and from a predefined layout pool. Guided by the scoring function, we apply a novel refinement strategy to further optimize the layout hypotheses. Experimental results show that the proposed network can yield accurate estimates of edge maps and semantic labels. By fully utilizing the two different types of labels, the proposed method achieves state-of-the-art layout estimation performance on benchmark datasets.
Labels: cs.CV; __index_level_0__: 117,823
2206.03933
TURJUMAN: A Public Toolkit for Neural Arabic Machine Translation
We present TURJUMAN, a neural toolkit for translating from 20 languages into Modern Standard Arabic (MSA). TURJUMAN exploits the recently-introduced text-to-text Transformer AraT5 model, endowing it with a powerful ability to decode into Arabic. The toolkit offers the possibility of employing a number of diverse decoding methods, making it suited for acquiring paraphrases for the MSA translations as an added value. To train TURJUMAN, we sample from publicly available parallel data employing a simple semantic similarity method to ensure data quality. This allows us to prepare and release AraOPUS-20, a new machine translation benchmark. We publicly release our translation toolkit (TURJUMAN) as well as our benchmark dataset (AraOPUS-20).
Labels: cs.AI, cs.LG, cs.CL; __index_level_0__: 301,455
1703.07907
Robust Polynomial Reconstruction via Chinese Remainder Theorem in the Presence of Small Degree Residue Errors
Based on unique decoding of the polynomial residue code with non-pairwise coprime moduli, a polynomial with degree less than that of the least common multiple (lcm) of all the moduli can be accurately reconstructed when the number of residue errors is less than half the minimum distance of the code. However, once the number of residue errors is beyond half the minimum distance of the code, the unique decoding may fail and lead to a large reconstruction error. In this paper, assuming that all the residues are allowed to have errors with small degrees, we consider how to reconstruct the polynomial as accurately as possible in the sense that a reconstructed polynomial is obtained with only the last $\tau$ number of coefficients being possibly erroneous, when the residues are affected by errors with degrees upper bounded by $\tau$. In this regard, we first propose a multi-level robust Chinese remainder theorem (CRT) for polynomials, namely, a trade-off between the dynamic range of the degree of the polynomial to be reconstructed and the residue error bound $\tau$ is formulated. Furthermore, a simple closed-form reconstruction algorithm is also proposed.
Labels: cs.IT; __index_level_0__: 70,472
2405.12509
Active Object Detection with Knowledge Aggregation and Distillation from Large Models
Accurately detecting active objects undergoing state changes is essential for comprehending human interactions and facilitating decision-making. Existing methods for active object detection (AOD) primarily rely on the visual appearance of objects within the input, such as changes in size, shape, and relationship with hands. However, these visual changes can be subtle, posing challenges, particularly in scenarios with multiple distracting no-change instances of the same category. We observe that state changes are often the result of an interaction performed upon the object, and thus propose to use informed priors about plausible object-related interactions (including semantics and visual appearance) to provide more reliable cues for AOD. Specifically, we propose a knowledge aggregation procedure to integrate the aforementioned informed priors into oracle queries within the teacher decoder, offering more object affordance commonsense to locate the active object. To streamline the inference process and reduce extra knowledge inputs, we propose a knowledge distillation approach that encourages the student decoder to mimic the detection capabilities of the teacher decoder using the oracle query by replicating its predictions and attention. Our proposed framework achieves state-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens, MECCANO, and 100DOH, demonstrating the effectiveness of our approach in improving AOD.
Labels: cs.CV; __index_level_0__: 455,554
2406.07884
Reinforcement Learning to Disentangle Multiqubit Quantum States from Partial Observations
Using partial knowledge of a quantum state to control multiqubit entanglement is a largely unexplored paradigm in the emerging field of quantum interactive dynamics with the potential to address outstanding challenges in quantum state preparation and compression, quantum control, and quantum complexity. We present a deep reinforcement learning (RL) approach to constructing short disentangling circuits for arbitrary 4-, 5-, and 6-qubit states using an actor-critic algorithm. With access to only two-qubit reduced density matrices, our agent decides which pairs of qubits to apply two-qubit gates on; requiring only local information makes it directly applicable to modern NISQ devices. Utilizing a permutation-equivariant transformer architecture, the agent can autonomously identify qubit permutations within the state and adjust the disentangling protocol accordingly. Once trained, it provides circuits from different initial states without further optimization. We demonstrate the agent's ability to identify and exploit the entanglement structure of multiqubit states. For 4-, 5-, and 6-qubit Haar-random states, the agent learns to construct disentangling circuits that exhibit strong correlations both between consecutive gates and among the qubits involved. Through extensive benchmarking, we show the efficacy of the RL approach in finding disentangling protocols with minimal gate resources. We explore the resilience of our trained agents to noise, highlighting their potential for real-world quantum computing applications. Analyzing optimal disentangling protocols, we report a general circuit to prepare an arbitrary 4-qubit state using at most 5 two-qubit (10 CNOT) gates.
Labels: cs.LG; __index_level_0__: 463,258
2206.09114
Bear the Query in Mind: Visual Grounding with Query-conditioned Convolution
Visual grounding is a task that aims to locate a target object according to a natural language expression. As a multi-modal task, feature interaction between textual and visual inputs is vital. However, previous solutions mainly handle each modality independently before fusing them together, which does not take full advantage of relevant textual information while extracting visual features. To better leverage the textual-visual relationship in visual grounding, we propose a Query-conditioned Convolution Module (QCM) that extracts query-aware visual features by incorporating query information into the generation of convolutional kernels. With our proposed QCM, the downstream fusion module receives visual features that are more discriminative and focused on the desired object described in the expression, leading to more accurate predictions. Extensive experiments on three popular visual grounding datasets demonstrate that our method achieves state-of-the-art performance. In addition, the query-aware visual features are informative enough to achieve comparable performance to the latest methods when directly used for prediction without further multi-modal fusion.
Labels: cs.LG, cs.CV; __index_level_0__: 303,437
2012.12410
QuickTumorNet: Fast Automatic Multi-Class Segmentation of Brain Tumors
Non-invasive techniques such as magnetic resonance imaging (MRI) are widely employed in brain tumor diagnostics. However, manual segmentation of brain tumors from 3D MRI volumes is a time-consuming task that requires trained expert radiologists. Due to the subjectivity of manual segmentation, inter-rater reliability is low, which can result in diagnostic discrepancies. As the success of many brain tumor treatments depends on early intervention, early detection is paramount. In this context, a fully automated method for brain tumor segmentation is necessary as an efficient and reliable approach to brain tumor detection and quantification. In this study, we propose an end-to-end approach for brain tumor segmentation, capitalizing on a modified version of QuickNAT, a brain tissue type segmentation deep convolutional neural network (CNN). Our method was evaluated on a dataset of 233 patients' T1-weighted images annotated with three tumor classes (meningioma, glioma, and pituitary). Our model, QuickTumorNet, demonstrated fast, reliable, and accurate brain tumor segmentation that can be utilized to assist clinicians in diagnosis and treatment.
Labels: cs.CV; __index_level_0__: 212,919
2310.00242
Walking = Traversable? : Traversability Prediction via Multiple Human Object Tracking under Occlusion
The emerging "Floor plan from human trails (PfH)" technique has great potential for improving indoor robot navigation by predicting the traversability of occluded floors. This study presents an innovative approach that replaces first-person-view sensors with a third-person-view monocular camera mounted on the observer robot. This approach can gather measurements from multiple humans, expanding its range of applications. The key idea is to use two types of trackers, SLAM and MOT, to monitor stationary objects and moving humans and assess their interactions. This method achieves stable predictions of traversability even in challenging visual scenarios, such as occlusions, nonlinear perspectives, depth uncertainty, and intersections involving multiple humans. Additionally, we extend map quality metrics to apply to traversability maps, facilitating future research. We validate our proposed method through fusion and comparison with established techniques.
Labels: cs.RO, cs.CV; __index_level_0__: 395,888
2011.08500
A Phase Resonance Approach for Modal Testing of Structures with Nonlinear Dissipation
The concept of nonlinear modes is useful for the dynamical characterization of nonlinear mechanical systems. While efficient and broadly applicable methods are now available for the computation of nonlinear modes, nonlinear modal testing is still in its infancy. The purpose of this work is to overcome its present limitation to conservative nonlinearities. Our approach relies on the recently extended periodic motion concept, according to which nonlinear modes of damped systems are defined as family of periodic motions induced by an appropriate artificial excitation that compensates the natural dissipation. The particularly simple experimental implementation with only a single-point, single-frequency, phase resonant forcing is analyzed in detail. The method permits the experimental extraction of natural frequencies, modal damping ratios and deflection shapes (including harmonics), for each mode of interest, as function of the vibration level. The accuracy, robustness and current limitations of the method are first demonstrated numerically. The method is then verified experimentally for a friction-damped system. Moreover, a self-contained measure for estimating the quality of the extracted modal properties is investigated. The primary advantages over alternative vibration testing methods are noise robustness, broad applicability and short measurement duration. The central limitation of the identified modal quantities is that they only characterize the system in the regime near isolated resonances.
Labels: cs.SY; __index_level_0__: 206,895
2411.02837
On the Comparison between Multi-modal and Single-modal Contrastive Learning
Multi-modal contrastive learning with language supervision has presented a paradigm shift in modern machine learning. By pre-training on a web-scale dataset, multi-modal contrastive learning can learn high-quality representations that exhibit impressive robustness and transferability. Despite its empirical success, the theoretical understanding is still in its infancy, especially regarding its comparison with single-modal contrastive learning. In this work, we introduce a feature learning theory framework that provides a theoretical foundation for understanding the differences between multi-modal and single-modal contrastive learning. Based on a data generation model consisting of signal and noise, our analysis is performed on a ReLU network trained with the InfoMax objective function. Through a trajectory-based optimization analysis and generalization characterization on downstream tasks, we identify the critical factor, which is the signal-to-noise ratio (SNR), that impacts the generalizability in downstream tasks of both multi-modal and single-modal contrastive learning. Through the cooperation between the two modalities, multi-modal learning can achieve better feature learning, leading to improvements in performance in downstream tasks compared to single-modal learning. Our analysis provides a unified framework that can characterize the optimization and generalization of both single-modal and multi-modal contrastive learning. Empirical experiments on both synthetic and real-world datasets further consolidate our theoretical findings.
Labels: cs.LG; __index_level_0__: 505,684
2306.09858
Prototype Learning for Explainable Brain Age Prediction
The lack of explainability of deep learning models limits the adoption of such models in clinical practice. Prototype-based models can provide inherent explainable predictions, but these have predominantly been designed for classification tasks, despite many important tasks in medical imaging being continuous regression problems. Therefore, in this work, we present ExPeRT: an explainable prototype-based model specifically designed for regression tasks. Our proposed model makes a sample prediction from the distances to a set of learned prototypes in latent space, using a weighted mean of prototype labels. The distances in latent space are regularized to be relative to label differences, and each of the prototypes can be visualized as a sample from the training set. The image-level distances are further constructed from patch-level distances, in which the patches of both images are structurally matched using optimal transport. This thus provides an example-based explanation with patch-level detail at inference time. We demonstrate our proposed model for brain age prediction on two imaging datasets: adult MR and fetal ultrasound. Our approach achieved state-of-the-art prediction performance while providing insight into the model's reasoning process.
Labels: cs.CV; __index_level_0__: 373,995
2104.08030
Temporally smooth online action detection using cycle-consistent future anticipation
Many video understanding tasks work in the offline setting by assuming that the input video is given from the start to the end. However, many real-world problems require the online setting, in which a decision must be made immediately using only the current and past frames, as in autonomous driving and surveillance systems. In this paper, we present a novel solution for online action detection using a simple yet effective RNN-based network called the Future Anticipation and Temporally Smoothing network (FATSnet). The proposed network consists of a module for anticipating the future, which can be trained in an unsupervised manner with the cycle-consistency loss, and another component for aggregating the past and the future for temporally smooth frame-by-frame predictions. We also propose a solution to relieve the performance loss when running RNN-based models on very long sequences. Evaluations on TVSeries, THUMOS14, and BBDB show that our method achieves state-of-the-art performance compared to previous work on online action detection.
Labels: cs.CV; __index_level_0__: 230,639
2302.01860
GLADIS: A General and Large Acronym Disambiguation Benchmark
Acronym Disambiguation (AD) is crucial for natural language understanding on various sources, including biomedical reports, scientific papers, and search engine queries. However, existing acronym disambiguation benchmarks and tools are limited to specific domains, and the size of prior benchmarks is rather small. To accelerate the research on acronym disambiguation, we construct a new benchmark named GLADIS with three components: (1) a much larger acronym dictionary with 1.5M acronyms and 6.4M long forms; (2) a pre-training corpus with 160 million sentences; (3) three datasets that cover the general, scientific, and biomedical domains. We then pre-train a language model, AcroBERT, on our constructed corpus for general acronym disambiguation, and show the challenges and values of our new benchmark.
Labels: cs.CL; __index_level_0__: 343,771
2202.01537
Bending Graphs: Hierarchical Shape Matching using Gated Optimal Transport
Shape matching has been a long-studied problem for the computer graphics and vision community. The objective is to predict a dense correspondence between meshes that have a certain degree of deformation. Existing methods either consider the local description of sampled points or discover correspondences based on global shape information. In this work, we investigate a hierarchical learning design, into which we incorporate local patch-level information and global shape-level structures. This flexible representation enables correspondence prediction and provides rich features for the matching stage. Finally, we propose a novel optimal transport solver by recurrently updating features on non-confident nodes to learn globally consistent correspondences between the shapes. Our results on publicly available datasets suggest robust performance in the presence of severe deformations without the need for extensive training or refinement.
Labels: cs.CV, Other; __index_level_0__: 278,514
2201.01497
Self-dual 2-quasi-cyclic Codes and Dihedral Codes
We characterize the structure of 2-quasi-cyclic codes over a finite field F by the so-called Goursat Lemma. With this characterization, we exhibit a necessary and sufficient condition for a 2-quasi-cyclic code to be a dihedral code. We also obtain a necessary and sufficient condition for a self-dual 2-quasi-cyclic code to be a dihedral code (if char F = 2) or a consta-dihedral code (if char F is odd). As a consequence, any self-dual 2-quasi-cyclic code generated by one element must be (consta-)dihedral. In particular, any self-dual double circulant code must be (consta-)dihedral. We also show a necessary and sufficient condition under which the three classes (the self-dual double circulant codes, the self-dual 2-quasi-cyclic codes, and the self-dual (consta-)dihedral codes) coincide with each other.
Labels: cs.IT; __index_level_0__: 274,273
2404.17196
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
Presently, with the assistance of advanced LLM application development frameworks, more and more LLM-powered applications can effortlessly augment the LLMs' knowledge with external content using the retrieval augmented generation (RAG) technique. However, these frameworks' designs do not have sufficient consideration of the risk of external content, thereby allowing attackers to undermine the applications developed with these frameworks. In this paper, we reveal a new threat to LLM-powered applications, termed retrieval poisoning, where attackers can guide the application to yield malicious responses during the RAG process. Specifically, through the analysis of LLM application frameworks, attackers can craft documents visually indistinguishable from benign ones. Despite the documents providing correct information, once they are used as reference sources for RAG, the application is misled into generating incorrect responses. Our preliminary experiments indicate that attackers can mislead LLMs with an 88.33% success rate, and achieve a 66.67% success rate in the real-world application, demonstrating the potential impact of retrieval poisoning.
Labels: cs.AI, cs.CR; __index_level_0__: 449,773
1903.09199
Sparse2Dense: From direct sparse odometry to dense 3D reconstruction
In this paper, we propose a new deep learning based dense monocular SLAM method. Compared to existing methods, the proposed framework constructs a dense 3D model via a sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
Labels: cs.RO, cs.CV; __index_level_0__: 124,995
1202.2709
Potential Theory for Directed Networks
Uncovering the factors underlying network formation is a long-standing challenge for data mining and network analysis. In particular, the microscopic organizing principles of directed networks are less understood than those of undirected networks. This article proposes a hypothesis named potential theory, which assumes that every directed link corresponds to a decrease of one unit of potential, and that subgraphs with definable potential values for all nodes are preferred. Combining the potential theory with the clustering and homophily mechanisms, we deduce that the Bi-fan structure, consisting of 4 nodes and 4 directed links, is the most favored local structure in directed networks. Our hypothesis receives strong positive support from extensive experiments on 15 directed networks drawn from disparate fields, as indicated by the most accurate and robust performance of the Bi-fan predictor within the link prediction framework. In summary, our main contribution is twofold: (i) we propose a new mechanism for the local organization of directed networks; (ii) we design the corresponding link prediction algorithm, which not only tests our hypothesis but also has direct applications in missing link prediction and friendship recommendation.
Labels: cs.SI, cs.IR; __index_level_0__: 14,300
2411.16326
Brain-like emergent properties in deep networks: impact of network architecture, datasets and training
Despite the rapid pace at which deep networks are improving on standardized vision benchmarks, they are still outperformed by humans on real-world vision tasks. This paradoxical lack of generalization could be addressed by making deep networks more brain-like. Although several benchmarks have compared the ability of deep networks to predict brain responses to natural images, they do not capture subtle but important brain-like emergent properties. To resolve this issue, we report several well-known perceptual and neural emergent properties that can be tested on deep networks. To evaluate how various design factors impact brain-like properties, we systematically evaluated over 30 state-of-the-art networks with varying network architectures, training datasets and training regimes. Our main findings are as follows. First, network architecture had the strongest impact on brain-like properties compared to dataset and training regime variations. Second, networks varied widely in their alignment to the brain with no single network outperforming all others. Taken together, our results complement existing benchmarks by revealing brain-like properties that are either emergent or lacking in state-of-the-art deep networks.
Labels: cs.AI, cs.CV; __index_level_0__: 510,990
2209.02995
From Human Walking to Bipedal Robot Locomotion: Reflex Inspired Compensation on Planned and Unplanned Downsteps
Humans are able to negotiate downstep behaviors -- both planned and unplanned -- with remarkable agility and ease. The goal of this paper is to systematically study the translation of this human behavior to bipedal walking robots, even if the morphology is inherently different. Concretely, we begin with human data wherein planned and unplanned downsteps are taken. We analyze this data from the perspective of reduced-order modeling of the human, encoding the center of mass (CoM) kinematics and contact forces, which allows for the translation of these behaviors into the corresponding reduced-order model of a bipedal robot. We embed the resulting behaviors into the full-order dynamics of a bipedal robot via nonlinear optimization-based controllers. The end result is the demonstration of planned and unplanned downsteps in simulation on an underactuated walking robot.
Labels: cs.RO; __index_level_0__: 316,368
1401.6495
User Participation in an Academic Social Networking Service: A Survey of Open Group Users on Mendeley
Although there are a number of social networking services that specifically target scholars, little has been published about the actual practices and the usage of these so-called academic social networking services (ASNSs). To fill this gap, we explore the populations of academics who engage in social activities using an ASNS; as an indicator of further engagement, we also determine their various motivations for joining a group in ASNSs. Using groups and their members in Mendeley as the platform for our case study, we obtained 146 participant responses from our online survey about users' common activities, usage habits, and motivations for joining groups. Our results show that 1) participants did not engage with social-based features as frequently and actively as they engaged with research-based features, and 2) users who joined more groups seemed to have a stronger motivation to increase their professional visibility and to contribute the research articles they had read to the group reading list. Our results generate interesting insights into Mendeley's user populations, their activities, and their motivations relative to the social features of Mendeley. We also argue that further design of ASNSs is needed to take greater account of disciplinary differences in scholarly communication and to establish incentive mechanisms for encouraging user participation.
Labels: cs.SI; __index_level_0__: 30,360
2209.08907
Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning
In this paper, we build upon the emerging topic of loss function learning, which aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach. The framework first uses evolution-based methods to search the space of primitive mathematical operations to find a set of symbolic loss functions. Second, the set of learned loss functions is subsequently parameterized and optimized via an end-to-end gradient-based training procedure. The versatility of the proposed framework is empirically validated on a diverse set of supervised learning tasks. Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods on a diverse range of neural network architectures and datasets.
Labels: cs.LG, cs.NE; __index_level_0__: 318,312
2310.01367
On the Ziv-Merhav theorem beyond Markovianity
We generalize to a broader class of decoupled measures a result of Ziv and Merhav on universal estimation of the specific cross (or relative) entropy for a pair of multi-level Markov measures. The result covers pairs of suitably regular g-measures and pairs of equilibrium measures arising from the small space of interactions in mathematical statistical mechanics.
Labels: cs.IT; __index_level_0__: 396,387
2301.05623
Reworking geometric morphometrics into a methodology of transformation grids
Today's typical application of geometric morphometrics to a quantitative comparison of organismal anatomies begins by standardizing samples of homologously labelled point configurations for location, orientation, and scale, and then renders the ensuing comparisons graphically by thin-plate spline as applied to group averages, principal components, regression predictions, or canonical variates. The scale-standardization step has recently come under criticism as inappropriate, at least for growth studies. This essay argues for a similar rethinking of the centering and rotation, and then the replacement of the thin-plate spline interpolant of the resulting configurations by a different strategy that leaves unexplained residuals at every landmark individually in order to simplify the interpretation of the displayed grid as a whole, the "transformation grid" that has been highlighted as the true underlying topic ever since D'Arcy Thompson's exposition of 1917. For analyses of comparisons involving gradients at large geometric scale, this paper argues for replacement of all the Procrustes conventions by a version of my two-point registration of 1986 (originally Francis Galton's of 1907). The choice of the two points interacts with another non-Procrustes concern, interpretability of the grid lines of a coordinate system deformed according to a fitted polynomial trend rather than an interpolating thin-plate spline. The paper works two examples using previously published cranial data; there result new findings pertinent to the interpretation of both of these classic data sets. A concluding discussion suggests that the current toolkit of geometric morphometrics, centered on Procrustes shape coordinates and thin-plate splines, is too restricted to suit many of the interpretive purposes of evolutionary and developmental biology.
Labels: cs.CV; __index_level_0__: 340,402
2105.10568
High Throughput Soybean Pod-Counting with In-Field Robotic Data Collection and Machine-Vision Based Data Analysis
We report promising results for high-throughput, on-field soybean pod counting with small mobile robots and machine-vision algorithms. Our results show that machine-vision based soybean pod counts are strongly correlated with soybean yield. While pod counts have a strong correlation with yield, pod counting is extremely labor intensive and has been difficult to automate. Our results establish that an autonomous robot equipped with vision sensors can autonomously collect soybean data at maturity. Machine-vision algorithms can be used to estimate pod counts across a large diversity panel planted across experimental units (EUs, or plots) in a high-throughput, automated manner. We report a correlation of 0.67 between our automated pod counts and soybean yield. The data was collected in an experiment consisting of 1463 single-row plots maintained by the University of Illinois soybean breeding program during the 2020 growing season. We also report a correlation of 0.88 between automated pod counts and manual pod counts over a smaller data set of 16 plots.
Labels: cs.RO, cs.CV; __index_level_0__: 236,433
2203.11506
Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition
Deep neural networks perform poorly on heavily class-imbalanced datasets. Given the promising performance of contrastive learning, we propose Rebalanced Siamese Contrastive Mining (ResCom) to tackle imbalanced recognition. Based on the mathematical analysis and simulation results, we claim that supervised contrastive learning suffers from a dual class-imbalance problem at both the original batch and Siamese batch levels, which is more serious than long-tailed classification learning. In this paper, at the original batch level, we introduce a class-balanced supervised contrastive loss to assign adaptive weights for different classes. At the Siamese batch level, we present a class-balanced queue, which maintains the same number of keys for all classes. Furthermore, we note that the imbalanced contrastive loss gradient with respect to the contrastive logits can be decoupled into the positives and negatives, and easy positives and easy negatives will make the contrastive gradient vanish. We propose supervised hard positive and negative pairs mining to pick up informative pairs for contrastive computation and improve representation learning. Finally, to approximately maximize the mutual information between the two views, we propose Siamese Balanced Softmax and combine it with the contrastive loss for one-stage training. Extensive experiments demonstrate that ResCom outperforms previous methods by large margins on multiple long-tailed recognition benchmarks. Our code and models are made publicly available at: https://github.com/dvlab-research/ResCom.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
286,943
2401.02153
Unit Testing in ASP Revisited: Language and Test-Driven Development Environment
Unit testing frameworks are nowadays considered a best practice, included in almost all modern software development processes, to achieve rapid development of correct specifications. Knowledge representation and reasoning paradigms such as Answer Set Programming (ASP), which have been used in industry-level applications, are not an exception. Indeed, the first unit testing specification language for ASP was proposed in 2011 as a feature of the ASPIDE development environment. Later, a more portable unit testing language was included in the LANA annotation language. In this paper we revisit both languages and tools for unit testing in ASP. We propose a new unit test specification language that allows one to inline tests within ASP programs, and we identify the computational complexity of the tasks associated with checking the various program-correctness assertions. Test-case specifications are transparent to the traditional evaluation, but can be interpreted by a specific testing tool. Thus, we present a novel environment supporting test-driven development of ASP programs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
419,624
1905.12052
A New Distribution on the Simplex with Auto-Encoding Applications
We construct a new distribution for the simplex using the Kumaraswamy distribution and an ordered stick-breaking process. We explore and develop the theoretical properties of this new distribution and prove that it exhibits symmetry under the same conditions as the well-known Dirichlet. Like the Dirichlet, the new distribution is adept at capturing sparsity but, unlike the Dirichlet, has an exact and closed-form reparameterization, making it well suited for deep variational Bayesian modeling. We demonstrate the distribution's utility in a variety of semi-supervised auto-encoding tasks. In all cases, the resulting models achieve competitive performance commensurate with their simplicity, use of explicit probability models, and abstinence from adversarial training.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,631
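The closed-form reparameterization highlighted in the abstract above comes from the Kumaraswamy inverse CDF, x = (1 - (1 - u)^(1/b))^(1/a). Below is a minimal NumPy sketch of the stick-breaking construction it describes; this is an assumed reading of the abstract, not the authors' released code, and it omits the ordering of the sticks.

```python
import numpy as np

def sample_kumaraswamy(a, b, rng):
    """Kumaraswamy(a, b) sample via the closed-form inverse CDF."""
    u = rng.uniform(0.0, 1.0)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def stick_breaking_simplex(a_params, b_params, rng=None):
    """Map K-1 Kumaraswamy 'stick fractions' to a point on the K-simplex."""
    if rng is None:
        rng = np.random.default_rng()
    remaining, probs = 1.0, []
    for a, b in zip(a_params, b_params):   # K-1 sticks
        v = sample_kumaraswamy(a, b, rng)
        probs.append(v * remaining)        # break off a fraction of what's left
        remaining *= 1.0 - v
    probs.append(remaining)                # last coordinate absorbs the rest
    return np.array(probs)                 # non-negative, sums to 1

sample = stick_breaking_simplex([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```

Because every step is differentiable in (a, b) for a fixed uniform draw u, the same recipe yields pathwise gradients when u is treated as the noise source, which is the property the abstract exploits for variational training.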
1910.10228
Modeling and simulation for varying die gap fluid extrusion
The analysis and the simulation of fluid extruded out of a die opening are of great importance in different industrial applications. Mathematical modeling and experimental studies of extrusion out of a constant die gap have been the subject of numerous publications in the literature. Extrusion Blow Molding is a polymer forming technique used to manufacture a wide range of large hollow parts such as fuel tanks used in the automotive industry. The die gap is varied during extrusion to adjust the manufactured product's final thickness and to compensate for uneven stretching during molding. In this work, mathematical modeling of varying die gap extrusion is introduced. Two models using different approaches are presented to model the time-dependent effects of varying the die gap on the extrudate. In the first approach, physics-based partial differential equations are used to simulate extrusion. The Navier-Stokes equations are used as governing equations to simulate molten polymers. The governing equations are solved using an Arbitrary Lagrangian-Eulerian based Finite Element Method. A novel numerical method for defining the free surface of the extrudate is proposed. In addition, the effects of varying the die gap on the material velocity, pressure and extrudate shape are reported. In the second approach, a parameter identification scheme is used to obtain the parameters of a novel model with a predefined structure. The purpose of the proposed model is to replicate the time-dependent extrudate thickness of the physics-based model.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
150,420
2408.14195
Representative Arm Identification: A fixed confidence approach to identify cluster representatives
We study the representative arm identification (RAI) problem in the multi-armed bandits (MAB) framework, wherein we have a collection of arms, each associated with an unknown reward distribution. An underlying instance is defined by a partitioning of the arms into clusters of predefined sizes, such that for any $j > i$, all arms in cluster $i$ have a larger mean reward than those in cluster $j$. The goal in RAI is to reliably identify a certain prespecified number of arms from each cluster, while using as few arm pulls as possible. The RAI problem covers as special cases several well-studied MAB problems such as identifying the best arm or any $M$ out of the top $K$, as well as both full and coarse ranking. We start by providing an instance-dependent lower bound on the sample complexity of any feasible algorithm for this setting. We then propose two algorithms, based on the idea of confidence intervals, and provide high probability upper bounds on their sample complexity, which orderwise match the lower bound. Finally, we do an empirical comparison of both algorithms along with an LUCB-type alternative on both synthetic and real-world datasets, and demonstrate the superior performance of our proposed schemes in most cases.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
483,452
2210.02186
TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis
Time series analysis is of immense importance in extensive applications, such as weather forecasting, anomaly detection, and action recognition. This paper focuses on temporal variation modeling, which is the common key problem of extensive analysis tasks. Previous methods attempt to accomplish this directly from the 1D time series, which is extremely challenging due to the intricate temporal patterns. Based on the observation of multi-periodicity in time series, we disentangle the complex temporal variations into multiple intraperiod and interperiod variations. To tackle the limitations of 1D time series in representation capability, we extend the analysis of temporal variations into the 2D space by transforming the 1D time series into a set of 2D tensors based on multiple periods. This transformation can embed the intraperiod and interperiod variations into the columns and rows of the 2D tensors, respectively, making the 2D variations easy to model with 2D kernels. Technically, we propose TimesNet with TimesBlock as a task-general backbone for time series analysis. TimesBlock can discover the multi-periodicity adaptively and extract the complex temporal variations from transformed 2D tensors by a parameter-efficient inception block. Our proposed TimesNet achieves consistent state-of-the-art performance in five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection. Code is available at this repository: https://github.com/thuml/TimesNet.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,556
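The 1D-to-2D folding that the TimesNet abstract describes can be pictured with a few lines of NumPy. The sketch below is a toy approximation under stated assumptions (a single dominant FFT bin picks the period; the ragged tail is dropped), not the code from the linked repository.

```python
import numpy as np

def fold_by_period(x):
    """Fold a 1D series into a [n_periods, period] grid: rows capture
    interperiod variation, columns capture intraperiod variation."""
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                             # ignore the DC component
    freq = int(np.argmax(spec))               # dominant frequency bin
    period = max(1, len(x) // max(freq, 1))
    n_rows = len(x) // period                 # drop the ragged tail
    return period, x[: n_rows * period].reshape(n_rows, period)

t = np.arange(256)
series = np.sin(2 * np.pi * t / 16) + 0.1 * np.random.default_rng(0).normal(size=256)
p, grid = fold_by_period(series)              # p recovers roughly 16 here
```

A 2D convolution applied to `grid` then sees both kinds of variation at once, which is the intuition behind modeling the folded tensor with 2D kernels.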
2009.08692
DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement
The remastering of vintage film comprises a diversity of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively. Our work is based on temporal convolutional neural networks with attention mechanisms trained on videos with data-driven deterioration simulation. Our proposed source-reference attention allows the model to handle an arbitrary number of reference color images to colorize long videos without the need for segmentation while maintaining temporal consistency. Quantitative analysis shows that our framework outperforms existing approaches and that, in contrast to existing approaches, the performance of our framework increases with longer videos and more reference color images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
196,320
2005.06594
Action Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data
This paper introduces Action Image, a new grasp proposal representation that allows learning an end-to-end deep-grasping policy. Our model achieves $84\%$ grasp success on $172$ real world objects while being trained only in simulation on $48$ objects with just naive domain randomization. Similar to computer vision problems, such as object detection, Action Image builds on the idea that object features are invariant to translation in image space. Therefore, grasp quality is invariant when evaluating the object-gripper relationship; a successful grasp for an object depends on its local context, but is independent of the surrounding environment. Action Image represents a grasp proposal as an image and uses a deep convolutional network to infer grasp quality. We show that by using an Action Image representation, trained networks are able to extract local, salient features of grasping tasks that generalize across different objects and environments. We show that this representation works on a variety of inputs, including color images (RGB), depth images (D), and combined color-depth (RGB-D). Our experimental results demonstrate that networks utilizing an Action Image representation exhibit strong domain transfer between training on simulated data and inference on real-world sensor streams. Finally, our experiments show that a network trained with Action Image improves grasp success ($84\%$ vs. $53\%$) over a baseline model with the same structure, but using actions encoded as vectors.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
177,039
2006.04455
Continual Representation Learning for Biometric Identification
With the explosion of digital data in recent years, continuously learning new tasks from a stream of data without forgetting previously acquired knowledge has become increasingly important. In this paper, we propose a new continual learning (CL) setting, namely ``continual representation learning'', which focuses on learning better representations in a continuous way. We also provide two large-scale multi-step benchmarks for biometric identification, where the visual appearance of different classes is highly relevant. In contrast to requiring the model to recognize more learned classes, we aim to learn feature representations that can be better generalized, not only to previously unseen images but also to unseen classes/identities. For the new setting, we propose a novel approach that performs knowledge distillation over a large number of identities by applying neighbourhood selection and consistency relaxation strategies to improve the scalability and flexibility of the continual learning model. We demonstrate that existing CL methods can improve the representation in the new setting, and that our method achieves better results than the competitors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
180,698
2502.02302
EdgeGFL: Rethinking Edge Information in Graph Feature Preference Learning
Graph Neural Networks (GNNs) have significant advantages in handling non-Euclidean data and have been widely applied across various areas, thus receiving increasing attention in recent years. The framework of GNN models mainly includes the information propagation phase and the aggregation phase, treating nodes and edges as information entities and propagation channels, respectively. However, most existing GNN models face the challenge of disconnection between node and edge feature information, as these models typically treat the learning of edge and node features as independent tasks. To address this limitation, we aim to develop an edge-empowered graph feature preference learning framework that can capture edge embeddings to assist node embeddings. By leveraging the learned multidimensional edge feature matrix, we construct multi-channel filters to more effectively capture accurate node features, thereby obtaining the non-local structural characteristics and fine-grained high-order node features. Specifically, the inclusion of multidimensional edge information enhances the functionality and flexibility of the GNN model, enabling it to handle complex and diverse graph data more effectively. Additionally, integrating relational representation learning into the message passing framework allows graph nodes to receive more useful information, thereby facilitating node representation learning. Finally, experiments on four real-world heterogeneous graphs demonstrate the effectiveness of the proposed model.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
530,252
2211.11423
Blur Interpolation Transformer for Real-World Motion from Blur
This paper studies the challenging problem of recovering motion from blur, also known as joint deblurring and interpolation or blur temporal super-resolution. The challenges are twofold: 1) the current methods still leave considerable room for improvement in terms of visual quality even on the synthetic dataset, and 2) poor generalization to real-world data. To this end, we propose a blur interpolation transformer (BiT) to effectively unravel the underlying temporal correlation encoded in blur. Based on multi-scale residual Swin transformer blocks, we introduce dual-end temporal supervision and temporally symmetric ensembling strategies to generate effective features for time-varying motion rendering. In addition, we design a hybrid camera system to collect the first real-world dataset of one-to-many blur-sharp video pairs. Experimental results show that BiT has a significant gain over the state-of-the-art methods on the public dataset Adobe240. Besides, the proposed real-world dataset effectively helps the model generalize well to real blurry scenarios. Code and data are available at https://github.com/zzh-tech/BiT.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,725
2307.06135
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner; and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects, and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, natural language instructions for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
378,979
1704.00551
AutoSVD++: An Efficient Hybrid Collaborative Filtering Model via Contractive Auto-encoders
Collaborative filtering (CF) has been successfully used to provide users with personalized products and services. However, dealing with the increasing sparseness of the user-item matrix remains a challenge. To tackle this issue, hybrid CF, such as combining with content-based filtering and leveraging side information of users and items, has been extensively studied to enhance performance. However, most of these approaches depend on hand-crafted feature engineering, which is usually noise-prone and biased by different feature extraction and selection schemes. In this paper, we propose a new hybrid model by generalizing the contractive auto-encoder paradigm into a matrix factorization framework with good scalability and computational efficiency, which jointly models content information as effective and compact representations and leverages implicit user feedback to make accurate recommendations. Extensive experiments conducted over three large-scale real datasets indicate that the proposed approach outperforms the compared methods for item recommendation.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
71,102
2006.16578
Accelerating Binarized Neural Networks via Bit-Tensor-Cores in Turing GPUs
Despite the promise of tremendous speedups over conventional deep neural networks, the performance advantage of binarized neural networks (BNNs) has hardly been demonstrated on general-purpose processors such as CPUs and GPUs. In fact, due to being unable to leverage bit-level parallelism with a word-based architecture, GPUs have been criticized for extremely low utilization (1%) when executing BNNs. Consequently, the latest tensorcores in NVIDIA Turing GPUs have started to experimentally support bit computation. In this work, we look into this brand-new bit computation capability and characterize its unique features. We show that the stride of memory access can significantly affect performance delivery, and a data-format co-design is highly desired to support the tensorcores in achieving superior performance over existing software solutions without tensorcores. We realize the tensorcore-accelerated BNN design, particularly the major functions for fully-connected and convolution layers -- bit matrix multiplication and bit convolution. Evaluations on two NVIDIA Turing GPUs show that, with ResNet-18, our BTC-BNN design can process ImageNet at a rate of 5.6K images per second, 77% faster than the state-of-the-art. Our BNN approach is released at https://github.com/pnnl/TCBNN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
184,850
2407.20547
Neuromorphic on-chip reservoir computing with spiking neural network architectures
Reservoir computing is a promising approach for harnessing the computational power of recurrent neural networks while dramatically simplifying training. This paper investigates the application of integrate-and-fire neurons within reservoir computing frameworks for two distinct tasks: capturing chaotic dynamics of the H\'enon map and forecasting the Mackey-Glass time series. Integrate-and-fire neurons can be implemented in low-power neuromorphic architectures such as Intel Loihi. We explore the impact of network topologies created through random interactions on the reservoir's performance. Our study reveals task-specific variations in network effectiveness, highlighting the importance of tailored architectures for distinct computational tasks. To identify optimal network configurations, we employ a meta-learning approach combined with simulated annealing. This method efficiently explores the space of possible network structures, identifying architectures that excel in different scenarios. The resulting networks demonstrate a range of behaviors, showcasing how inherent architectural features influence task-specific capabilities. We study the reservoir computing performance using a custom integrate-and-fire code, Intel's Lava neuromorphic computing software framework, and via an on-chip implementation in Loihi. We conclude with an analysis of the energy performance of the Loihi architecture.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
477,198
1411.1356
Impact of credit default swaps on financial contagion
It had been believed in conventional practice that the risk of a bank going bankrupt is lessened in a straightforward manner by transferring the risk of loan defaults. But the failure of American International Group in 2008 revealed a more complex aspect of financial contagion. This study presents an extension of the asset network systemic risk model (ANWSER) to investigate whether credit default swaps mitigate or intensify the severity of financial contagion. A protection buyer bank transfers the risk of every possible debtor bank default to protection seller banks. The empirical distribution of the number of bank bankruptcies is obtained with the extended model. A systemic capital buffer ratio is calculated from the distribution. The ratio quantifies the effective loss-absorbency capability of the entire financial system to force back financial contagion. The key finding is that the leverage ratio is a good estimate of a systemic capital buffer ratio as the backstop of a financial system. The risk transfer from small and medium banks to big banks in an interbank network does not mitigate the severity of financial contagion.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
37,333
1909.12989
SURREAL-System: Fully-Integrated Stack for Distributed Deep Reinforcement Learning
We present an overview of SURREAL-System, a reproducible, flexible, and scalable framework for distributed reinforcement learning (RL). The framework consists of a stack of four layers: Provisioner, Orchestrator, Protocol, and Algorithms. The Provisioner abstracts away the machine hardware and node pools across different cloud providers. The Orchestrator provides a unified interface for scheduling and deploying distributed algorithms by high-level description, which is capable of deploying to a wide range of hardware from a personal laptop to full-fledged cloud clusters. The Protocol provides network communication primitives optimized for RL. Finally, the SURREAL algorithms, such as Proximal Policy Optimization (PPO) and Evolution Strategies (ES), can easily scale to 1000s of CPU cores and 100s of GPUs. The learning performances of our distributed algorithms establish new state-of-the-art on OpenAI Gym and Robotics Suites tasks.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
147,286
2306.14915
A GPT-4 Reticular Chemist for Guiding MOF Discovery
We present a new framework integrating the AI model GPT-4 into the iterative process of reticular chemistry experimentation, leveraging a cooperative workflow of interaction between AI and a human researcher. This GPT-4 Reticular Chemist is an integrated system composed of three phases. Each of these utilizes GPT-4 in various capacities, wherein GPT-4 provides detailed instructions for chemical experimentation and the human provides feedback on the experimental outcomes, including both successes and failures, for the in-context learning of the AI in the next iteration. This iterative human-AI interaction enabled GPT-4 to learn from the outcomes, much like an experienced chemist, through a prompt-learning strategy. Importantly, the system is based on natural language for both development and operation, eliminating the need for coding skills and thus making it accessible to all chemists. Our collaboration with the GPT-4 Reticular Chemist guided the discovery of an isoreticular series of MOFs, with each synthesis fine-tuned through iterative feedback and expert suggestions. This workflow presents a potential for broader applications in scientific research by harnessing the capability of large language models like GPT-4 to enhance the feasibility and efficiency of research activities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
375,863
2006.12311
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
Empowered by expressive function approximators such as neural networks, deep reinforcement learning (DRL) achieves tremendous empirical successes. However, learning expressive function approximators requires collecting a large dataset (interventional data) by interacting with the environment. Such a lack of sample efficiency prohibits the application of DRL to critical scenarios, e.g., autonomous driving and personalized medicine, since trial and error in the online setting is often unsafe and even unethical. In this paper, we study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting. To incorporate the possibly confounded observational data, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. More specifically, DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting by a multiplicative factor, which decreases towards zero when the confounded observational data are more informative upon the adjustments. Our algorithm and analysis serve as a step towards causal reinforcement learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,545
2403.03465
Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies, leading to performance degradation. Furthermore, this weakness is magnified when the concerned graph is characterized by heterophily (low homophily). To solve this issue, this paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA). The proposed scheme exhibits an exceptional generalization capability in node-level representation learning. The proposed GCN-SA contains two enhancements corresponding to edges and node features. For edges, we utilize a self-attention mechanism to design a stable and effective graph-structure-learning module that can capture the internal correlation between any pair of nodes. This graph-structure-learning module can identify reliable neighbors for each node from the entire graph. Regarding node features, we modify the transformer block to make it more applicable, enabling the GCN to fuse valuable information from the entire graph. These two enhancements work in distinct ways to help our GCN-SA capture long-range dependencies, enabling it to perform representation learning on graphs with varying levels of homophily. The experimental results on benchmark datasets demonstrate the effectiveness of the proposed GCN-SA. Compared to other outstanding GNN counterparts, the proposed GCN-SA is competitive.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
435,207
2104.11082
Digestive System Dynamics in Molecular Communication Perspectives
Consumption of food in excess of optimal nutritional requirements has already resulted in a global crisis, both from the perspective of human health, such as obesity, and from that of food waste and sustainability. In order to minimize the impact of these issues, there is a need to develop novel, innovative, and effective solutions that can optimally match food consumption to demand. This requires an accurate understanding of food digestion dynamics and their impact on each individual's physiological characteristics. This study proposes a model to characterize digestive system dynamics by using concepts from the field of Molecular Communications (MC); this includes integrating advection-diffusion and reaction mechanisms and their role in characterizing the digestion process as a communication system. The model is then used to explore starch digestion dynamics using communication system metrics such as delay and path loss. Our simulations found that the long gastric emptying time increases the delay in starch digestion and, in turn, the glucose production and absorption into the blood stream. At the same time, the enzyme activity on the hydrolyzed starch directly impacts the path loss, as higher reaction rates and a lower half-saturation concentration of starch result in lower path loss. Our work can provide insights tailored to each individual by creating a digital-twin model of digestion.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
231,812
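The advection-diffusion-reaction transport that the molecular-communication abstract above invokes has a standard numerical form, dc/dt = D d²c/dx² - u dc/dx - k c. The following explicit finite-difference step is purely schematic, with illustrative parameters, and is not the authors' digestive-tract model.

```python
import numpy as np

def step(c, dx, dt, D, u, k):
    """One explicit Euler step of 1D advection-diffusion-reaction
    (periodic boundaries for brevity)."""
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2  # diffusion
    adv = (c - np.roll(c, 1)) / dx                            # upwind advection
    return c + dt * (D * lap - u * adv - k * c)

x = np.linspace(0.0, 1.0, 200)
c = np.exp(-((x - 0.2) ** 2) / 0.001)     # initial bolus of nutrient
for _ in range(500):
    c = step(c, dx=x[1] - x[0], dt=1e-5, D=1e-3, u=0.5, k=0.1)
```

Delay and path-loss analogues can then be read off as the time for the concentration peak to reach a downstream location and the attenuation of its amplitude, respectively.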
1903.11041
Blockchain Solutions for Multi-Agent Robotic Systems: Related Work and Open Questions
The possibilities of decentralization and immutability make blockchain probably one of the most groundbreaking and promising technological innovations of recent years. This paper presents an overview, analysis, and classification of possible blockchain solutions for practical tasks facing multi-agent robotic systems. The paper discusses blockchain-based applications that demonstrate how a distributed ledger can be used to extend the existing number of research platforms and libraries for multi-agent robotic systems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
125,421
0711.3452
In memoriam Maurice Gross
Maurice Gross (1934-2001) was both a great linguist and a pioneer in natural language processing. This article is written in homage to his memory.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
936
2309.12657
Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding
AI-synthesized text and images have gained significant attention, particularly due to the widespread dissemination of multi-modal manipulations on the internet, which has resulted in numerous negative impacts on society. Existing methods for multi-modal manipulation detection and grounding primarily focus on fusing vision-language features to make predictions, while overlooking the importance of modality-specific features, leading to sub-optimal results. In this paper, we construct a simple and novel transformer-based framework for multi-modal manipulation detection and grounding tasks. Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment. To achieve this, we introduce visual/language pre-trained encoders and dual-branch cross-attention (DCA) to extract and fuse modality-unique features. Furthermore, we design decoupled fine-grained classifiers (DFC) to enhance modality-specific feature mining and mitigate modality competition. Moreover, we propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality using learnable queries, thereby improving the discovery of forged details. Extensive experiments on the $\rm DGM^4$ dataset demonstrate the superior performance of our proposed model compared to state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
393,883
2401.04406
MapAI: Precision in Building Segmentation
MapAI: Precision in Building Segmentation is a competition arranged with the Norwegian Artificial Intelligence Research Consortium (NORA) in collaboration with the Centre for Artificial Intelligence Research at the University of Agder (CAIR), the Norwegian Mapping Authority, AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure. The competition will be held in the fall of 2022 and concluded at the Northern Lights Deep Learning conference, focusing on the segmentation of buildings using aerial images and laser data. We propose two different tasks for segmenting buildings: the first task can only utilize aerial images, while the second must use laser data (LiDAR), with or without aerial images. Furthermore, we use IoU and Boundary IoU to properly evaluate the precision of the models, the latter being an IoU measure that evaluates the results' boundaries. We provide the participants with a training dataset and keep a test dataset for evaluation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,424
2010.01949
Improving Device Directedness Classification of Utterances with Semantic Lexical Features
User interactions with personal assistants like Alexa, Google Home and Siri are typically initiated by a wake term or wakeword. Several personal assistants feature "follow-up" modes that allow users to make additional interactions without the need of a wakeword. For the system to only respond when appropriate, and to ignore speech not intended for it, utterances must be classified as device-directed or non-device-directed. State-of-the-art systems have largely used acoustic features for this task, while others have used only lexical features or have added LM-based lexical features. We propose a directedness classifier that combines semantic lexical features with a lightweight acoustic feature and show it is effective in classifying directedness. The mixed-domain lexical and acoustic feature model is able to achieve 14% relative reduction of EER over a state-of-the-art acoustic-only baseline model. Finally, we successfully apply transfer learning and semi-supervised learning to the model to improve accuracy even further.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
198,855
2106.00257
A Coarse to Fine Question Answering System based on Reinforcement Learning
In this paper, we present a coarse-to-fine question answering (CFQA) system based on reinforcement learning which can efficiently process documents of different lengths by choosing appropriate actions. The system is designed using an actor-critic based deep reinforcement learning model to achieve multi-step question answering. Compared to previous QA models targeting datasets that mainly contain either short or long documents, our multi-step coarse-to-fine model takes the merits from multiple system modules and can handle both short and long documents. The system hence obtains much better accuracy and faster training speed compared to the current state-of-the-art models. We test our model on four QA datasets, WIKIREADING, WIKIREADING LONG, CNN and SQuAD, and demonstrate 1.3$\%$-1.7$\%$ accuracy improvements with 1.5x-3.4x training speed-ups in comparison to the baselines using state-of-the-art models.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
238,053
1910.03650
Multi-Head Attention for Multi-Modal Joint Vehicle Motion Forecasting
This paper presents a novel vehicle motion forecasting method based on multi-head attention. It produces joint forecasts for all vehicles on a road scene as sequences of multi-modal probability density functions of their positions. Its architecture uses multi-head attention to account for complete interactions between all vehicles, and long short-term memory layers for encoding and forecasting. It relies solely on vehicle position tracks, does not need maneuver definitions, and does not represent the scene with a spatial grid. This allows it to be more versatile than similar models while combining several forecasting capabilities, namely joint forecasting with interactions, uncertainty estimation, and multi-modality. The resulting prediction likelihood outperforms state-of-the-art models on the same dataset.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
148,548
2406.12019
Hacking Encrypted Wireless Power: Cyber-Security of Dynamic Charging
Recently, energy encryption for wireless power transfer has been developed for energy safety, which is important in public places to suppress unauthorized energy extraction. Most techniques vary the frequency so that unauthorized receivers cannot extract energy because of non-resonance. However, this strategy is unreliable. To stimulate the progress of energy encryption technology and point out security holes, this paper proposes a decryption method for the fundamental principle of encrypted frequency-varying wireless power transfer. The paper uses an auxiliary coil to detect the frequency and a switched-capacitor array to adaptively compensate the receiver over a wide frequency range. The switched-capacitor array contains two capacitors and one semiconductor switch. One capacitor compensates the receiver all the time, while the other's active time during one wireless power transfer cycle is regulated by the switch. Thus, the proposed hacking receiver controls the equivalent capacitance of the compensation and steals energy. Finally, a detailed simulation model and experimental results prove the effectiveness of the attack on frequency-hopping energy encryption. Although any nonnegligible amount of extracted energy would be problematic, we were able to steal 78% to 84% of the energy an authorized receiver could get. When the frequency changes, the interceptor is coarsely tuned very quickly, allowing it to hack fast frequency-varying encrypted systems.
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
true
465,180
2312.01795
Distributed Continual Learning with CoCoA in High-dimensional Linear Regression
We consider estimation under scenarios where the signals of interest exhibit change of characteristics over time. In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the problem from a distributed estimation perspective. We consider the well-established distributed learning algorithm COCOA, which distributes the model parameters and the corresponding features over the network. We provide exact analytical characterization for the generalization error of COCOA under continual learning for linear regression in a range of scenarios, where overparameterization is of particular interest. These analytical results characterize how the generalization error depends on the network structure, the task similarity and the number of tasks, and show how these dependencies are intertwined. In particular, our results show that the generalization error can be significantly reduced by adjusting the network size, where the most favorable network size depends on task similarity and the number of tasks. We present numerical results verifying the theoretical analysis and illustrate the continual learning performance of COCOA with a digit classification task.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
412,595
2304.12410
PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
Recent parameter-efficient finetuning (PEFT) techniques aim to improve over the considerable cost of fully finetuning large pretrained language models (PLM). As different PEFT techniques proliferate, it is becoming difficult to compare them, in particular in terms of (i) the structure and functionality they add to the PLM, (ii) the different types and degrees of efficiency improvements achieved, (iii) performance at different downstream tasks, and (iv) how differences in structure and functionality relate to efficiency and task performance. To facilitate such comparisons, this paper presents a reference architecture which standardises aspects shared by different PEFT techniques, while isolating differences to specific locations and interactions with the standard components. Through this process of standardising and isolating differences, a modular view of PEFT techniques emerges, supporting not only direct comparison of different techniques and their efficiency and task performance, but also systematic exploration of reusability and composability of the different types of finetuned modules. We demonstrate how the reference architecture can be applied to understand properties and relative advantages of PEFT techniques, hence to inform selection of techniques for specific tasks, and design choices for new PEFT techniques.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
360,193
1906.02897
Semi-supervised Stochastic Multi-Domain Learning using Variational Inference
Supervised models of NLP rely on large collections of text which closely resemble the intended testing setting. Unfortunately, matching text is often not available in sufficient quantity and, moreover, within any domain of text, data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
134,212
2406.10485
A Label is Worth a Thousand Images in Dataset Distillation
Data $\textit{quality}$ is a crucial factor in the performance of machine learning models, a principle that dataset distillation methods exploit by compressing training datasets into much smaller counterparts that maintain similar downstream performance. Understanding how and why data distillation methods work is vital not only for improving these methods but also for revealing fundamental characteristics of "good" training data. However, a major challenge in achieving this goal is the observation that distillation approaches, which rely on sophisticated but mostly disparate methods to generate synthetic data, have little in common with each other. In this work, we highlight a largely overlooked aspect common to most of these methods: the use of soft (probabilistic) labels. Through a series of ablation experiments, we study the role of soft labels in depth. Our results reveal that the main factor explaining the performance of state-of-the-art distillation methods is not the specific techniques used to generate synthetic data but rather the use of soft labels. Furthermore, we demonstrate that not all soft labels are created equal; they must contain $\textit{structured information}$ to be beneficial. We also provide empirical scaling laws that characterize the effectiveness of soft labels as a function of images-per-class in the distilled dataset and establish an empirical Pareto frontier for data-efficient learning. Combined, our findings challenge conventional wisdom in dataset distillation, underscore the importance of soft labels in learning, and suggest new directions for improving distillation methods. Code for all experiments is available at https://github.com/sunnytqin/no-distillation.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
464,422
1009.0929
Mining Target-Oriented Sequential Patterns with Time-Intervals
A target-oriented sequential pattern is a sequential pattern with a concerned itemset at the end of the pattern. A time-interval sequential pattern is a sequential pattern with time-intervals between every pair of successive itemsets. In this paper, we present an algorithm to discover target-oriented sequential patterns with time-intervals. To this end, the original sequences are reversed so that the last itemsets can be arranged in front of the sequences. The contrasts between the reversed sequences and the concerned itemset are then used to exclude the irrelevant sequences. Clustering analysis is used with a typical sequential pattern mining algorithm to extract the sequential patterns with time-intervals between successive itemsets. Finally, the discovered time-interval sequential patterns are reversed again to the original order for searching the target patterns.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
7,486
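The reversal trick in the sequential-pattern abstract above is easy to picture in code. This toy sketch uses assumed data structures, (itemset, gap) pairs, and is not the paper's implementation.

```python
def filter_by_target(sequences, target):
    """Reverse each sequence so the last itemset leads, then keep only
    sequences whose (now leading) itemset contains the target itemset."""
    reversed_seqs = [list(reversed(s)) for s in sequences]
    return [s for s in reversed_seqs if target <= s[0][0]]

seqs = [
    [(frozenset({"a"}), 0), (frozenset({"b", "c"}), 2)],  # ends with target
    [(frozenset({"a"}), 0), (frozenset({"d"}), 5)],       # does not
]
candidates = filter_by_target(seqs, target=frozenset({"b"}))  # keeps the first
# Mine `candidates` with any time-interval pattern miner, then reverse the
# discovered patterns back to the original order.
```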
2303.02370
Self-Supervised Learning for Place Representation Generalization across Appearance Changes
Visual place recognition is a key to unlocking spatial navigation for animals, humans and robots. While state-of-the-art approaches are trained in a supervised manner and therefore hardly capture the information needed for generalizing to unusual conditions, we argue that self-supervised learning may help abstract the place representation so that it can be recognized irrespective of the conditions. More precisely, in this paper, we investigate learning features that are robust to appearance modifications while sensitive to geometric transformations in a self-supervised manner. This dual-purpose training is made possible by combining the two main self-supervision paradigms, \textit{i.e.} contrastive and predictive learning. Our results on standard benchmarks reveal that jointly learning such appearance-robust and geometry-sensitive image descriptors leads to competitive visual place recognition results across adverse seasonal and illumination conditions, without requiring any human-annotated labels.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
349,327
2307.13897
AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation Datasets
Skin lesion segmentation (SLS) plays an important role in skin lesion analysis. Vision transformers (ViTs) are considered an auspicious solution for SLS, but they require more training data compared to convolutional neural networks (CNNs) due to their inherent parameter-heavy structure and lack of some inductive biases. To alleviate this issue, current approaches fine-tune pre-trained ViT backbones on SLS datasets, aiming to leverage the knowledge learned from a larger set of natural images to lower the amount of skin training data needed. However, fully fine-tuning all parameters of large backbones is computationally expensive and memory intensive. In this paper, we propose AViT, a novel efficient strategy to mitigate ViTs' data-hunger by transferring any pre-trained ViTs to the SLS task. Specifically, we integrate lightweight modules (adapters) within the transformer layers, which modulate the feature representation of a ViT without updating its pre-trained weights. In addition, we employ a shallow CNN as a prompt generator to create a prompt embedding from the input image, which grasps fine-grained information and CNN's inductive biases to guide the segmentation task on small datasets. Our quantitative experiments on 4 skin lesion datasets demonstrate that AViT achieves competitive, and at times superior, performance to SOTA but with significantly fewer trainable parameters. Our code is available at https://github.com/siyi-wind/AViT.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
381,732
1612.00131
Blind Estimation of Sparse Multi-User Massive MIMO Channels
We provide a maximum likelihood formulation for the blind estimation of massive mmWave MIMO channels while taking into account their underlying sparse structure. The main advantage of this approach is the fact that the overhead due to pilot sequences can be reduced dramatically, especially when operating at low SNR per antenna. Thereby, the sparsity in the angular domain is exploited as a key property to enable the unambiguous blind separation between users' channels. On the other hand, as only the sparsity is assumed, the proposed method is robust with respect to the statistical properties of the channel and data and allows the estimation in rapidly time-varying scenarios and eventually the separation of interfering users from adjacent base stations. Additionally, a performance limit is derived based on the clairvoyant Cram\'er-Rao lower bound. Simulation results demonstrate that this maximum likelihood formulation yields superior estimation accuracy with reasonable computational complexity and limited model assumptions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
64,828
2209.14624
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning
Pruning neural networks has become popular in the last decade when it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have been proposed since, each claiming to be better than the prior art, albeit at the cost of increasingly complex pruning methodologies. These methodologies include utilizing importance scores, getting feedback through back-propagation, or having heuristics-based pruning rules, amongst others. In this work, we question whether this pattern of introducing complexity is really necessary to achieve better pruning results. We benchmark these SOTA techniques against a simple pruning baseline, namely, Global Magnitude Pruning (Global MP), which ranks weights in order of their magnitudes and prunes the smallest ones. Surprisingly, we find that vanilla Global MP performs very well against the SOTA techniques. When considering the sparsity-accuracy trade-off, Global MP performs better than all SOTA techniques at all sparsity ratios. When considering the FLOPs-accuracy trade-off, some SOTA techniques outperform Global MP at lower sparsity ratios; however, Global MP starts performing well at high sparsity ratios and performs very well at extremely high sparsity ratios. Moreover, we find that a common issue that many pruning algorithms run into at high sparsity rates, namely layer-collapse, can be easily fixed in Global MP. We explore why layer collapse occurs in networks and how it can be mitigated in Global MP by utilizing a technique called Minimum Threshold. We showcase the above findings on various models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1 and FastGRNN) and multiple datasets (CIFAR-10, ImageNet and HAR-2). Code is available at https://github.com/manasgupta-1/GlobalMP.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
320,309
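The baseline in the Global MP abstract above reduces to a single global threshold. The sketch below is a hedged NumPy illustration with made-up function and parameter names, plus a per-layer guard in the spirit of the paper's Minimum Threshold; it is not the released GlobalMP code.

```python
import numpy as np

def global_magnitude_prune(layers, sparsity, min_keep=8):
    """layers: dict name -> weight array. Zero the globally smallest weights."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in layers.values()])
    cutoff = np.quantile(all_mags, sparsity)       # one threshold for all layers
    pruned = {}
    for name, w in layers.items():
        keep = np.abs(w) >= cutoff
        if keep.sum() < min_keep:                  # Minimum Threshold guard:
            flat = np.abs(w).ravel()               # never let a layer collapse;
            idx = np.argsort(flat)[-min_keep:]     # keep its top-k weights
            keep = np.zeros(w.size, dtype=bool)
            keep[idx] = True
            keep = keep.reshape(w.shape)
        pruned[name] = w * keep
    return pruned

rng = np.random.default_rng(0)
net = {"fc1": rng.normal(size=(64, 32)), "fc2": rng.normal(size=(32, 10))}
sparse_net = global_magnitude_prune(net, sparsity=0.9)  # 90% of weights zeroed
```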
2004.10568
Up or Down? Adaptive Rounding for Post-Training Quantization
When quantizing neural networks, assigning each floating-point weight to its nearest fixed-point value is the predominant approach. We find that, perhaps surprisingly, this is not the best we can do. In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss. AdaRound is fast, does not require fine-tuning of the network, and only uses a small amount of unlabelled data. We start by theoretically analyzing the rounding problem for a pre-trained neural network. By approximating the task loss with a Taylor series expansion, the rounding task is posed as a quadratic unconstrained binary optimization problem. We simplify this to a layer-wise local loss and propose to optimize this loss with a soft relaxation. AdaRound not only outperforms rounding-to-nearest by a significant margin but also establishes a new state-of-the-art for post-training quantization on several networks and tasks. Without fine-tuning, we can quantize the weights of Resnet18 and Resnet50 to 4 bits while staying within an accuracy loss of 1%.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
173,671
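The AdaRound abstract above contrasts round-to-nearest with rounding adapted to the data and task loss. As a heavily simplified illustration of that idea only (a greedy floor-vs-ceil search on a layer-wise reconstruction error, not the AdaRound algorithm, which optimizes a soft relaxation by gradient descent):

```python
import numpy as np

def quantize_adaptive(W, X, scale, passes=3):
    """Choose floor vs. ceil per weight to reduce ||(Wq - W) X||^2 on a small
    calibration batch X, starting from round-to-nearest."""
    lo = np.floor(W / scale)                 # per-weight floor on the grid
    choice = np.round(W / scale) - lo        # 0 = floor, 1 = ceil
    for _ in range(passes):                  # greedy coordinate descent
        for idx in np.ndindex(W.shape):
            best = None
            for c in (0.0, 1.0):
                choice[idx] = c
                err = np.sum((((lo + choice) * scale - W) @ X) ** 2)
                if best is None or err < best[0]:
                    best = (err, c)
            choice[idx] = best[1]
    return (lo + choice) * scale

rng = np.random.default_rng(1)
W, X = rng.normal(size=(4, 8)), rng.normal(size=(8, 16))
Wq = quantize_adaptive(W, X, scale=0.25)     # often beats plain rounding here
```

The point of the toy is only that the rounding direction becomes a data-dependent decision rather than a fixed nearest-grid rule.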
1508.03601
Is Stack Overflow Overflowing With Questions and Tags
Programming question and answer (Q&A) websites, such as Quora, Stack Overflow, and Yahoo! Answers, help us to understand programming concepts easily and quickly in a way that has been tested and applied by many software developers. Stack Overflow is one of the most frequently used programming Q&A websites, where the posted questions and answers are presently analyzed manually, which requires a huge amount of time and resources. To save this effort, we present a topic modeling based technique to analyze the words of the original texts to discover the themes that run through them. We also propose a method to automate the process of reviewing the quality of questions on the Stack Overflow dataset in order to avoid ballooning Stack Overflow with insignificant questions. The proposed method also recommends appropriate tags for new posts, which averts the creation of unnecessary tags on Stack Overflow.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
46,016
1509.03677
Mechatronics Architecture of Smartphone-Based Spacecraft ADCS using VSCMG Actuators
Hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on smartphones using Variable Speed Control Moment Gyroscope (VSCMG) as actuator is proposed here. A spacecraft ground simulator testbed for Hardware-in-the-loop (HIL) attitude estimation and control with VSCMG is also described. The sensor breakouts with independent micro-controller units are used in the conventional ADCS units, which are replaced by a single integrated off-the-shelf smartphone. On-board sensing, data acquisition, data uplink/downlink, state estimation and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The attitude control and attitude determination (estimation) schemes have appeared in prior publications, but are presented in brief here. Experimental results from running the attitude estimation (filtering) scheme with the "onboard" sensors of the smartphone in the HIL simulator are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at NMSU, demonstrate the excellent performance of this estimation scheme with the noisy raw data from the smartphone sensors.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
46,856
2012.04372
Freeform shape optimization of a compact DC photo-electron gun using isogeometric analysis
Compact DC high-voltage photo-electron guns are able to meet the sophisticated demands of high-current applications such as energy recovery linacs. A main design parameter for such sources is the electric field strength, which depends on the electrode geometry and is limited by the field emission threshold of the electrode material. In order to minimize the maximum field strength for optimal gun operation, isogeometric analysis (IGA) can be used to exploit the axisymmetric geometry and describe its cross section by non-uniform rational B-splines, the control points of which are the parameters to be optimized. This computationally efficient method is capable of describing CAD-generated geometries using open source software (GeoPDEs, NLopt, Octave) and it can simplify the step from design to simulation. We will present the mathematical formulation, the software workflow, and the results of an IGA-based shape optimization for a planned high-voltage upgrade of the DC photogun teststand Photo-CATCH at TU Darmstadt. The software builds on a general framework for isogeometric analysis and allows for easy adaptations to other geometries or quantities of interest. Simulations assuming a bias voltage of -300 kV yielded maximum field gradients of 9.06 MV/m on the surface of an inverted insulator electrode and below 3 MV/m on the surface of the photocathode.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
210,436
2309.12506
License Plate Super-Resolution Using Diffusion Models
In surveillance, accurately recognizing license plates is hindered by their often low quality and small dimensions, compromising recognition precision. Despite advancements in AI-based image super-resolution, methods like Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) still fall short in enhancing license plate images. This study leverages the cutting-edge diffusion model, which has consistently outperformed other deep learning techniques in image restoration. By training this model on a curated dataset of Saudi license plates at both low and high resolutions, we discovered the diffusion model's superior efficacy. The method achieves 12.55% and 37.32% improvements in Peak Signal-to-Noise Ratio (PSNR) over SwinIR and ESRGAN, respectively. Moreover, our method surpasses these techniques in terms of the Structural Similarity Index (SSIM), registering 4.89% and 17.66% improvements over SwinIR and ESRGAN, respectively. Furthermore, 92% of human evaluators preferred our images over those from other algorithms. In essence, this research presents a pioneering solution for license plate super-resolution, with tangible potential for surveillance systems. A minimal sketch of the PSNR and SSIM metrics follows this record.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
393,820
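Since the record above reports PSNR and SSIM gains, here is a minimal sketch of how those two metrics are computed, assuming NumPy and scikit-image; the random images are hypothetical stand-ins for a ground-truth plate and a super-resolved output.

```python
# A minimal sketch of the PSNR and SSIM metrics reported above.
# Assumes NumPy and scikit-image; the images below are hypothetical stand-ins
# for a ground-truth plate and a super-resolved reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(64, 128), dtype=np.uint8)       # "ground truth"
noise = rng.integers(-10, 10, size=hr.shape)
sr = np.clip(hr.astype(int) + noise, 0, 255).astype(np.uint8)   # "reconstruction"

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=255))
print("SSIM:", structural_similarity(hr, sr, data_range=255))
```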
2408.01986
DeMansia: Mamba Never Forgets Any Tokens
This paper examines the mathematical foundations of transformer architectures, highlighting their limitations, particularly in handling long sequences. We explore prerequisite models such as Mamba, Vision Mamba (ViM), and LV-ViT that pave the way for our proposed architecture, DeMansia. DeMansia integrates state space models with token labeling techniques to enhance performance in image classification tasks, efficiently addressing the computational challenges posed by traditional transformers. The architecture, benchmarks, and comparisons with contemporary models demonstrate DeMansia's effectiveness. The implementation of this paper is available on GitHub at https://github.com/catalpaaa/DeMansia
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
478,453
1806.03905
Retinal Optic Disc Segmentation using Conditional Generative Adversarial Network
This paper proposes a retinal image segmentation method based on a conditional Generative Adversarial Network (cGAN) to segment the optic disc. The proposed model consists of two successive networks: a generator and a discriminator. The generator learns to map information from the observed input (i.e., a retinal fundus color image) to the output (i.e., a binary mask). The discriminator then learns, as a loss function, to train this mapping by comparing the ground truth and the predicted output, with the input image observed as a condition. Experiments were performed on two publicly available datasets: DRISHTI GS1 and RIM-ONE. The proposed model outperformed state-of-the-art methods, achieving Jaccard and Dice coefficients of around 0.96 and 0.98, respectively. Moreover, segmentation of an image is performed in less than a second on a recent GPU. A minimal sketch of these overlap metrics follows this record.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
100,114
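The Jaccard and Dice coefficients cited above are simple overlap ratios between a predicted binary mask and the ground truth; here is a minimal NumPy sketch, with hypothetical square masks standing in for optic-disc segmentations.

```python
# A minimal sketch of the Jaccard and Dice overlap coefficients used above.
# Assumes NumPy; the square masks are hypothetical optic-disc segmentations.
import numpy as np

def jaccard(pred, gt):
    # Intersection over union of two boolean masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def dice(pred, gt):
    # Twice the intersection over the sum of the mask areas.
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

gt = np.zeros((64, 64), dtype=bool)
gt[20:44, 20:44] = True
pred = np.zeros_like(gt)
pred[22:46, 22:46] = True

print("Jaccard:", jaccard(pred, gt), "Dice:", dice(pred, gt))
```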
2204.08140
Pricing Real-time Stochastic Storage Operations
Pricing storage operation in the real-time market under demand and generation stochasticities is considered. A scenario-based stochastic rolling-window dispatch model is formulated for the real-time market, consisting of conventional generators, utility-scale storage, and distributed energy resource aggregators. We show that uniform pricing mechanisms require discriminative out-of-the-market uplifts, making settlements under locational marginal pricing (LMP) discriminative. It is shown that the temporal locational marginal pricing (TLMP) that adds nonuniform shadow prices of ramping and state-of-charge to LMP removes the need for out-of-the-market uplifts. Truthful bidding incentives are also established for price-taking participants under TLMP. Revenue adequacy and uplifts are evaluated in numerical simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
291,979
2012.15495
Towards Zero-Shot Knowledge Distillation for Natural Language Processing
Knowledge Distillation (KD) is a common knowledge transfer algorithm used for model compression across a variety of deep learning based natural language processing (NLP) solutions. In its regular manifestations, KD requires access to the teacher's training data for knowledge transfer to the student network. However, privacy concerns, data regulations, and proprietary reasons may prevent access to such data. We present, to the best of our knowledge, the first work on Zero-Shot Knowledge Distillation for NLP, where the student learns from the much larger teacher without any task-specific data. Our solution combines out-of-domain data and adversarial training to learn the teacher's output distribution. We investigate six tasks from the GLUE benchmark and demonstrate that we can achieve between 75% and 92% of the teacher's classification score (accuracy or F1) while compressing the model 30 times. A minimal sketch of the underlying distillation loss follows this record.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
213,811
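For context on the teacher-student objective underlying the record above, here is a minimal sketch of the standard temperature-scaled distillation loss (in the style of Hinton et al.), assuming PyTorch; the logits are hypothetical, and the record's zero-shot, out-of-domain training setup is not reproduced.

```python
# A minimal sketch of the standard temperature-scaled distillation loss that
# knowledge distillation builds on. Assumes PyTorch; the logits below are
# hypothetical, and the record's zero-shot setup is not reproduced here.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student_logits = torch.randn(8, 3)  # batch of 8 examples, 3 classes
teacher_logits = torch.randn(8, 3)
print(kd_loss(student_logits, teacher_logits))
```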
2104.02172
Strategy Synthesis for Partially-known Switched Stochastic Systems
We present a data-driven framework for strategy synthesis for partially-known switched stochastic systems. The properties of the system are specified using linear temporal logic (LTL) over finite traces (LTLf), which is as expressive as LTL and enables interpretations over finite behaviors. The framework first learns the unknown dynamics via Gaussian process regression. It then builds a formal abstraction of the switched system in terms of an uncertain Markov model, namely an Interval Markov Decision Process (IMDP), by accounting for both the stochastic behavior of the system and the uncertainty in the learning step. Next, we synthesize a strategy on the resulting IMDP that maximizes the satisfaction probability of the LTLf specification and is robust against all the uncertainties in the abstraction. This strategy is then refined into a switching strategy for the original stochastic system. We show that this strategy is near-optimal and provide a bound on its distance (error) from the optimal strategy. We experimentally validate our framework on various case studies, including both linear and non-linear switched stochastic systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
228,617
1812.09232
Learning from Web Data: the Benefit of Unsupervised Object Localization
Annotating a large number of training images is very time-consuming. Against this background, this paper focuses on learning from easy-to-acquire web data and utilizes the learned model for fine-grained image classification on labeled datasets. Currently, the performance gain from training with web data is incremental, as in the common saying "better than nothing, but not by much". Conventionally, the community has looked to correcting noisy web labels in order to select informative samples. In this work, we first systematically study the built-in gap between web and standard datasets, i.e., the different data distributions of the two kinds of data. Then, in addition to using web labels, we present an unsupervised object localization method, which provides critical insights into object density and scale in web images. Specifically, we design two constraints on web data to substantially reduce the difference between the data distributions of web and standard datasets. First, we present a method to control the scale, localization, and number of objects in the detected region. Second, we propose to select the regions containing objects that are consistent with the web tag. Based on these two constraints, we are able to process web images to reduce the gap, and the processed web data is used to better assist the standard dataset in training CNNs. Experiments on several fine-grained image classification datasets confirm that our method performs favorably against state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
117,120
2304.09448
EC^2: Emergent Communication for Embodied Control
Embodied control requires agents to leverage multi-modal pre-training to quickly learn how to act in new environments, where video demonstrations contain visual and motion details needed for low-level perception and control, and language instructions support generalization with abstract, symbolic structures. While recent approaches apply contrastive learning to force alignment between the two modalities, we hypothesize that better modeling of their complementary differences can lead to more holistic representations for downstream adaptation. To this end, we propose Emergent Communication for Embodied Control (EC^2), a novel scheme to pre-train video-language representations for few-shot embodied control. The key idea is to learn an unsupervised "language" of videos via emergent communication, which bridges the semantics of video details and the structures of natural language. We learn embodied representations of video trajectories, emergent language, and natural language using a language model, which is then used to finetune a lightweight policy network for downstream control. Through extensive experiments in the Metaworld and Franka Kitchen embodied benchmarks, EC^2 is shown to consistently outperform previous contrastive learning methods for both videos and texts as task inputs. Further ablations confirm the importance of the emergent language, which is beneficial for both video and language learning, and significantly superior to using pre-trained video captions. We also present a quantitative and qualitative analysis of the emergent language and discuss future directions toward better understanding and leveraging emergent communication in embodied tasks.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
359,061
1210.8194
Extending the Concept of Analog Butterworth Filter for Fractional Order Systems
This paper proposes the design of a Fractional Order (FO) Butterworth filter in the complex w-plane (w = s^q, with q any real number), considering the presence of under-damped, hyper-damped, and ultra-damped poles. This is the first attempt to design such fractional Butterworth filters in the complex w-plane instead of the complex s-plane, as is conventionally done for integer order filters. First, the concept of fractional derivatives and the w-plane stability of linear fractional order systems are discussed. A detailed mathematical formulation for the design of a fractional Butterworth-like filter (FBWF) in the w-plane is then presented. Simulation examples are given, along with a practical example designing the FO Butterworth filter to given frequency-domain specifications, to show the practicability of the proposed formulation. A sketch of the magnitude response being generalized follows this record.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
19,478
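For orientation on the record above, the classical integer-order Butterworth magnitude response is shown first, followed by an illustrative fractional-order generalization; the fractional form here is an assumption for exposition and not necessarily the paper's exact w-plane formulation.

```latex
% Classical integer-order Butterworth magnitude response (order N, cutoff \omega_c):
\[
  |H(j\omega)|^{2} = \frac{1}{1 + \left(\omega/\omega_c\right)^{2N}}
\]
% An illustrative fractional generalization with real order \alpha (an assumption,
% not necessarily the paper's exact w-plane formulation):
\[
  |H(j\omega)|^{2} = \frac{1}{1 + \left(\omega/\omega_c\right)^{2\alpha}},
  \qquad \alpha \in \mathbb{R}_{>0}
\]
```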
2309.17166
Advances in Kidney Biopsy Lesion Assessment through Dense Instance Segmentation
Renal biopsies are the gold standard for the diagnosis of kidney diseases. Lesion scores made by renal pathologists are semi-quantitative and exhibit high inter-observer variability. Automating lesion classification within segmented anatomical structures can provide decision support in quantification analysis, thereby reducing inter-observer variability. Nevertheless, classifying lesions in regions-of-interest (ROIs) is clinically challenging due to (a) a large amount of densely packed anatomical objects, (b) class imbalance across different compartments (at least 3), (c) significant variation in the size and shape of anatomical objects, and (d) the presence of multi-label lesions per anatomical structure. Existing models cannot address these complexities in an efficient and generic manner. This paper presents an analysis of a \textbf{generalized solution} for datasets from various sources (pathology departments) with different types of lesions. Our approach utilizes two sub-networks: dense instance segmentation and lesion classification. We introduce \textbf{DiffRegFormer}, an end-to-end dense instance segmentation sub-network designed for multi-class, multi-scale objects within ROIs. Combining diffusion models, transformers, and RCNNs, DiffRegFormer is a computationally friendly framework that can efficiently recognize over 500 objects across three anatomical classes, i.e., glomeruli, tubuli, and arteries, within ROIs. In a dataset of 303 ROIs from 148 Jones' silver-stained renal Whole Slide Images (WSIs), our approach outperforms previous methods, achieving an Average Precision of 52.1\% (detection) and 46.8\% (segmentation). Moreover, our lesion classification sub-network achieves 89.2\% precision and 64.6\% recall on 21,889 object patches from the 303 ROIs. Lastly, our model demonstrates direct domain transfer to PAS-stained renal WSIs without fine-tuning.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
395,658
2306.10397
Enhancing the Prediction of Emotional Experience in Movies using Deep Neural Networks: The Significance of Audio and Language
Our paper focuses on using deep neural network models to accurately predict the range of human emotions experienced while watching movies. In this setup, there are three clear-cut input modalities that considerably influence the experienced emotions: visual cues derived from RGB video frames; auditory components encompassing sounds, speech, and music; and linguistic elements encompassing the actors' dialogues. Emotions are commonly described using a two-factor model comprising valence (ranging from happy to sad) and arousal (indicating the intensity of the emotion). A plethora of works have presented models aiming to predict valence and arousal from video content; however, none of these models incorporates all three modalities, with language being consistently eliminated across all of them. In this study, we comprehensively combine all modalities and conduct an analysis to ascertain the importance of each in predicting valence and arousal. We represent each input modality using pre-trained neural networks. To process visual input, we employ pre-trained convolutional neural networks to recognize scenes [1], objects [2], and actions [3,4]. For audio processing, we utilize a specialized neural network designed for sound-related tasks, namely SoundNet [5]. Finally, Bidirectional Encoder Representations from Transformers (BERT) models are used to extract linguistic features [6]. We report results on the COGNIMUSE dataset [7], where our proposed model outperforms the current state-of-the-art approaches. Surprisingly, our findings reveal that language significantly influences the experienced arousal, while sound emerges as the primary determinant for predicting valence. In contrast, the visual modality exhibits the least impact among all modalities in predicting emotions.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
374,212
1806.07834
A Look at Motion Planning for Autonomous Vehicles at an Intersection
Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break intersection management down into the different conundrums and scenarios involved in trajectory planning and the current approaches to solving them. Then, a brief analysis of current works on autonomous intersections is conducted. With a critical eye, we delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have yet to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersections.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
101,027
2109.06831
Multi-Agent Deep Reinforcement Learning For Persistent Monitoring With Sensing, Communication, and Localization Constraints
Determining multi-robot motion policies for persistently monitoring a region with limited sensing, communication, and localization constraints in non-GPS environments is a challenging problem. To take the localization constraints into account, in this paper, we consider a heterogeneous robotic system consisting of two types of agents: anchor agents with accurate localization capability and auxiliary agents with low localization accuracy. To localize itself, an auxiliary agent must be within the communication range of an anchor, directly or indirectly. The robotic team's objective is to minimize environmental uncertainty through persistent monitoring. We propose a multi-agent deep reinforcement learning (MARL) based architecture with graph convolution, called Graph Localized Proximal Policy Optimization (GALOPP), which incorporates the limited sensor field-of-view, communication, and localization constraints of the agents along with the persistent monitoring objective to determine motion policies for each agent. We evaluate the performance of GALOPP on open maps with obstacles, using different numbers of anchor and auxiliary agents. We further (i) study the effect of communication range, obstacle density, and sensing range on performance and (ii) compare GALOPP with non-RL baselines, namely greedy search, random search, and random search with a communication constraint. To assess its generalization capability, we also evaluate GALOPP in two different environments -- 2-room and 4-room. The results show that GALOPP learns the policies and monitors the area well. As a proof of concept, we perform hardware experiments to demonstrate the performance of GALOPP.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
255,292
2407.06079
Layered Diffusion Model for One-Shot High Resolution Text-to-Image Synthesis
We present a one-shot text-to-image diffusion model that can generate high-resolution images from natural language descriptions. Our model employs a layered U-Net architecture that simultaneously synthesizes images at multiple resolution scales. We show that this method outperforms the baseline of synthesizing images only at the target resolution, while reducing the computational cost per step. We demonstrate that higher resolution synthesis can be achieved by layering convolutions at additional resolution scales, in contrast to other methods which require additional models for super-resolution synthesis.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
471,238
2301.08649
Off-Policy Evaluation with Out-of-Sample Guarantees
We consider the problem of evaluating the performance of a decision policy using past observational data. The outcome of a policy is measured in terms of a loss (a.k.a. disutility or negative reward), and the main problem is making valid inferences about its out-of-sample loss when the past data were observed under a different and possibly unknown policy. Using a sample-splitting method, we show that it is possible to draw such inferences with finite-sample coverage guarantees about the entire loss distribution, rather than just its mean. Importantly, the method takes into account model misspecifications of the past policy, including unmeasured confounding. The evaluation method can be used to certify the performance of a policy using observational data under a specified range of credible model assumptions. A minimal sketch of a basic off-policy loss estimate follows this record.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
341,250
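As background for the record above, here is a minimal sketch of the basic inverse-propensity-weighted (IPW) estimate of a target policy's mean loss from logged data; this illustrates only the generic off-policy evaluation setting, and the record's sample-splitting, finite-sample guarantees are not reproduced. NumPy is assumed, and all data are synthetic.

```python
# A minimal sketch of the basic inverse-propensity-weighted (IPW) estimate of a
# target policy's mean loss from logged data. This shows only the generic
# off-policy evaluation setting; the record's sample-splitting guarantees are
# not reproduced. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
actions = rng.integers(0, 2, size=n)            # actions logged under the behavior policy
losses = rng.normal(loc=actions, scale=1.0)     # observed losses (higher for action 1)
behavior_prob = np.full(n, 0.5)                 # behavior policy propensities
target_prob = np.where(actions == 1, 0.2, 0.8)  # target policy's action probabilities

# Reweight each logged loss by the likelihood ratio of target to behavior policy.
ipw_estimate = np.mean(target_prob / behavior_prob * losses)
print("estimated mean loss under target policy:", ipw_estimate)
```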