Dataset schema:

  id                  string (length 9 to 16)
  title               string (length 4 to 278)
  abstract            string (length 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool (2 classes each)
  __index_level_0__   int64 (0 to 541k)
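The schema above describes a multi-label layout: each record carries an arXiv id, title, abstract, 18 boolean category columns, and a carried-over pandas index. As a minimal sketch (column names follow the schema; the sample row reuses the first record's id and title, and all other values are illustrative assumptions), one way to recover the active category labels per row with pandas:

```python
import pandas as pd

# The 18 boolean category columns named in the schema above.
CATEGORY_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(row: pd.Series) -> list:
    """Return the category columns that are True for one record."""
    return [c for c in CATEGORY_COLS if row[c]]

# Illustrative record shaped like the schema (abstract text and
# boolean values are placeholders, not taken from the dataset).
df = pd.DataFrame([
    {
        "id": "2210.08473",
        "title": "1st Place Solution in Google Universal Images Embedding",
        "abstract": "(abstract text)",
        **{c: (c == "cs.CV") for c in CATEGORY_COLS},
        "__index_level_0__": 324164,
    }
])
df["labels"] = df.apply(active_labels, axis=1)
print(df.loc[0, "labels"])  # → ['cs.CV']
```

Because the categories are independent booleans rather than a single class column, a record can carry zero, one, or several labels; the helper simply collects every column that is set.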
2210.08473
1st Place Solution in Google Universal Images Embedding
This paper presents the 1st place solution for the Google Universal Images Embedding Competition on Kaggle. The highlights of our solution are: 1) a novel way to conduct training and fine-tuning; 2) a better ensemble over the pool of models that produce the embedding; 3) the potential trade-off between fine-tuning on high resolution and on overlapping patches; 4) the potential factors that make the dynamic margin work. Our solution reaches 0.728 on the private leaderboard, which achieved 1st place in the Google Universal Images Embedding Competition.
Labels: cs.CV | __index_level_0__: 324,164
1106.5594
The Swiss Board Directors Network in 2009
We study the networks formed by the directors of the most important Swiss boards and the boards themselves for the year 2009. The networks are obtained by projection from the original bipartite graph. We highlight a number of important statistical features of those networks such as degree distribution, weight distribution, and several centrality measures as well as their interrelationships. While similar statistics were already known for other board systems, and are comparable here, we have extended the study with a careful investigation of director and board centrality, a k-core analysis, and a simulation of the speed of information propagation and its relationships with the topological aspects of the network such as clustering and link weight and betweenness. The overall picture that emerges is one in which the topological structure of the Swiss board and director networks has evolved in such a way that special actors and links between actors play a fundamental role in the flow of information among distant parts of the network. This is shown in particular by the centrality measures and by the simulation of a simple epidemic process on the directors network.
Labels: cs.SI | __index_level_0__: 11,043
1906.06050
Neural Response Generation with Meta-Words
We present open domain response generation with meta-words. A meta-word is a structured record that describes various attributes of a response, and thus allows us to explicitly model the one-to-many relationship within open domain dialogues and perform response generation in an explainable and controllable manner. To incorporate meta-words into generation, we enhance the sequence-to-sequence architecture with a goal tracking memory network that formalizes meta-word expression as a goal and manages the generation process to achieve the goal with a state memory panel and a state controller. Experimental results on two large-scale datasets indicate that our model can significantly outperform several state-of-the-art generation models in terms of response relevance, response diversity, accuracy of one-to-many modeling, accuracy of meta-word expression, and human evaluation.
Labels: cs.CL | __index_level_0__: 135,199
2309.15259
SLIQ: Quantum Image Similarity Networks on Noisy Quantum Computers
Exploration into quantum machine learning has grown tremendously in recent years due to the ability of quantum computers to speed up classical programs. However, these efforts have yet to solve unsupervised similarity detection tasks due to the challenge of porting them to run on quantum computers. To overcome this challenge, we propose SLIQ, the first open-sourced work for resource-efficient quantum similarity detection networks, built with practical and effective quantum learning and variance-reducing algorithms.
Labels: cs.CV | __index_level_0__: 394,899
1511.05219
How much does your data exploration overfit? Controlling bias via information usage
Modern data is messy and high-dimensional, and it is often not clear a priori what are the right questions to ask. Instead, the analyst typically needs to use the data to search for interesting analyses to perform and hypotheses to test. This is an adaptive process, where the choice of analysis to be performed next depends on the results of the previous analyses on the same data. Ultimately, which results are reported can be heavily influenced by the data. It is widely recognized that this process, even if well-intentioned, can lead to biases and false discoveries, contributing to the crisis of reproducibility in science. But while any data exploration renders standard statistical theory invalid, experience suggests that different types of exploratory analysis can lead to disparate levels of bias, and the degree of bias also depends on the particulars of the data set. In this paper, we propose a general information usage framework to quantify and provably bound the bias and other error metrics of an arbitrary exploratory analysis. We prove that our mutual information based bound is tight in natural settings, and then use it to give rigorous insights into when commonly used procedures do or do not lead to substantially biased estimation. Through the lens of information usage, we analyze the bias of specific exploration procedures such as filtering, rank selection and clustering. Our general framework also naturally motivates randomization techniques that provably reduce exploration bias while preserving the utility of the data analysis. We discuss the connections between our approach and related ideas from differential privacy and blinded data analysis, and supplement our results with illustrative simulations.
Labels: cs.LG | __index_level_0__: 49,012
2404.09203
Advanced Intelligent Optimization Algorithms for Multi-Objective Optimal Power Flow in Future Power Systems: A Review
This review explores the application of intelligent optimization algorithms to Multi-Objective Optimal Power Flow (MOPF) in enhancing modern power systems. It delves into the challenges posed by the integration of renewables, smart grids, and increasing energy demands, focusing on evolutionary algorithms, swarm intelligence, and deep reinforcement learning. The effectiveness, scalability, and application of these algorithms are analyzed, with findings suggesting that algorithm selection is contingent on the specific MOPF problem at hand, and hybrid approaches offer significant promise. The importance of standard test systems for verifying solutions and the role of software tools in facilitating analysis are emphasized. Future research is directed towards exploiting machine learning for dynamic optimization, embracing decentralized energy systems, and adapting to evolving policy frameworks to improve power system efficiency and sustainability. This review aims to advance MOPF research by highlighting state-of-the-art methodologies and encouraging the development of innovative solutions for future energy challenges.
Labels: cs.SY, cs.NE | __index_level_0__: 446,575
2408.03394
Faster Model Predictive Control via Self-Supervised Initialization Learning
Optimization for robot control tasks, spanning various methodologies, includes Model Predictive Control (MPC). However, system complexity, such as non-convex and non-differentiable cost functions and prolonged planning horizons, often drastically increases the computation time, limiting MPC's real-world applicability. Prior works on speeding up the optimization are limited to convex problems and generalize poorly to held-out domains. To overcome this challenge, we develop a novel framework aimed at expediting optimization processes. In our framework, we combine offline self-supervised learning and online fine-tuning through reinforcement learning to improve the control performance and reduce optimization time. We demonstrate the effectiveness of our method on a novel, challenging Formula-1-track driving task, achieving 3.9\% higher performance in optimization time and 3.6\% higher performance in tracking accuracy on challenging holdout tracks.
Labels: cs.RO | __index_level_0__: 478,995
2304.10642
Word Sense Induction with Knowledge Distillation from BERT
Pre-trained contextual language models are ubiquitously employed for language understanding tasks, but are unsuitable for resource-constrained systems. Noncontextual word embeddings are an efficient alternative in these settings. Such methods typically use one vector to encode multiple different meanings of a word, and incur errors due to polysemy. This paper proposes a two-stage method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in a context and transferring this sense information to fit multi-sense embeddings in a skip-gram-like framework. We demonstrate an effective approach to training the sense disambiguation mechanism in our model with a distribution over word senses extracted from the output layer embeddings of BERT. Experiments on the contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings on multiple benchmark data sets, and experiments with an embedding-based topic model (ETM) demonstrates the benefits of using this multi-sense embedding in a downstream application.
Labels: cs.CL | __index_level_0__: 359,494
2410.17715
Continual Learning on a Data Diet
Continual Learning (CL) methods usually learn from all available data. However, this is not the case in human cognition, which efficiently focuses on key experiences while disregarding redundant information. Similarly, not all data points in a dataset have equal potential; some can be more informative than others. This disparity may significantly impact performance, as both the quality and quantity of samples directly influence the model's generalizability and efficiency. Drawing inspiration from this, we explore the potential of learning from important samples and present an empirical study for evaluating coreset selection techniques in the context of CL to stimulate research in this unexplored area. We train different continual learners on increasing amounts of selected samples and investigate the learning-forgetting dynamics by shedding light on the underlying mechanisms driving their improved stability-plasticity balance. We present several significant observations: learning from selectively chosen samples (i) enhances incremental accuracy, (ii) improves knowledge retention of previous tasks, and (iii) refines learned representations. This analysis contributes to a deeper understanding of selective learning strategies in CL scenarios.
Labels: cs.LG, cs.CV | __index_level_0__: 501,584
1501.02246
The Effect of Wedge Tip Angles on Stress Intensity Factors in the Contact Problem between Tilted Wedge and a Half Plane with an Edge Crack Using Digital Image Correlation
The first and second mode stress intensity factors (SIFs) of a contact problem between a half-plane with an edge crack and an asymmetric tilted wedge were obtained using the experimental method of Digital Image Correlation (DIC). In this technique, displacement and strain fields can be measured using two digital images of the same sample at different stages of loading. However, several images were taken consecutively at each stage of this experiment to avoid noise effects. A pair of images from each stage was compared to each other. Then, the correlation coefficients between them were studied using a computer code. The pairs with a correlation coefficient higher than 0.8 were selected as acceptable matches for displacement measurements near the crack tip. Subsequently, the SIFs of the specimens were calculated using the displacement fields obtained from the DIC method. The effect of the wedge tip angle on the SIFs was also studied. Moreover, the results of the DIC method were compared with the results of the photoelasticity method, and a close agreement between them was observed.
Labels: cs.CV | __index_level_0__: 39,160
2304.00111
Identifying Symptoms of Delirium from Clinical Narratives Using Natural Language Processing
Delirium is an acute decline or fluctuation in attention, awareness, or other cognitive function that can lead to serious adverse outcomes. Despite the severe outcomes, delirium is frequently unrecognized and uncoded in patients' electronic health records (EHRs) due to its transient and diverse nature. Natural language processing (NLP), a key technology that extracts medical concepts from clinical narratives, has shown great potential in studies of delirium outcomes and symptoms. To assist in the diagnosis and phenotyping of delirium, we formed an expert panel to categorize diverse delirium symptoms, composed annotation guidelines, created a delirium corpus with diverse delirium symptoms, and developed NLP methods to extract delirium symptoms from clinical notes. We compared 5 state-of-the-art transformer models including 2 models (BERT and RoBERTa) from the general domain and 3 models (BERT_MIMIC, RoBERTa_MIMIC, and GatorTron) from the clinical domain. GatorTron achieved the best strict and lenient F1 scores of 0.8055 and 0.8759, respectively. We conducted an error analysis to identify challenges in annotating delirium symptoms and developing NLP systems. To the best of our knowledge, this is the first large language model-based delirium symptom extraction system. Our study lays the foundation for the future development of computable phenotypes and diagnosis methods for delirium.
Labels: cs.CL | __index_level_0__: 355,558
2012.14970
Alternative Paths Planner (APP) for Provably Fixed-time Manipulation Planning in Semi-structured Environments
In many applications, including logistics and manufacturing, robot manipulators operate in semi-structured environments alongside humans or other robots. These environments are largely static, but they may contain some movable obstacles that the robot must avoid. Manipulation tasks in these applications are often highly repetitive, but require fast and reliable motion planning capabilities, often under strict time constraints. Existing preprocessing-based approaches are beneficial when the environments are highly-structured, but their performance degrades in the presence of movable obstacles, since these are not modelled a priori. We propose a novel preprocessing-based method called Alternative Paths Planner (APP) that provides provably fixed-time planning guarantees in semi-structured environments. APP plans a set of alternative paths offline such that, for any configuration of the movable obstacles, at least one of the paths from this set is collision-free. During online execution, a collision-free path can be looked up efficiently within a few microseconds. We evaluate APP on a 7 DoF robot arm in semi-structured domains of varying complexity and demonstrate that APP is several orders of magnitude faster than state-of-the-art motion planners for each domain. We further validate this approach with real-time experiments on a robotic manipulator.
Labels: cs.RO | __index_level_0__: 213,657
1701.08718
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations, which helps substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
Labels: cs.LG, cs.NE | __index_level_0__: 67,510
2502.04795
Developmentally-plausible Working Memory Shapes a Critical Period for Language Acquisition
Large language models possess general linguistic abilities but acquire language less efficiently than humans. This study proposes a method for integrating the developmental characteristics of working memory during the critical period, a stage when human language acquisition is particularly efficient, into the training process of language models. The proposed method introduces a mechanism that initially constrains working memory during the early stages of training and gradually relaxes this constraint in an exponential manner as learning progresses. Targeted syntactic evaluation shows that the proposed method outperforms conventional methods without memory constraints or with static memory constraints. These findings not only provide new directions for designing data-efficient language models but also offer indirect evidence supporting the role of the developmental characteristics of working memory as the underlying mechanism of the critical period in language acquisition.
Labels: cs.CL | __index_level_0__: 531,325
1403.7123
User Cooperation in Wireless Powered Communication Networks
This paper studies user cooperation in the emerging wireless powered communication network (WPCN) for throughput optimization. For the purpose of exposition, we consider a two-user WPCN, in which one hybrid access point (H-AP) broadcasts wireless energy to two distributed users in the downlink (DL) and the users transmit their independent information using their individually harvested energy to the H-AP in the uplink (UL) through time-division-multiple-access (TDMA). We propose user cooperation in the WPCN where the user which is nearer to the H-AP and has a better channel for DL energy harvesting and UL information transmission uses part of its allocated UL time and DL harvested energy to help to relay the far user's information to the H-AP, in order to achieve more balanced throughput optimization. We maximize the weighted sum-rate (WSR) of the two users by jointly optimizing the time and power allocations in the network for both wireless energy transfer in the DL and wireless information transmission and relaying in the UL. Simulation results show that the proposed user cooperation scheme can effectively improve the achievable throughput in the WPCN with desired user fairness.
Labels: cs.IT | __index_level_0__: 31,873
2402.02690
Competitive Equilibrium in Microgrids With Dynamic Loads
In this paper, we consider microgrids that interconnect prosumers with distributed energy resources and dynamic loads. Prosumers are connected through the microgrid to trade energy and gain profit while respecting the network constraints. We establish a local energy market by defining a competitive equilibrium which balances energy and satisfies voltage constraints within the microgrid for all time. Using duality theory, we prove that under some convexity assumptions, a competitive equilibrium is equivalent to a social welfare maximization solution. Additionally, we show that a competitive equilibrium is equivalent to a Nash equilibrium of a standard game. In general, the energy price for each prosumer is different, leading to the concept of locational prices. We investigate a case under which all prosumers have the same locational prices. Additionally, we show that under some assumptions on the resource supply and network topology, locational prices decay to zero after a period of time, implying the available supply will be more than the demand required to stabilize the system. Finally, two numerical examples are provided to validate the results, one of which is a direct application of our results on electric vehicle charging control.
Labels: cs.SY, Other | __index_level_0__: 426,685
1711.05133
Reinforcement Learning in a large scale photonic Recurrent Neural Network
Photonic Neural Network implementations have been gaining considerable attention as a potentially disruptive future technology. Demonstrating learning in large scale neural networks is essential to establish photonic machine learning substrates as viable information processing systems. Photonic Neural Networks with numerous nonlinear nodes, realized in fully parallel and efficient learning hardware, have been lacking so far. We demonstrate a network of up to 2500 diffractively coupled photonic nodes, forming a large scale Recurrent Neural Network. Using a Digital Micro Mirror Device, we realize reinforcement learning. Our scheme is fully parallel, and the passive weights maximize energy efficiency and bandwidth. The computational output efficiently converges and we achieve very good performance.
Labels: cs.NE | __index_level_0__: 84,503
2104.13659
Exploiting Degeneracy in Belief Propagation Decoding of Quantum Codes
Quantum information needs to be protected by quantum error-correcting codes due to imperfect physical devices and operations. One would like to have an efficient and high-performance decoding procedure for the class of quantum stabilizer codes. A potential candidate is Pearl's belief propagation (BP), but its performance suffers from the many short cycles inherent in a quantum stabilizer code, especially highly-degenerate codes. A general impression exists that BP is not effective for topological codes. In this paper, we propose a decoding algorithm for quantum codes based on quaternary BP with additional memory effects (called MBP). This MBP is like a recursive neural network with inhibitions between neurons (edges with negative weights), which enhance the perception capability of a network. Moreover, MBP exploits the degeneracy of a quantum code so that the most probable error or its degenerate errors can be found with high probability. The decoding performance is significantly improved over the conventional BP for various quantum codes, including quantum bicycle, hypergraph-product, surface and toric codes. For MBP on the surface and toric codes over depolarizing errors, we observe error thresholds of 16% and 17.5%, respectively.
Labels: cs.IT | __index_level_0__: 232,573
1811.12819
Structure-Preserving Constrained Optimal Trajectory Planning of a Wheeled Inverted Pendulum
The Wheeled Inverted Pendulum (WIP) is an underactuated, nonholonomic mechatronic system, and has been popularized commercially as the Segway. Designing a control law for motion planning, that incorporates the state and control constraints, while respecting the configuration manifold, is a challenging problem. In this article we derive a discrete-time model of the WIP system using discrete mechanics and generate optimal trajectories for the WIP system by solving a discrete-time constrained optimal control problem. Further, we describe a nonlinear continuous-time model with parameters for designing a closed loop LQ-controller. A dual control architecture is implemented in which the designed optimal trajectory is then provided as a reference to the robot with the optimal control trajectory as a feedforward control action, and an LQ-controller in the feedback mode is employed to mitigate noise and disturbances for ensuing stable motion of the WIP system. While performing experiments on the WIP system involving aggressive maneuvers with fairly sharp turns, we found a high degree of congruence in the designed optimal trajectories and the path traced by the robot while tracking these trajectories. This corroborates the validity of the nonlinear model and the control scheme. Finally, these experiments demonstrate the highly nonlinear nature of the WIP system and robustness of the control scheme.
Labels: cs.RO, cs.SY | __index_level_0__: 115,111
2307.06555
Deep Network Approximation: Beyond ReLU to Diverse Activation Functions
This paper explores the expressive power of deep neural networks for a diverse range of activation functions. An activation function set $\mathscr{A}$ is defined to encompass the majority of commonly used activation functions, such as $\mathtt{ReLU}$, $\mathtt{LeakyReLU}$, $\mathtt{ReLU}^2$, $\mathtt{ELU}$, $\mathtt{CELU}$, $\mathtt{SELU}$, $\mathtt{Softplus}$, $\mathtt{GELU}$, $\mathtt{SiLU}$, $\mathtt{Swish}$, $\mathtt{Mish}$, $\mathtt{Sigmoid}$, $\mathtt{Tanh}$, $\mathtt{Arctan}$, $\mathtt{Softsign}$, $\mathtt{dSiLU}$, and $\mathtt{SRS}$. We demonstrate that for any activation function $\varrho\in \mathscr{A}$, a $\mathtt{ReLU}$ network of width $N$ and depth $L$ can be approximated to arbitrary precision by a $\varrho$-activated network of width $3N$ and depth $2L$ on any bounded set. This finding enables the extension of most approximation results achieved with $\mathtt{ReLU}$ networks to a wide variety of other activation functions, albeit with slightly increased constants. Significantly, we establish that the (width,$\,$depth) scaling factors can be further reduced from $(3,2)$ to $(1,1)$ if $\varrho$ falls within a specific subset of $\mathscr{A}$. This subset includes activation functions such as $\mathtt{ELU}$, $\mathtt{CELU}$, $\mathtt{SELU}$, $\mathtt{Softplus}$, $\mathtt{GELU}$, $\mathtt{SiLU}$, $\mathtt{Swish}$, and $\mathtt{Mish}$.
Labels: cs.LG | __index_level_0__: 379,104
1904.11490
RepPoints: Point Set Representation for Object Detection
Modern object detectors rely heavily on rectangular bounding boxes, such as anchors, proposals and the final predictions, to represent objects at various recognition stages. The bounding box is convenient to use but provides only a coarse localization of objects and leads to a correspondingly coarse extraction of object features. In this paper, we present \textbf{RepPoints} (representative points), a new finer representation of objects as a set of sample points useful for both localization and recognition. Given ground truth localization and recognition targets for training, RepPoints learn to automatically arrange themselves in a manner that bounds the spatial extent of an object and indicates semantically significant local areas. They furthermore do not require the use of anchors to sample a space of bounding boxes. We show that an anchor-free object detector based on RepPoints can be as effective as the state-of-the-art anchor-based detection methods, with 46.5 AP and 67.4 $AP_{50}$ on the COCO test-dev detection benchmark, using ResNet-101 model. Code is available at https://github.com/microsoft/RepPoints.
Labels: cs.CV | __index_level_0__: 128,872
2312.17076
Minimally-intrusive Navigation in Dense Crowds with Integrated Macro and Micro-level Dynamics
In mobile robot navigation, despite advancements, the generation of optimal paths often disrupts pedestrian areas. To tackle this, we propose three key contributions to improve human-robot coexistence in shared spaces. Firstly, we have established a comprehensive framework to understand disturbances at individual and flow levels. Our framework provides specialized computational strategies for in-depth studies of human-robot interactions from both micro and macro perspectives. By employing novel penalty terms, namely Flow Disturbance Penalty (FDP) and Individual Disturbance Penalty (IDP), our framework facilitates a more nuanced assessment and analysis of the robot navigation's impact on pedestrians. Secondly, we introduce an innovative sampling-based navigation system that adeptly integrates a suite of safety measures with the predictability of robotic movements. This system not only accounts for traditional factors such as trajectory length and travel time but also actively incorporates pedestrian awareness. Our navigation system aims to minimize disturbances and promote harmonious coexistence by considering safety protocols, trajectory clarity, and pedestrian engagement. Lastly, we validate our algorithm's effectiveness and real-time performance through simulations and real-world tests, demonstrating its ability to navigate with minimal pedestrian disturbance in various environments.
Labels: cs.RO | __index_level_0__: 418,614
1808.09933
Certified Mapper: Repeated testing for acyclicity and obstructions to the nerve lemma
The Mapper algorithm does not include a check for whether the cover produced conforms to the requirements of the nerve lemma. To perform a check for obstructions to the nerve lemma, statistical considerations of multiple testing quickly arise. In this paper, we propose several statistical approaches to finding obstructions: through a persistent nerve lemma, through simulation testing, and using a parametric refinement of simulation tests. We suggest Certified Mapper -- a method built from these approaches to generate certificates of non-obstruction, or identify specific obstructions to the nerve lemma -- and we give recommendations for which statistical approaches are most appropriate for the task.
Labels: cs.LG | __index_level_0__: 106,297
2110.03170
TreeGCN-ED: Encoding Point Cloud using a Tree-Structured Graph Network
Point clouds are one of the widely used techniques for representing and storing 3D geometric data. In the past, several methods have been proposed for processing point clouds. Methods such as PointNet and FoldingNet have shown promising results for tasks like 3D shape classification and segmentation. This work proposes a tree-structured autoencoder framework to generate robust embeddings of point clouds by utilizing hierarchical information using graph convolution. We perform multiple experiments to assess the quality of embeddings generated by the proposed encoder architecture and visualize the t-SNE map to highlight its ability to distinguish between different object classes. We further demonstrate the applicability of the proposed framework in applications such as 3D point cloud completion and single-image-based 3D reconstruction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
259,393
1911.08794
Natural Language Generation Challenges for Explainable AI
Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific NLG for XAI research challenges.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
154,312
2105.12021
Inner Approximations of the Positive-Semidefinite Cone via Grassmannian Packings
We investigate the problem of finding inner approximations of positive semidefinite (PSD) cones. We develop a novel decomposition framework of the PSD cone by means of conical combinations of smaller dimensional sub-cones. We show that many inner approximation techniques could be summarized within this framework, including the set of (scaled) diagonally dominant matrices, factor-width-$k$ matrices, and chordal sparse matrices. Furthermore, we provide a more flexible family of inner approximations of the PSD cone, where we aim to arrange the sub-cones so that they are maximally separated from each other. In doing so, these approximations tend to occupy large fractions of the volume of the PSD cone. The proposed approach is connected to a classical packing problem in Riemannian geometry. Precisely, we show that the problem of finding maximally distant sub-cones in an ambient PSD cone is equivalent to the problem of packing subspaces in a Grassmannian manifold. We further leverage existing computational methods for constructing packings in Grassmannian manifolds to build tighter approximations of the PSD cone. Numerical experiments show how the proposed framework can balance between accuracy and computational complexity, to efficiently solve positive-semidefinite programs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
236,889
2306.02268
Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection
Semi-supervised object detection (SSOD) has made significant progress with the development of pseudo-label-based end-to-end methods. However, many of these methods face challenges due to class imbalance, which hinders the effectiveness of the pseudo-label generator. Furthermore, in the literature, it has been observed that low-quality pseudo-labels severely limit the performance of SSOD. In this paper, we examine the root causes of low-quality pseudo-labels and present novel learning mechanisms to improve the label generation quality. To cope with high false-negative and low precision rates, we introduce an adaptive thresholding mechanism that helps the proposed network to filter out optimal bounding boxes. We further introduce a Jitter-Bagging module to provide accurate information on localization to help refine the bounding boxes. Additionally, two new losses are introduced using the background and foreground scores predicted by the teacher and student networks to improve the pseudo-label recall rate. Furthermore, our method applies strict supervision to the teacher network by feeding strong & weak augmented data to generate robust pseudo-labels so that it can detect small and complex objects. Finally, extensive experiments show that the proposed network outperforms state-of-the-art methods on MS-COCO and Pascal VOC datasets and allows the baseline network to achieve 100% supervised performance with much less (i.e., 20%) labeled data.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
370,825
2111.06980
GraSSNet: Graph Soft Sensing Neural Networks
In the era of big data, data-driven based classification has become an essential method in smart manufacturing to guide production and optimize inspection. The industrial data obtained in practice is usually time-series data collected by soft sensors, which are highly nonlinear, nonstationary, imbalanced, and noisy. Most existing soft-sensing machine learning models focus on capturing either intra-series temporal dependencies or pre-defined inter-series correlations, while ignoring the correlation between labels as each instance is associated with multiple labels simultaneously. In this paper, we propose a novel graph based soft-sensing neural network (GraSSNet) for multivariate time-series classification of noisy and highly-imbalanced soft-sensing data. The proposed GraSSNet is able to 1) capture the inter-series and intra-series dependencies jointly in the spectral domain; 2) exploit the label correlations by superimposing label graph that built from statistical co-occurrence information; 3) learn features with attention mechanism from both textual and numerical domain; and 4) leverage unlabeled data and mitigate data imbalance by semi-supervised learning. Comparative studies with other commonly used classifiers are carried out on Seagate soft sensing data, and the experimental results validate the competitive performance of our proposed method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
266,237
2307.02007
Remote Sensing Image Change Detection with Graph Interaction
Modern remote sensing image change detection has witnessed substantial advancements by harnessing the potent feature extraction capabilities of CNNs and Transformers. Yet, prevailing change detection techniques consistently prioritize extracting semantic features related to significant alterations, overlooking the viability of directly interacting with bitemporal image features. In this letter, we propose a bitemporal image graph interaction network for remote sensing change detection, namely BGINet-CD. More specifically, by leveraging the concept of non-local operations and mapping the features obtained from the backbone network to the graph structure space, we propose a unified self-focus mechanism for bitemporal images. This approach enhances the information coupling between the two temporal images while effectively suppressing task-irrelevant interference. Based on a streamlined backbone architecture, namely ResNet18, our model demonstrates superior performance compared to other state-of-the-art (SOTA) methods on the GZ CD dataset. Moreover, the model exhibits an enhanced trade-off between accuracy and computational efficiency, further improving its overall effectiveness.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
377,550
2002.08538
Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems
We consider the problem of learning stabilizable systems governed by nonlinear state equation $h_{t+1}=\phi(h_t,u_t;\theta)+w_t$. Here $\theta$ is the unknown system dynamics, $h_t$ is the state, $u_t$ is the input and $w_t$ is the additive noise vector. We study gradient based algorithms to learn the system dynamics $\theta$ from samples obtained from a single finite trajectory. If the system is run by a stabilizing input policy, we show that temporally-dependent samples can be approximated by i.i.d. samples via a truncation argument by using mixing-time arguments. We then develop new guarantees for the uniform convergence of the gradients of empirical loss. Unlike existing work, our bounds are noise sensitive which allows for learning ground-truth dynamics with high accuracy and small sample complexity. Together, our results facilitate efficient learning of the general nonlinear system under stabilizing policy. We specialize our guarantees to entry-wise nonlinear activations and verify our theory in various numerical experiments.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
164,785
2102.13493
ACDnet: An action detection network for real-time edge computing based on flow-guided feature approximation and memory aggregation
Interpreting human actions requires understanding the spatial and temporal context of the scenes. State-of-the-art action detectors based on Convolutional Neural Network (CNN) have demonstrated remarkable results by adopting two-stream or 3D CNN architectures. However, these methods typically operate in a non-real-time, offline fashion due to system complexity to reason spatio-temporal information. Consequently, their high computational cost is not compliant with emerging real-world scenarios such as service robots or public surveillance where detection needs to take place at resource-limited edge devices. In this paper, we propose ACDnet, a compact action detection network targeting real-time edge computing which addresses both efficiency and accuracy. It intelligently exploits the temporal coherence between successive video frames to approximate their CNN features rather than naively extracting them. It also integrates memory feature aggregation from past video frames to enhance current detection stability, implicitly modeling long temporal cues over time. Experiments conducted on the public benchmark datasets UCF-24 and JHMDB-21 demonstrate that ACDnet, when integrated with the SSD detector, can robustly achieve detection well above real-time (75 FPS). At the same time, it retains reasonable accuracy (70.92 and 49.53 frame mAP) compared to other top-performing methods using far heavier configurations. Codes will be available at https://github.com/dginhac/ACDnet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
222,079
2308.08459
Knowledge Prompt-tuning for Sequential Recommendation
Pre-trained language models (PLMs) have demonstrated strong performance in sequential recommendation (SR), which are utilized to extract general knowledge. However, existing methods still lack domain knowledge and struggle to capture users' fine-grained preferences. Meanwhile, many traditional SR methods improve this issue by integrating side information while suffering from information loss. To summarize, we believe that a good recommendation system should utilize both general and domain knowledge simultaneously. Therefore, we introduce an external knowledge base and propose Knowledge Prompt-tuning for Sequential Recommendation (\textbf{KP4SR}). Specifically, we construct a set of relationship templates and transform a structured knowledge graph (KG) into knowledge prompts to solve the problem of the semantic gap. However, knowledge prompts disrupt the original data structure and introduce a significant amount of noise. We further construct a knowledge tree and propose a knowledge tree mask, which restores the data structure in a mask matrix form, thus mitigating the noise problem. We evaluate KP4SR on three real-world datasets, and experimental results show that our approach outperforms state-of-the-art methods on multiple evaluation metrics. Specifically, compared with PLM-based methods, our method improves NDCG@5 and HR@5 by \textcolor{red}{40.65\%} and \textcolor{red}{36.42\%} on the books dataset, \textcolor{red}{11.17\%} and \textcolor{red}{11.47\%} on the music dataset, and \textcolor{red}{22.17\%} and \textcolor{red}{19.14\%} on the movies dataset, respectively. Our code is publicly available at the link: \href{https://github.com/zhaijianyang/KP4SR}{\textcolor{blue}{https://github.com/zhaijianyang/KP4SR}.}
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
385,909
2401.03337
MTAC: Hierarchical Reinforcement Learning-based Multi-gait Terrain-adaptive Quadruped Controller
Urban search and rescue missions require rapid first response to minimize loss of life and damage. Often, such efforts are assisted by humanitarian robots which need to handle dynamic operational conditions such as uneven and rough terrains, especially during mass casualty incidents like an earthquake. Quadruped robots, owing to their versatile design, have the potential to assist in such scenarios. However, control of quadruped robots in dynamic and rough terrain environments is a challenging problem due to the many degrees of freedom of these robots. Current locomotion controllers for quadrupeds are limited in their ability to produce multiple adaptive gaits, solve tasks in a time and resource-efficient manner, and require tedious training and manual tuning procedures. To address these challenges, we propose MTAC: a multi-gait terrain-adaptive controller, which utilizes a Hierarchical reinforcement learning (HRL) approach while being time and memory-efficient. We show that our proposed method scales well to a diverse range of environments with similar compute times as state-of-the-art methods. Our method showed greater than 75% on most tasks, outperforming previous work on the majority of test cases.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
420,065
2112.07242
On the Impact of Channel Estimation on the Design and Analysis of IRSA based Systems
Irregular repetition slotted aloha (IRSA) is a distributed grant-free random access protocol where users transmit multiple replicas of their packets to a base station (BS). The BS recovers the packets using successive interference cancellation. In this paper, we first derive channel estimates for IRSA, exploiting the sparsity structure of IRSA transmissions, when non-orthogonal pilots are employed across users to facilitate channel estimation at the BS. Allowing for the use of non-orthogonal pilots is important, as the length of orthogonal pilots scales linearly with the total number of devices, leading to prohibitive overhead as the number of devices increases. Next, we present a novel analysis of the throughput of IRSA under practical channel estimation errors, with the use of multiple antennas at the BS. Finally, we theoretically characterize the asymptotic throughput performance of IRSA using a density evolution based analysis. Simulation results underline the importance of accounting for channel estimation errors in analyzing IRSA, which can even lead to 70% loss in performance in severely interference-limited regimes. We also provide novel insights on the effect of parameters such as pilot length, SNR, number of antennas at the BS, etc, on the system throughput.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
271,423
2109.05113
Natural Multicontact Walking for Robotic Assistive Devices via Musculoskeletal Models and Hybrid Zero Dynamics
Generating stable walking gaits that yield natural locomotion when executed on robotic-assistive devices is a challenging task that often requires hand-tuning by domain experts. This paper presents an alternative methodology, where we propose the addition of musculoskeletal models directly into the gait generation process to intuitively shape the resulting behavior. In particular, we construct a multi-domain hybrid system model that combines the system dynamics with muscle models to represent natural multicontact walking. Provably stable walking gaits can then be generated for this model via the hybrid zero dynamics (HZD) method. We experimentally apply our integrated framework towards achieving multicontact locomotion on a dual-actuated transfemoral prosthesis, AMPRO3, for two subjects. The results demonstrate that enforcing muscle model constraints produces gaits that yield natural locomotion (as analyzed via comparison to motion capture data and electromyography). Moreover, gaits generated with our framework were strongly preferred by the non-disabled prosthetic users as compared to gaits generated with the nominal HZD method, even with the use of systematic tuning methods. We conclude that the novel approach of combining robotic walking methods (specifically HZD) with muscle models successfully generates anthropomorphic robotic-assisted locomotion.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
254,662
0803.1600
Understanding Retail Productivity by Simulating Management Practise
Intelligent agents offer a new and exciting way of understanding the world of work. In this paper we apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities in the future. Our research so far has led us to conduct case study work with a top ten UK retailer, collecting data in four departments in two stores. Based on our case study data we have built and tested a first version of a department store simulator. In this paper we will report on the current development of our simulator which includes new features concerning more realistic data on the pattern of footfall during the day and the week, a more differentiated view of customers, and the evolution of customers over time. This allows us to investigate more complex scenarios and to analyze the impact of various management practices.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
1,423
2405.12856
LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise needed to integrate this prior knowledge into probabilistic modeling typically limits the application of these models to specialists. Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge. Large Language Models (LLMs) provide a useful starting point for designing such a tool since they 1) provide an interface where users can incorporate expert insights in natural language and 2) provide an opportunity for leveraging latent problem-relevant knowledge encoded in LLMs that users may not have themselves. We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from LLMs. We examine these joint predictive distributions, which we call LLM Processes, over arbitrarily-many quantities in settings such as forecasting, multi-dimensional regression, black-box optimization, and image modeling. We investigate the practical details of prompting to elicit coherent predictive distributions, and demonstrate their effectiveness at regression. Finally, we demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions. This lets us begin to explore the rich, grounded hypothesis space that LLMs implicitly encode.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
455,677
1704.01891
On Multi-source Networks: Enumeration, Rate Region Computation, and Hierarchy
Recent algorithmic developments have enabled computers to automatically determine and prove the capacity regions of small hypergraph networks under network coding. A structural theory relating network coding problems of different sizes is developed to make best use of this newfound computational capability. A formal notion of network minimality is developed which removes components of a network coding problem that are inessential to its core complexity. Equivalence between different network coding problems under relabeling is formalized via group actions, and an algorithm which can directly list single representatives from each equivalence class of minimal networks up to a prescribed network size is presented. This algorithm, together with rate region software, is leveraged to create a database containing the rate regions for all minimal network coding problems with five or fewer sources and edges, a collection of 744119 equivalence classes representing more than 9 million networks. In order to best learn from this database, and to leverage it to infer rate regions and their characteristics of networks at scale, a hierarchy between different network coding problems is created with a new theory of combinations and embedding operators.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
71,342
2005.02160
Printing and Scanning Attack for Image Counter Forensics
Examining the authenticity of images has become increasingly important as manipulation tools become more accessible and advanced. Recent work has shown that while CNN-based image manipulation detectors can successfully identify manipulations, they are also vulnerable to adversarial attacks, ranging from simple double JPEG compression to advanced pixel-based perturbation. In this paper we explore another method of highly plausible attack: printing and scanning. We demonstrate the vulnerability of two state-of-the-art models to this type of attack. We also propose a new machine learning model that performs comparably to these state-of-the-art models when trained and validated on printed and scanned images. Of the three models, our proposed model outperforms the others when trained and validated on images from a single printer. To facilitate this exploration, we create a dataset of over 6,000 printed and scanned image blocks. Further analysis suggests that variation between images produced from different printers is significant, large enough that good validation accuracy on images from one printer does not imply similar validation accuracy on identical images from a different printer.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
175,792
2409.11516
Learning-Augmented Frequency Estimation in Sliding Windows
We show how to utilize machine learning approaches to improve sliding window algorithms for approximate frequency estimation problems, under the ``algorithms with predictions'' framework. In this dynamic environment, previous learning-augmented algorithms are less effective, since properties in sliding window resolution can differ significantly from the properties of the entire stream. Our focus is on the benefits of predicting and filtering out items with large next arrival times -- that is, there is a large gap until their next appearance -- from the stream, which we show improves the memory-accuracy tradeoffs significantly. We provide theorems that provide insight into how and by how much our technique can improve the sliding window algorithm, as well as experimental results using real-world data sets. Our work demonstrates that predictors can be useful in the challenging sliding window setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
489,189
2209.11748
GLSO: Grammar-guided Latent Space Optimization for Sample-efficient Robot Design Automation
Robots have been used in all sorts of automation, and yet the design of robots remains mainly a manual task. We seek to provide design tools to automate the design of robots themselves. An important challenge in robot design automation is the large and complex design search space which grows exponentially with the number of components, making optimization difficult and sample inefficient. In this work, we present Grammar-guided Latent Space Optimization (GLSO), a framework that transforms design automation into a low-dimensional continuous optimization problem by training a graph variational autoencoder (VAE) to learn a mapping between the graph-structured design space and a continuous latent space. This transformation allows optimization to be conducted in a continuous latent space, where sample efficiency can be significantly boosted by applying algorithms such as Bayesian Optimization. GLSO guides training of the VAE using graph grammar rules and robot world space features, such that the learned latent space focuses on valid robots and is easier for the optimization algorithm to explore. Importantly, the trained VAE can be reused to search for designs specialized to multiple different tasks without retraining. We evaluate GLSO by designing robots for a set of locomotion tasks in simulation, and demonstrate that our method outperforms related state-of-the-art robot design automation methods.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
319,283
2011.12131
A Reusable Framework Based on Reinforcement Learning to Design Antennas for Curved Surfaces
The design and implementation of low-profile antennas has been analyzed in past decades from different perspectives while the purpose is to have a small size in the device, and an adequate electromagnetic behavior. This work pursues a methodology to identify small antennas and consequently presents some similarities. Meanwhile, curved surfaces are considered for a certain variety of antennas with reduced size. The so-called deep reinforcement learning technique is used as an assistance against morphological variations that are specifically taken into account in this work. The objective is to identify antennas that can be efficiently mounted on the surface of metal tubes such as those frequently present in public infrastructure (e.g. traffic lights and luminaries). The motivation is to reduce the visual impact and optimize the radiation pattern of the antenna. It is analyzed that if changes in variables such as the radius of curvature, or the electromagnetic properties of the materials appear, an automatic identification of the underlying characteristics of the problem (by means of machine learning techniques) can readjust the design efficiently. The results obtained in this work are analyzed based on variables that are typically used to characterize antennas, such as their impedance and radiation pattern.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
208,067
1701.06884
Combination Networks with End-user-caches: Novel Achievable and Converse Bounds under Uncoded Cache Placement
Caching is an efficient way to reduce network traffic congestion during peak hours by storing some content at the users' local caches. For the shared-link network with end-user-caches, Maddah-Ali and Niesen proposed a two-phase coded caching strategy. In practice, users may communicate with the server through intermediate relays. This paper studies the tradeoff between the memory size $M$ and the network load $R$ for networks where a server with $N$ files is connected to $H$ relays (without caches), which in turn are connected to $K$ users equipped with caches of $M$ files. When each user is connected to a different subset of $r$ relays, i.e., $K = \binom{H}{r}$, the system is referred to as a {\it combination network with end-user-caches}. In this work, converse bounds are derived for the practically motivated case of {\it uncoded} cache contents, that is, bits of the various files are directly pushed into the user caches without any coding. In this case, once the cache contents and the user demands are known, the problem reduces to a general index coding problem. This paper shows that relying on a well-known "acyclic index coding converse bound" results in converse bounds that are not tight for combination networks with end-user-caches. A novel converse bound that leverages the network topology is proposed, which is the tightest converse bound known to date. As a result of independent interest, an inequality that generalizes the well-known sub-modularity of entropy is derived. Several novel caching schemes are proposed, based on the Maddah-Ali and Niesen cache placement. The proposed schemes are proved: (i) to be (order) optimal for some $(N,M,H,r)$ parameters regimes under the constraint of uncoded cache placement, and (ii) to outperform the state-of-the-art schemes in numerical evaluations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
67,209
2312.05021
A Negative Result on Gradient Matching for Selective Backprop
With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speed up the training process is Selective Backprop. For this approach, we perform a forward pass to obtain a loss value for each data point in a minibatch. The backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build on this approach, but seek to improve the subset selection mechanism by choosing the (weighted) subset which best matches the mean gradient over the entire minibatch. We use the gradients w.r.t. the model's last layer as a cheap proxy, resulting in virtually no overhead in addition to the forward pass. At the same time, for our experiments we add a simple random selection baseline which has been absent from prior work. Surprisingly, we find that both the loss-based as well as the gradient-matching strategy fail to consistently outperform the random baseline.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
413,921
2003.02341
QED: using Quality-Environment-Diversity to evolve resilient robot swarms
In swarm robotics, any of the robots in a swarm may be affected by different faults, resulting in significant performance declines. To allow fault recovery from randomly injected faults to different robots in a swarm, a model-free approach may be preferable due to the accumulation of faults in models and the difficulty to predict the behaviour of neighbouring robots. One model-free approach to fault recovery involves two phases: during simulation, a quality-diversity algorithm evolves a behaviourally diverse archive of controllers; during the target application, a search for the best controller is initiated after fault injection. In quality-diversity algorithms, the choice of the behavioural descriptor is a key design choice that determines the quality of the evolved archives, and therefore the fault recovery performance. Although the environment is an important determinant of behaviour, the impact of environmental diversity is often ignored in the choice of a suitable behavioural descriptor. This study compares different behavioural descriptors, including two generic descriptors that work on a wide range of tasks, one hand-coded descriptor which fits the domain of interest, and one novel type of descriptor based on environmental diversity, which we call Quality-Environment-Diversity (QED). Results demonstrate that the above-mentioned model-free approach to fault recovery is feasible in the context of swarm robotics, reducing the fault impact by a factor 2-3. Further, the environmental diversity obtained with QED yields a unique behavioural diversity profile that allows it to recover from high-impact faults.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
166,910
2411.17406
CoA: Chain-of-Action for Generative Semantic Labels
Recent advances in vision-language models (VLMs) have demonstrated remarkable capability in image classification. These VLMs leverage a predefined set of categories to construct text prompts for zero-shot reasoning. However, in more open-ended domains like autonomous driving, using a predefined set of labels becomes impractical, as the semantic label space is unknown and constantly evolving. Additionally, fixed embedding text prompts often tend to predict a single label (while in reality, multiple labels commonly exist per image). In this paper, we introduce CoA, an innovative Chain-of-Action (CoA) method that generates labels aligned with all contextually relevant features of an image. CoA is designed based on the observation that enriched and valuable contextual information improves generative performance during inference. Traditional vision-language models tend to output singular and redundant responses. Therefore, we employ a tailored CoA to alleviate this problem. We first break down the generative labeling task into detailed actions and construct a CoA leading to the final generative objective. Each action extracts and merges key information from the previous action and passes the enriched information as context to the next action, ultimately improving the VLM in generating comprehensive and accurate semantic labels. We assess the effectiveness of CoA through comprehensive evaluations on widely-used benchmark datasets and the results demonstrate significant improvements across key performance metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
511,430
2412.11711
MiMoTable: A Multi-scale Spreadsheet Benchmark with Meta Operations for Table Reasoning
Extensive research has been conducted to explore the capability of Large Language Models (LLMs) for table reasoning and has significantly improved the performance on existing benchmarks. However, tables and user questions in real-world applications are more complex and diverse, presenting a non-negligible gap compared to the existing benchmarks. To fill the gap, we propose a \textbf{M}ult\textbf{i}-scale spreadsheet benchmark with \textbf{M}eta \textbf{o}perations for \textbf{Table} reasoning, named as MiMoTable. Specifically, MiMoTable incorporates two key features. First, the tables in MiMoTable are all spreadsheets used in real-world scenarios, which cover seven domains and contain different types. Second, we define a new criterion with six categories of meta operations for measuring the difficulty of each question in MiMoTable, which simultaneously provides a new perspective for measuring the difficulty of the existing benchmarks. Experimental results show that Claude-3.5-Sonnet achieves the best performance with 77.4\% accuracy, indicating that there is still significant room for LLMs to improve on MiMoTable. Furthermore, we grade the difficulty of existing benchmarks according to our new criterion. Experiments have shown that the performance of LLMs decreases as the difficulty of benchmarks increases, thereby proving the effectiveness of our proposed new criterion.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
517,548
2411.04445
Large Sets of Quasi-Complementary Sequences From Polynomials over Finite Fields and Gaussian Sums
Perfect complementary sequence sets (PCSSs) are widely used in multi-carrier code-division multiple-access (MC-CDMA) communication systems. However, the set size of a PCSS is upper bounded by the number of row sequences of each two-dimensional matrix in the PCSS. Then quasi-complementary sequence sets (QCSSs) were proposed to support more users in MC-CDMA communications. For practical applications, it is desirable to construct an $(M,K,N,\vartheta_{\max})$-QCSS with $M$ as large as possible and $\vartheta_{\max}$ as small as possible, where $M$ is the number of matrices with $K$ rows and $N$ columns in the set and $\vartheta_{\max}$ denotes its periodic tolerance. There exists a tradeoff among these parameters. Constructing QCSSs achieving or nearly achieving the known correlation lower bound has been an interesting research topic. Up to now, only a few constructions of asymptotically optimal or near-optimal periodic QCSSs have been reported in the literature. In this paper, based on polynomials over finite fields and Gaussian sums, we construct five new families of asymptotically optimal or near-optimal periodic QCSSs with large set sizes and low periodic tolerances. These families of QCSSs have set size $\Theta(K^2)$ or $\Theta(K^3)$ and flock size $K$. To the best of our knowledge, only a small amount of known families of periodic QCSSs with set size $\Theta(K^2)$ have been constructed and most of other known periodic QCSSs have set sizes much smaller than $\Theta(K^2)$. Our new constructed periodic QCSSs with set size $\Theta(K^2)$ and flock size $K$ have the best parameters among all known ones. They have larger set sizes or lower periodic tolerances. The periodic QCSSs with set size $\Theta(K^3)$ and flock size $K$ constructed in this paper have the largest set size among all known families of asymptotically optimal or near-optimal periodic QCSSs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
506,272
2310.04536
Improving Portfolio Performance Using a Novel Method for Predicting Financial Regimes
This work extends previous work in regime detection, which allowed trading positions to be profitably adjusted when a new regime was detected, to ex ante prediction of regimes, leading to substantial performance improvements over the earlier model, over all three asset classes considered (equities, commodities, and foreign exchange), over a test period of four years. The proposed new model is also benchmarked over this same period against a hidden Markov model, the most popular current model for financial regime prediction, and against an appropriate index benchmark for each asset class; in the case of the commodities model, the test-period cost-adjusted cumulative return is over four times higher than that expected from the index. Notably, the proposed model makes use of a contrarian trading strategy, not uncommon in the financial industry but relatively unexplored in machine learning models. The model also makes use of frequent short positions, something not always desirable to investors due to issues of both financial risk and ethics; however, it is discussed how further work could remove this reliance on shorting and allow the construction of a long-only version of the model.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
397,700
2306.13325
Differentiable Display Photometric Stereo
Photometric stereo leverages variations in illumination conditions to reconstruct surface normals. Display photometric stereo, which employs a conventional monitor as an illumination source, has the potential to overcome limitations often encountered in bulky and difficult-to-use conventional setups. In this paper, we present differentiable display photometric stereo (DDPS), addressing an often overlooked challenge in display photometric stereo: the design of display patterns. Departing from using heuristic display patterns, DDPS learns the display patterns that yield accurate normal reconstruction for a target system in an end-to-end manner. To this end, we propose a differentiable framework that couples basis-illumination image formation with analytic photometric-stereo reconstruction. The differentiable framework facilitates the effective learning of display patterns via auto-differentiation. Also, for training supervision, we propose to use 3D printing for creating a real-world training dataset, enabling accurate reconstruction on the target real-world setup. Finally, we exploit that conventional LCD monitors emit polarized light, which allows for the optical separation of diffuse and specular reflections when combined with a polarization camera, leading to accurate normal reconstruction. Extensive evaluation of DDPS shows improved normal-reconstruction accuracy compared to heuristic patterns and demonstrates compelling properties such as robustness to pattern initialization, calibration errors, and simplifications in image formation and reconstruction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
375,242
2009.11649
Prescribed-Time Fully Distributed Nash Equilibrium Seeking in Noncooperative Games
In this paper, we investigate a prescribed-time and fully distributed Nash Equilibrium (NE) seeking problem for continuous-time noncooperative games. By exploiting pseudo-gradient play and consensus-based schemes, various distributed NE seeking algorithms are presented over either fixed or switching communication topologies so that the convergence to the NE is reached in a prescribed time. In particular, a prescribed-time distributed NE seeking algorithm is first developed under a fixed graph to find the NE in a prior-given and user-defined time, provided that a static controller gain can be selected based on certain global information such as the algebraic connectivity of the communication graph and both the Lipschitz and monotone constants of the pseudo-gradient associated with players' objective functions. Secondly, a prescribed-time and fully distributed NE seeking algorithm is proposed to remove the need for global information by designing heterogeneous dynamic gains that tune the weights of the communication topology on-line. Further, we extend this algorithm to accommodate jointly switching topologies. It is theoretically proved that the global convergence of those proposed algorithms to the NE is rigorously guaranteed in a prescribed time based on a time function transformation approach. Finally, numerical simulation results are presented to verify the effectiveness of the designs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
197,221
2412.04703
Transformers Struggle to Learn to Search
Search is an ability foundational in many important tasks, and recent studies have shown that large language models (LLMs) struggle to perform search robustly. It is unknown whether this inability is due to a lack of data, insufficient model parameters, or fundamental limitations of the transformer architecture. In this work, we use the foundational graph connectivity problem as a testbed to generate effectively limitless high-coverage data to train small transformers and test whether they can learn to perform search. We find that, when given the right training distribution, the transformer is able to learn to search. We analyze the algorithm that the transformer has learned through a novel mechanistic interpretability technique that enables us to extract the computation graph from the trained model. We find that for each vertex in the input graph, transformers compute the set of vertices reachable from that vertex. Each layer then progressively expands these sets, allowing the model to search over a number of vertices exponential in the number of layers. However, we find that as the input graph size increases, the transformer has greater difficulty in learning the task. This difficulty is not resolved even as the number of parameters is increased, suggesting that increasing model scale will not lead to robust search abilities. We also find that performing search in-context (i.e., chain-of-thought) does not resolve this inability to learn to search on larger graphs.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
514,522
1905.00495
A Modular Framework for Motion Planning using Safe-by-Design Motion Primitives
We present a modular framework for solving a motion planning problem among a group of robots. The proposed framework utilizes a finite set of low level motion primitives to generate motions in a gridded workspace. The constraints on allowable sequences of motion primitives are formalized through a maneuver automaton. At the high level, a control policy determines which motion primitive is executed in each box of the gridded workspace. We state general conditions on motion primitives to obtain provably correct behavior so that a library of safe-by-design motion primitives can be designed. The overall framework yields a highly robust design by utilizing feedback strategies at both the low and high levels. We provide specific designs for motion primitives and control policies suitable for multi-robot motion planning; the modularity of our approach enables one to independently customize the designs of each of these components. Our approach is experimentally validated on a group of quadrocopters.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
129,480
2212.06250
ScanEnts3D: Exploiting Phrase-to-3D-Object Correspondences for Improved Visio-Linguistic Models in 3D Scenes
The two popular datasets ScanRefer [16] and ReferIt3D [3] connect natural language to real-world 3D data. In this paper, we curate a large-scale and complementary dataset extending both the aforementioned ones by associating all objects mentioned in a referential sentence to their underlying instances inside a 3D scene. Specifically, our Scan Entities in 3D (ScanEnts3D) dataset provides explicit correspondences between 369k objects across 84k natural referential sentences, covering 705 real-world scenes. Crucially, we show that by incorporating intuitive losses that enable learning from this novel dataset, we can significantly improve the performance of several recently introduced neural listening architectures, including improving the SoTA in both the Nr3D and ScanRefer benchmarks by 4.3% and 5.0%, respectively. Moreover, we experiment with competitive baselines and recent methods for the task of language generation and show that, as with neural listeners, 3D neural speakers can also noticeably benefit by training with ScanEnts3D, including improving the SoTA by 13.2 CIDEr points on the Nr3D benchmark. Overall, our carefully conducted experimental studies strongly support the conclusion that, by learning on ScanEnts3D, commonly used visio-linguistic 3D architectures can become more efficient and interpretable in their generalization without needing to provide these newly collected annotations at test time. The project's webpage is https://scanents3d.github.io/ .
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
336,045
2110.07235
HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap, a volumetric capture and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric and audio data. Despite the existence of multi-view color datasets captured with the use of hardware (HW) synchronization, to the best of our knowledge, HUMAN4D is the first and only public resource that provides volumetric depth maps with high synchronization precision due to the use of intra- and inter-sensor HW-SYNC. Moreover, a spatio-temporally aligned scanned and rigged 3D character complements HUMAN4D to enable joint research on time-varying and high-quality dynamic meshes. We provide evaluation baselines by benchmarking HUMAN4D with state-of-the-art human pose estimation and 3D compression methods. For the former, we apply 2D and 3D pose estimation algorithms both on single- and multi-view data cues. For the latter, we benchmark open-source 3D codecs on volumetric data respecting online volumetric video encoding and steady bit-rates. Furthermore, qualitative and quantitative visual comparison between mesh-based volumetric data reconstructed in different qualities showcases the available options with respect to 4D representations. HUMAN4D is introduced to the computer vision and graphics research communities to enable joint research on spatio-temporally aligned pose, volumetric, mRGBD and audio data cues. The dataset and its code are available at https://tofis.github.io/myurls/human4d.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
260,911
1307.5827
Cooperative Energy Harvesting Networks with Spatially Random Users
This paper considers a cooperative network with multiple source-destination pairs and one energy harvesting relay. The outage probability experienced by users in this network is characterized by taking the spatial randomness of user locations into consideration. In addition, the cooperation among users is modeled as a canonical coalitional game and the grand coalition is shown to be stable in the addressed scenario. Simulation results are provided to demonstrate the accuracy of the developed analytical results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
25,982
2007.00886
Modelling Drosophila Motion Vision Pathways for Decoding the Direction of Translating Objects Against Cluttered Moving Backgrounds
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly \textit{Drosophila} motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
185,257
2205.01414
Multimodal Detection of Unknown Objects on Roads for Autonomous Driving
Tremendous progress in deep learning over the last years has led towards a future with autonomous vehicles on our roads. Nevertheless, the performance of their perception systems is strongly dependent on the quality of the utilized training data. As these usually only cover a fraction of all object classes an autonomous driving system will face, such systems struggle with handling the unexpected. In order to safely operate on public roads, the identification of objects from unknown classes remains a crucial task. In this paper, we propose a novel pipeline to detect unknown objects. Instead of focusing on a single sensor modality, we make use of lidar and camera data by combining state-of-the art detection models in a sequential manner. We evaluate our approach on the Waymo Open Perception Dataset and point out current research gaps in anomaly detection.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
294,573
2203.04967
UNeXt: MLP-based Rapid Medical Image Segmentation Network
UNet and its latest extensions like TransUNet have been the leading medical image segmentation methods in recent years. However, these networks cannot be effectively adopted for rapid image segmentation in point-of-care applications as they are parameter-heavy, computationally complex and slow to use. To this end, we propose UNeXt which is a Convolutional multilayer perceptron (MLP) based network for image segmentation. We design UNeXt in an effective way with an early convolutional stage and a MLP stage in the latent stage. We propose a tokenized MLP block where we efficiently tokenize and project the convolutional features and use MLPs to model the representation. To further boost the performance, we propose shifting the channels of the inputs while feeding in to MLPs so as to focus on learning local dependencies. Using tokenized MLPs in latent space reduces the number of parameters and computational complexity while being able to result in a better representation to help segmentation. The network also consists of skip connections between various levels of encoder and decoder. We test UNeXt on multiple medical image segmentation datasets and show that we reduce the number of parameters by 72x, decrease the computational complexity by 68x, and improve the inference speed by 10x while also obtaining better segmentation performance over the state-of-the-art medical image segmentation architectures. Code is available at https://github.com/jeya-maria-jose/UNeXt-pytorch
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,670
2111.15182
Easy Semantification of Bioassays
Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. We propose a solution for automatically semantifying biological assays. Our solution contrasts the problem of automated semantification as labeling versus clustering where the two methods are on opposite ends of the method complexity spectrum. Characteristically modeling our problem, we find the clustering solution significantly outperforms a deep neural network state-of-the-art labeling approach. This novel contribution is based on two factors: 1) a learning objective closely modeled after the data outperforms an alternative approach with sophisticated semantic modeling; 2) automatically semantifying biological assays achieves a high performance F1 of nearly 83%, which to our knowledge is the first reported standardized evaluation of the task offering a strong benchmark model.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
268,849
2111.05409
Pipeline for 3D reconstruction of the human body from AR/VR headset mounted egocentric cameras
In this paper, we propose a novel pipeline for the 3D reconstruction of the full body from egocentric viewpoints. 3D reconstruction of the human body from egocentric viewpoints is a challenging task as the view is skewed and the body parts farther from the cameras are occluded. One such example is the view from cameras installed below VR headsets. To achieve this task, we first make use of conditional GANs to translate the egocentric views to full body third-person views. This increases the comprehensibility of the image and caters to occlusions. The generated third-person view is further sent through the 3D reconstruction module that generates a 3D mesh of the body. We also train a network that can take the third person full-body view of the subject and generate the texture maps for applying on the mesh. The generated mesh has fairly realistic body proportions and is fully rigged allowing for further applications such as real-time animation and pose transfer in games. This approach can be key to a new domain of mobile human telepresence.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
265,781
1511.05297
On the interplay of network structure and gradient convergence in deep learning
The regularization and output consistency behavior of dropout and layer-wise pretraining for learning deep networks have been fairly well studied. However, our understanding of how the asymptotic convergence of backpropagation in deep architectures is related to the structural properties of the network and other design choices (like denoising and dropout rate) is less clear at this time. An interesting question one may ask is whether the network architecture and input data statistics may guide the choices of learning parameters and vice versa. In this work, we explore the association between such structural, distributional and learnability aspects vis-\`a-vis their interaction with parameter convergence rates. We present a framework to address these questions based on convergence of backpropagation for general nonconvex objectives using first-order information. This analysis suggests an interesting relationship between feature denoising and dropout. Building upon these results, we obtain a setup that provides systematic guidance regarding the choice of learning parameters and network sizes that achieve a certain level of convergence (in the optimization sense) often mediated by statistical attributes of the inputs. Our results are supported by a set of experimental evaluations as well as independent empirical observations reported by other groups.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
49,031
2209.10163
DDGHM: Dual Dynamic Graph with Hybrid Metric Training for Cross-Domain Sequential Recommendation
Sequential Recommendation (SR) characterizes evolving patterns of user behaviors by modeling how users transit among items. However, the short interaction sequences limit the performance of existing SR. To solve this problem, we focus on Cross-Domain Sequential Recommendation (CDSR) in this paper, which aims to leverage information from other domains to improve the sequential recommendation performance of a single domain. Solving CDSR is challenging. On the one hand, how to retain single domain preferences as well as integrate cross-domain influence remains an essential problem. On the other hand, the data sparsity problem cannot be totally solved by simply utilizing knowledge from other domains, due to the limited length of the merged sequences. To address the challenges, we propose DDGHM, a novel framework for the CDSR problem, which includes two main modules, i.e., dual dynamic graph modeling and hybrid metric training. The former captures intra-domain and inter-domain sequential transitions through dynamically constructing two-level graphs, i.e., the local graphs and the global graph, and incorporating them with a fuse attentive gating mechanism. The latter enhances user and item representations by employing hybrid metric learning, including collaborative metric for achieving alignment and contrastive metric for preserving uniformity, to further alleviate the data sparsity issue and improve prediction accuracy. We conduct experiments on two benchmark datasets and the results demonstrate the effectiveness of DDGHM.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
318,773
1603.06216
Skew-t inference with improved covariance matrix approximation
Filtering and smoothing algorithms for linear discrete-time state-space models with skew-t distributed measurement noise are presented. The proposed algorithms improve upon our earlier proposed filter and smoother using the mean field variational Bayes approximation of the posterior distribution to a skew-t likelihood and normal prior. Our simulations show that the proposed variational Bayes approximation gives a more accurate approximation of the posterior covariance matrix than our earlier proposed method. Furthermore, the novel filter and smoother outperform our earlier proposed methods and conventional low complexity alternatives in accuracy and speed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
53,465
2501.14660
Mean-field limit from general mixtures of experts to quantum neural networks
In this work, we study the asymptotic behavior of Mixture of Experts (MoE) trained via gradient flow on supervised learning problems. Our main result establishes the propagation of chaos for a MoE as the number of experts diverges. We demonstrate that the corresponding empirical measure of their parameters is close to a probability measure that solves a nonlinear continuity equation, and we provide an explicit convergence rate that depends solely on the number of experts. We apply our results to a MoE generated by a quantum neural network.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
527,206
2110.09994
DPFM: Deep Partial Functional Maps
We consider the problem of computing dense correspondences between non-rigid shapes with potentially significant partiality. Existing formulations tackle this problem through heavy manifold optimization in the spectral domain, given hand-crafted shape descriptors. In this paper, we propose the first learning method aimed directly at partial non-rigid shape correspondence. Our approach uses the functional map framework, can be trained in a supervised or unsupervised manner, and learns descriptors directly from the data, thus both improving robustness and accuracy in challenging cases. Furthermore, unlike existing techniques, our method is also applicable to partial-to-partial non-rigid matching, in which the common regions on both shapes are unknown a priori. We demonstrate that the resulting method is data-efficient, and achieves state-of-the-art results on several benchmark datasets. Our code and data can be found online: https://github.com/pvnieo/DPFM
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
261,999
0706.3295
Lower bounds on the minimum average distance of binary codes
New lower bounds on the minimum average Hamming distance of binary codes are derived. The bounds are obtained using a linear programming approach.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
349
2404.02257
SnAG: Scalable and Accurate Video Grounding
Temporal grounding of text descriptions in videos is a central problem in vision-language learning and video understanding. Existing methods often prioritize accuracy over scalability -- they have been optimized for grounding only a few text queries within short videos, and fail to scale up to long videos with hundreds of queries. In this paper, we study the effect of cross-modal fusion on the scalability of video grounding models. Our analysis establishes late fusion as a more cost-effective fusion scheme for long-form videos with many text queries. Moreover, it leads us to a novel, video-centric sampling scheme for efficient training. Based on these findings, we present SnAG, a simple baseline for scalable and accurate video grounding. Without bells and whistles, SnAG is 43% more accurate and 1.5x faster than CONE, a state of the art for long-form video grounding on the challenging MAD dataset, while achieving highly competitive results on short videos.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
443,781
2406.05986
Neural-g: A Deep Learning Framework for Mixing Density Estimation
Mixing (or prior) density estimation is an important problem in machine learning and statistics, especially in empirical Bayes $g$-modeling where accurately estimating the prior is necessary for making good posterior inferences. In this paper, we propose neural-$g$, a new neural network-based estimator for $g$-modeling. Neural-$g$ uses a softmax output layer to ensure that the estimated prior is a valid probability density. Under default hyperparameters, we show that neural-$g$ is very flexible and capable of capturing many unknown densities, including those with flat regions, heavy tails, and/or discontinuities. In contrast, existing methods struggle to capture all of these prior shapes. We provide justification for neural-$g$ by establishing a new universal approximation theorem regarding the capability of neural networks to learn arbitrary probability mass functions. To accelerate convergence of our numerical implementation, we utilize a weighted average gradient descent approach to update the network parameters. Finally, we extend neural-$g$ to multivariate prior density estimation. We illustrate the efficacy of our approach through simulations and analyses of real datasets. A software package to implement neural-$g$ is publicly available at https://github.com/shijiew97/neuralG.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
462,384
2010.10047
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods
Deep neural networks have achieved state-of-the-art performance in a variety of fields. Recent works observe that a class of widely used neural networks can be viewed as the Euler method of numerical discretization. From the numerical discretization perspective, Strong Stability Preserving (SSP) methods are more advanced techniques than the explicit Euler method that produce both accurate and stable solutions. Motivated by the SSP property and a generalized Runge-Kutta method, we propose Strong Stability Preserving networks (SSP networks) which improve robustness against adversarial attacks. We empirically demonstrate that the proposed networks improve the robustness against adversarial examples without any defensive methods. Further, the SSP networks are complementary with a state-of-the-art adversarial training scheme. Lastly, our experiments show that SSP networks suppress the blow-up of adversarial perturbations. Our results open up a way to study robust architectures of neural networks leveraging rich knowledge from numerical discretization literature.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
201,766
1608.01859
Wireless-Powered Two-Way Relaying with Power Splitting-based Energy Accumulation
This paper investigates a wireless-powered two-way relay network (WP-TWRN), in which two sources exchange information with the aid of one amplify-and-forward (AF) relay. In contrast to conventional two-way relay networks, we consider the scenario in which the AF relay has no embedded energy supply and is equipped with an energy harvesting unit and a rechargeable battery. As such, it can accumulate the energy harvested from both sources' signals before helping to forward their information. In this paper, we develop a power splitting-based energy accumulation (PS-EA) scheme for the considered WP-TWRN. To determine whether the relay has accumulated sufficient energy, we set a predefined energy threshold for the relay. When the accumulated energy reaches the threshold, the relay splits the received signal power into two parts, one for energy harvesting and the other for information forwarding. If the stored energy at the relay is below the threshold, all the received signal power is accumulated in the relay's battery. By modeling the finite-capacity battery of the relay as a finite-state Markov chain (MC), we derive a closed-form expression for the system throughput of the proposed PS-EA scheme over Nakagami-m fading channels. Numerical results validate our theoretical analysis and show that the proposed PS-EA scheme outperforms the conventional time switching-based energy accumulation (TS-EA) scheme and existing power splitting schemes without energy accumulation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
59,477
2203.14940
Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model
Recently, vision-language pre-training has shown great potential in open-vocabulary object detection, where detectors trained on base classes are devised for detecting new classes. The class text embedding is first generated by feeding prompts to the text encoder of a pre-trained vision-language model. It is then used as the region classifier to supervise the training of a detector. The key element behind the success of this model is the proper prompt, which requires careful word tuning and ingenious design. To avoid laborious prompt engineering, several prompt representation learning methods have been proposed for the image classification task; however, they can only be sub-optimal solutions when applied to the detection task. In this paper, we introduce a novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model. Different from the previous classification-oriented methods, DetPro has two highlights: 1) a background interpretation scheme to include the proposals in the image background in prompt training; 2) a context grading scheme to separate proposals in the image foreground for tailored prompt training. We assemble DetPro with ViLD, a recent state-of-the-art open-world object detector, and conduct experiments on LVIS as well as transfer learning on the Pascal VOC, COCO, and Objects365 datasets. Experimental results show that our DetPro outperforms the baseline ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the novel classes of LVIS. Code and models are available at https://github.com/dyabel/detpro.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,182
2410.21326
Self-Supervised Learning and Opportunistic Inference for Continuous Monitoring of Freezing of Gait in Parkinson's Disease
Parkinson's disease (PD) is a progressive neurological disorder that impacts the quality of life significantly, making in-home monitoring of motor symptoms such as Freezing of Gait (FoG) critical. However, existing symptom monitoring technologies are power-hungry, rely on extensive amounts of labeled data, and operate in controlled settings. These shortcomings limit real-world deployment of the technology. This work presents LIFT-PD, a computationally-efficient self-supervised learning framework for real-time FoG detection. Our method combines self-supervised pre-training on unlabeled data with a novel differential hopping windowing technique to learn from limited labeled instances. An opportunistic model activation module further minimizes power consumption by selectively activating the deep learning module only during active periods. Extensive experimental results show that LIFT-PD achieves a 7.25% increase in precision and 4.4% improvement in accuracy compared to supervised models while using as little as 40% of the labeled training data used for supervised learning. Additionally, the model activation module reduces inference time by up to 67% compared to continuous inference. LIFT-PD paves the way for practical, energy-efficient, and unobtrusive in-home monitoring of PD patients with minimal labeling requirements.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
503,204
2410.20469
Graph Neural Networks on Discriminative Graphs of Words
In light of the recent success of Graph Neural Networks (GNNs) and their ability to perform inference on complex data structures, many studies apply GNNs to the task of text classification. In most previous methods, a heterogeneous graph, containing both word and document nodes, is constructed using the entire corpus and a GNN is used to classify document nodes. In this work, we explore a new Discriminative Graph of Words Graph Neural Network (DGoW-GNN) approach encapsulating both a novel discriminative graph construction and model to classify text. In our graph construction, containing only word nodes and no document nodes, we split the training corpus into disconnected subgraphs according to their labels and weight edges by the pointwise mutual information of the represented words. Our graph construction, for which we provide theoretical motivation, allows us to reformulate the task of text classification as the task of walk classification. We also propose a new model for the graph-based classification of text, which combines a GNN and a sequence model. We evaluate our approach on seven benchmark datasets and find that it is outperformed by several state-of-the-art baseline models. We analyse reasons for this performance difference and hypothesise under which conditions it is likely to change.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
502,824
2407.21029
Data-Driven Abstractions via Binary-Tree Gaussian Processes for Formal Verification
To advance formal verification of stochastic systems against temporal logic requirements for handling unknown dynamics, researchers have been designing data-driven approaches inspired by breakthroughs in the underlying machine learning techniques. As one promising research direction, abstraction-based solutions based on Gaussian process (GP) regression have become popular for their ability to learn a representation of the latent system from data with a quantified error. Results obtained based on this model are then translated to the true system via various methods. In a recent publication, GPs using a so-called binary-tree kernel have demonstrated a polynomial speedup w.r.t. the size of the data compared to their vanilla version, outcompeting all existing sparse GP approximations. Incidentally, the resulting binary-tree Gaussian process (BTGP) is characteristic for its piecewise-constant posterior mean and covariance functions, naturally abstracting the input space into discrete partitions. In this paper, we leverage this natural abstraction of the BTGP for formal verification, eliminating the need for cumbersome abstraction and error quantification procedures. We show that the BTGP allows us to construct an interval Markov chain model of the unknown system with a speedup that is polynomial w.r.t. the size of the abstraction compared to alternative approaches. We provide a delocalized error quantification via a unified formula even when the true dynamics do not live in the function space of the BTGP. This allows us to compute upper and lower bounds on the probability of satisfying reachability specifications that are robust to both aleatoric and epistemic uncertainties.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
true
477,365
2308.07615
Self-supervised Hypergraphs for Learning Multiple World Interpretations
We present a method for learning multiple scene representations given a small labeled set, by exploiting the relationships between such representations in the form of a multi-task hypergraph. We also show how we can use the hypergraph to improve a powerful pretrained VisTransformer model without any additional labeled data. In our hypergraph, each node is an interpretation layer (e.g., depth or segmentation) of the scene. Within each hyperedge, one or several input nodes predict the layer at the output node. Thus, each node could be an input node in some hyperedges and an output node in others. In this way, multiple paths can reach the same node, to form ensembles from which we obtain robust pseudolabels, which allow self-supervised learning in the hypergraph. We test different ensemble models and different types of hyperedges and show superior performance to other multi-task graph models in the field. We also introduce Dronescapes, a large video dataset captured with UAVs in different complex real-world scenes, with multiple representations, suitable for multi-task learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,586
2306.17514
A behaviouristic approach to representing processes and procedures in the OASIS 2 ontology
Foundational ontologies devoted to the effective representation of processes and procedures are not widely investigated at present, thereby limiting the practical adoption of semantic approaches in real scenarios where the precise instructions to follow must be considered. Also, the representation ought to include how agents should carry out the actions associated with the process, whether or not agents are able to perform those actions, the possible roles played as well as the related events. The OASIS ontology provides an established model to capture agents and their interactions but lacks means for representing processes and procedures carried out by agents. This motivates the research presented in this article, which delivers an extension of the OASIS 2 ontology to combine the capabilities for representing agents and their behaviours with the full conceptualization of processes and procedures. The overarching goal is to deliver a foundational OWL ontology that deals with agent planning, reaching a balance between generality and applicability, which is known to be an open challenge.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
376,729
1506.05636
Translational and Scaling Formation Maneuver Control via a Bearing-Based Approach
This paper studies distributed maneuver control of multi-agent formations in arbitrary dimensions. The objective is to control the translation and scale of the formation while maintaining the desired formation pattern. Unlike conventional approaches where the target formation is defined by relative positions or distances, we propose a novel bearing-based approach where the target formation is defined by inter-neighbor bearings. Since the bearings are invariant to the translation and scale of the formation, the bearing-based approach provides a simple solution to the problem of translational and scaling formation maneuver control. Linear formation control laws for double-integrator dynamics are proposed and the global formation stability is analyzed. This paper also studies bearing-based formation control in the presence of practical problems including input disturbances, acceleration saturation, and collision avoidance. The theoretical results are illustrated with numerical simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
44,322
2001.11981
Lifted Reed-Solomon Codes with Application to Batch Codes
Guo, Kopparty and Sudan have initiated the study of error-correcting codes derived by lifting of affine-invariant codes. Lifted Reed-Solomon (RS) codes are defined as the evaluation of polynomials in a vector space over a field by requiring their restriction to every line in the space to be a codeword of the RS code. In this paper, we investigate lifted RS codes and discuss their application to batch codes, a notion introduced in the context of private information retrieval and load-balancing in distributed storage systems. First, we improve the estimate of the code rate of lifted RS codes for lifting parameter $m\ge 3$ and large field size. Second, a new explicit construction of batch codes utilizing lifted RS codes is proposed. For some parameter regimes, our codes have a better trade-off between parameters than previously known batch codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
162,215
2409.18139
Robust optimization and uncertainty quantification in the nonlinear mechanics of an elevator brake system
This paper deals with the nonlinear mechanics of an elevator brake system subjected to uncertainties. A deterministic model that relates the braking force to uncertain parameters is deduced from mechanical equilibrium conditions. In order to take parameter variability into account, a parametric probabilistic approach is employed. In this stochastic formalism, the uncertain parameters are modeled as random variables, with distributions specified by the maximum entropy principle. The uncertainties are propagated by the Monte Carlo method, which provides a detailed statistical characterization of the response. This work also considers the optimal design of the brake system, formulating and solving nonlinear optimization problems, with and without the effects of uncertainty.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
492,121
1802.07370
SufiSent - Universal Sentence Representations Using Suffix Encodings
Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose a method to learn such representations by encoding the suffixes of word sequences in a sentence and training on the Stanford Natural Language Inference (SNLI) dataset. We demonstrate the effectiveness of our approach by evaluating it on the SentEval benchmark, improving on existing approaches on several transfer tasks.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
90,875
2312.07753
Polynomial-based Self-Attention for Table Representation learning
Structured data, which constitutes a significant portion of existing data types, has been a long-standing research topic in the field of machine learning. Various representation learning methods for tabular data have been proposed, ranging from encoder-decoder structures to Transformers. Among these, Transformer-based methods have achieved state-of-the-art performance not only in tabular data but also in various other fields, including computer vision and natural language processing. However, recent studies have revealed that self-attention, a key component of Transformers, can lead to an oversmoothing issue. We show that Transformers for tabular data also face this problem, and to address the problem, we propose a novel matrix polynomial-based self-attention layer as a substitute for the original self-attention layer, which enhances model scalability. In our experiments with three representative table learning models equipped with our proposed layer, we illustrate that the layer effectively mitigates the oversmoothing problem and enhances the representation performance of the existing methods, outperforming the state-of-the-art table representation methods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
415,039
2312.02671
Learning a Sparse Representation of Barron Functions with the Inverse Scale Space Flow
This paper presents a method for finding a sparse representation of Barron functions. Specifically, given an $L^2$ function $f$, the inverse scale space flow is used to find a sparse measure $\mu$ minimising the $L^2$ loss between the Barron function associated to the measure $\mu$ and the function $f$. The convergence properties of this method are analysed in an ideal setting and in the cases of measurement noise and sampling bias. In an ideal setting the objective decreases strictly monotonically in time, converging to a minimizer at rate $\mathcal{O}(1/t)$, and in the case of measurement noise or sampling bias the optimum is achieved up to a multiplicative or additive constant. This convergence is preserved on discretization of the parameter space, and the minimizers on increasingly fine discretizations converge to the optimum on the full parameter space.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
412,967
1902.07370
Learning Transferable Self-attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision
Action recognition in videos has attracted a lot of attention in the past decade. In order to learn robust models, previous methods usually assume videos are trimmed as short sequences and require ground-truth annotations of each video frame/sequence, which is quite costly and time-consuming. In this paper, given only video-level annotations, we propose a novel weakly supervised framework to simultaneously locate action frames as well as recognize actions in untrimmed videos. Our proposed framework consists of two major components. First, for action frame localization, we take advantage of the self-attention mechanism to weight each frame, such that the influence of background frames can be effectively eliminated. Second, considering that there are trimmed videos publicly available and also they contain useful information to leverage, we present an additional module to transfer the knowledge from trimmed videos for improving the classification performance in untrimmed ones. Extensive experiments are conducted on two benchmark datasets (i.e., THUMOS14 and ActivityNet1.3), and experimental results clearly corroborate the efficacy of our method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,966
2207.03620
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
Transformers have quickly shined in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) seems to be challenged by increasingly effective transformer-based models. Very recently, a couple of advanced convolutional models strike back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, i.e. RepLKNet, impressively manages to scale the kernel size to 31x31 with improved performance, the performance starts to saturate as the kernel size continues growing, compared to the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training extreme convolutions larger than 31x31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study ends up with a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61x61 with better performance. Built on this recipe, we propose Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51x51 kernels that can perform on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures like ConvNeXt and RepLKNet, on ImageNet classification as well as a wide range of downstream tasks including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
306,916
2007.05474
A Reinforcement Learning Approach for Fast Frequency Control in Low-Inertia Power Systems
The electric grid is undergoing a major transition from fossil fuel-based power generation to renewable energy sources, typically interfaced to the grid via power electronics. The future power systems are thus expected to face increased control complexity and challenges pertaining to frequency stability due to lower levels of inertia and damping. As a result, the frequency control and development of novel ancillary services is becoming imperative. This paper proposes a data-driven control scheme, based on Reinforcement Learning (RL), for grid-forming Voltage Source Converters (VSCs), with the goal of exploiting their fast response capabilities to provide fast frequency control to the system. A centralized RL-based controller collects generator frequencies and adjusts the VSC power output, in response to a disturbance, to prevent frequency threshold violations. The proposed control scheme is analyzed and its performance evaluated through detailed time-domain simulations of the IEEE 14-bus test system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
186,691
2006.03531
A Meta-Bayesian Model of Intentional Visual Search
We propose a computational model of visual search that incorporates Bayesian interpretations of the neural mechanisms that underlie categorical perception and saccade planning. To enable meaningful comparisons between simulated and human behaviours, we employ a gaze-contingent paradigm that required participants to classify occluded MNIST digits through a window that followed their gaze. The conditional independencies imposed by a separation of time scales in this task are embodied by constraints on the hierarchical structure of our model; planning and decision making are cast as a partially observable Markov Decision Process while proprioceptive and exteroceptive signals are integrated by a dynamic model that facilitates approximate inference on visual information and its latent causes. Our model is able to recapitulate human behavioural metrics such as classification accuracy while retaining a high degree of interpretability, which we demonstrate by recovering subject-specific parameters from observed human behaviour.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
180,340
2108.01338
Position Control and Variable-Height Trajectory Tracking of a Soft Pneumatic Legged Robot
Soft pneumatic legged robots show promise in their ability to traverse a range of different types of terrain, including natural unstructured terrain met in applications like precision agriculture. They can adapt their body morphology to the intricacies of the terrain at hand, thus enabling robust and resilient locomotion. In this paper we capitalize upon recent developments on soft pneumatic legged robots to introduce a closed-loop trajectory tracking control scheme for operation over flat ground. Closed-loop pneumatic actuation feedback is achieved via a compact and portable pneumatic regulation board. Experimental results reveal that our soft legged robot can precisely control its body height and orientation while in quasi-static operation based on a geometric model. The robot can track both straight line and curved trajectories as well as variable-height trajectories. This work lays the basis to enable autonomous navigation for soft legged robots.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
248,999
2210.12183
Weighted Coordinates Poset Block Codes
Given $[n]=\{1,2,\ldots,n\}$, a partial order $\preceq$ on $[n]$, a label map $\pi : [n] \rightarrow \mathbb{N}$ defined by $\pi(i) = k_i$ with $\sum_{i=1}^{n}\pi (i) = N$, the direct sum $ \mathbb{F}_{q}^{k_1} \oplus \mathbb{F}_{q}^{k_2}\oplus \ldots \oplus \mathbb{F}_{q}^{k_n} $ of $ \mathbb{F}_q^N $, and a weight function $w$ on $ \mathbb{F}_q $, we define a poset block metric $d_{(P,w,\pi)}$ on $\mathbb{F}_{q}^{N}$ based on the poset $P=([n],\preceq)$. The metric $d_{(P,w,\pi)}$ is said to be weighted coordinates poset block metric ($(P,w,\pi)$-metric). It extends the weighted coordinates poset metric ($(P,w)$-metric) introduced by L. Panek and J. A. Pinheiro and generalizes the poset block metric ($(P,\pi)$-metric) introduced by M. M. S. Alves et al. We determine the complete weight distribution of a $(P,w,\pi)$-space, thereby obtaining it for $(P,w)$-space, $(P,\pi)$-space, $\pi$-space, and $P$-space as special cases. We obtain the Singleton bound for $(P,w,\pi)$-codes and for $(P,w)$-codes as well. In particular, we re-obtain the Singleton bound for any code with respect to $(P,\pi)$-metric and $P$-metric. Moreover, packing radius and Singleton bound for NRT block codes are found.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
325,617
2107.03648
Deep Learning Based Image Retrieval in the JPEG Compressed Domain
Content-based image retrieval (CBIR) systems in the pixel domain use low-level features, such as colour, texture and shape, to retrieve images. In this context, two types of image representations, i.e. local and global image features, have been studied in the literature. Extracting these features from pixel images and comparing them with images from the database is very time-consuming. Therefore, in recent years, there has been some effort to accomplish image analysis directly in the compressed domain with fewer computations. Furthermore, most of the images in our daily transactions are stored in the JPEG compressed format. Therefore, it would be ideal if we could retrieve features directly from the partially decoded or compressed data and use them for retrieval. Here, we propose a unified model for image retrieval which takes DCT coefficients as input and efficiently extracts global and local features directly in the JPEG compressed domain for accurate image retrieval. The experimental findings indicate that our proposed model performs comparably, in terms of mean average precision, to the current DELG model, which takes RGB features as input, while having faster training and retrieval speed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
245,217
2012.11101
ResizeMix: Mixing Data with Preserved Object Information and True Labels
Data augmentation is a powerful technique to increase the diversity of data, which can effectively improve the generalization ability of neural networks in image recognition tasks. Recent data mixing based augmentation strategies have achieved great success. Especially, CutMix uses a simple but effective method to improve the classifiers by randomly cropping a patch from one image and pasting it on another image. To further promote the performance of CutMix, a series of works explore using the saliency information of the image to guide the mixing. We systematically study the importance of the saliency information for mixing data, and find that the saliency information is not necessary for improving augmentation performance. Furthermore, we find that the cutting based data mixing methods suffer from two problems, label misallocation and missing object information, which cannot be resolved simultaneously. We propose a more effective but very easily implemented method, namely ResizeMix. We mix the data by directly resizing the source image to a small patch and pasting it on another image. The obtained patch preserves more substantial object information compared with conventional cut-based methods. ResizeMix shows evident advantages over CutMix and the saliency-guided methods on both image classification and object detection tasks without additional computation cost, and even outperforms most costly search-based automatic augmentation methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,529
1111.7170
REX: Explaining Relationships between Entity Pairs
Knowledge bases of entities and relations (either constructed manually or automatically) are behind many real world search engines, including those at Yahoo!, Microsoft, and Google. Those knowledge bases can be viewed as graphs with nodes representing entities and edges representing (primary) relationships, and various studies have been conducted on how to leverage them to answer entity seeking queries. Meanwhile, in a complementary direction, analyses over the query logs have enabled researchers to identify entity pairs that are statistically correlated. Such entity relationships are then presented to search users through the "related searches" feature in modern search engines. However, entity relationships thus discovered can often be "puzzling" to the users because why the entities are connected is often indescribable. In this paper, we propose a novel problem called "entity relationship explanation", which seeks to explain why a pair of entities are connected, and solve this challenging problem by integrating the above two complementary approaches, i.e., we leverage the knowledge base to "explain" the connections discovered between entity pairs. More specifically, we present REX, a system that takes a pair of entities in a given knowledge base as input and efficiently identifies a ranked list of relationship explanations. We formally define relationship explanations and analyze their desirable properties. Furthermore, we design and implement algorithms to efficiently enumerate and rank all relationship explanations based on multiple measures of "interestingness." We perform extensive experiments over real web-scale data gathered from DBpedia and a commercial search engine, demonstrating the efficiency and scalability of REX. We also perform user studies to corroborate the effectiveness of explanations generated by REX.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
13,250
2309.04257
A Tutorial on Distributed Optimization for Cooperative Robotics: from Setups and Algorithms to Toolboxes and Research Directions
Several interesting problems in multi-robot systems can be cast in the framework of distributed optimization. Examples include multi-robot task allocation, vehicle routing, target protection and surveillance. While the theoretical analysis of distributed optimization algorithms has received significant attention, its application to cooperative robotics has not been investigated in detail. In this paper, we show how notable scenarios in cooperative robotics can be addressed by suitable distributed optimization setups. Specifically, after a brief introduction on the widely investigated consensus optimization (most suited for data analytics) and on the partition-based setup (matching the graph structure in the optimization), we focus on two distributed settings modeling several scenarios in cooperative robotics, i.e., the so-called constraint-coupled and aggregative optimization frameworks. For each one, we consider use-case applications, and we discuss tailored distributed algorithms with their convergence properties. Then, we revise state-of-the-art toolboxes allowing for the implementation of distributed schemes on real networks of robots without central coordinators. For each use case, we discuss their implementation in these toolboxes and provide simulations and real experiments on networks of heterogeneous robots.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
390,662
1206.4952
Space-Efficient Sampling from Social Activity Streams
In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir-based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
false
16,758
2309.08739
Concept explainability for plant diseases classification
Plant diseases remain a considerable threat to food security and agricultural sustainability. Rapid and early identification of these diseases has become a significant concern motivating several studies to rely on the increasing global digitalization and the recent advances in computer vision based on deep learning. In fact, plant disease classification based on deep convolutional neural networks has shown impressive performance. However, these methods have yet to be adopted globally due to concerns regarding their robustness, transparency, and the lack of explainability compared with their human expert counterparts. Methods such as saliency-based approaches associating the network output to perturbations of the input pixels have been proposed to give insights into these algorithms. Still, they are not easily comprehensible and not intuitive for human users and are threatened by bias. In this work, we deploy a method called Testing with Concept Activation Vectors (TCAV) that shifts the focus from pixels to user-defined concepts. To the best of our knowledge, our paper is the first to employ this method in the field of plant disease classification. Important concepts such as color, texture and disease-related concepts were analyzed. The results suggest that concept-based explanation methods can significantly benefit automated plant disease identification.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
392,299
2403.08943
LMStyle Benchmark: Evaluating Text Style Transfer for Chatbots
Since the breakthrough of ChatGPT, large language models (LLMs) have garnered significant attention in the research community. With the development of LLMs, the question of text style transfer for conversational models has emerged as a natural extension, where chatbots may possess their own styles or even characters. However, standard evaluation metrics have not yet been established for this new setting. This paper aims to address this issue by proposing the LMStyle Benchmark, a novel evaluation framework applicable to chat-style text style transfer (C-TST), which can measure the quality of style transfer for LLMs in an automated and scalable manner. In addition to conventional style strength metrics, LMStyle Benchmark further considers a novel aspect of metrics called appropriateness, a high-level metric that takes account of coherence, fluency, and other implicit factors without the aid of reference samples. Our experiments demonstrate that the new evaluation methods introduced by LMStyle Benchmark have a higher correlation with human judgments in terms of appropriateness. Based on LMStyle Benchmark, we present a comprehensive list of evaluation results for popular LLMs, including LLaMA, Alpaca, and Vicuna, reflecting their stylistic properties, such as formality and sentiment strength, along with their appropriateness.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
437,551
2406.00507
Prompt Chaining or Stepwise Prompt? Refinement in Text Summarization
Large language models (LLMs) have demonstrated the capacity to improve summary quality by mirroring a human-like iterative process of critique and refinement starting from the initial draft. Two strategies are designed to perform this iterative process: Prompt Chaining and Stepwise Prompt. Prompt chaining orchestrates the drafting, critiquing, and refining phases through a series of three discrete prompts, while the stepwise prompt integrates these phases within a single prompt. However, the relative effectiveness of the two methods has not been extensively studied. This paper is dedicated to examining and comparing these two methods in the context of text summarization to ascertain which method stands out as the most effective. Experimental results show that the prompt chaining method can produce a more favorable outcome. According to our various experiments, this might be because the stepwise prompt produces a simulated refinement process. Since refinement is adaptable to diverse tasks, our conclusions have the potential to be extrapolated to other applications, thereby offering insights that may contribute to the broader development of LLMs.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
459,889
2102.13143
Robust Pollen Imagery Classification with Generative Modeling and Mixup Training
Deep learning approaches have shown great success in image classification tasks and can aid greatly towards the fast and reliable classification of pollen grain aerial imagery. However, oftentimes deep learning methods in the setting of natural images can suffer generalization problems and yield poor performance on unseen test distributions. In this work, we present a robust deep learning framework that can generalize well for pollen grain aerobiological imagery classification. We develop a convolutional neural network-based pollen grain classification approach and combine some of the best practices in deep learning for better generalization. In addition to commonplace approaches like data augmentation and weight regularization, we utilize implicit regularization methods like manifold mixup to allow learning of smoother decision boundaries. We also make use of proven state-of-the-art architectural choices like EfficientNet convolutional neural networks. Inspired by the success of generative modeling with variational autoencoders, we train models with a richer learning objective which can allow the model to focus on the relevant parts of the image. Finally, we create an ensemble of neural networks for the robustness of the test set predictions. Based on our experiments, we show improved generalization performance as measured with a weighted F1-score with the aforementioned approaches. The proposed approach earned fourth place in the final rankings in the ICPR-2020 Pollen Grain Classification Challenge, with a 0.972578 weighted F1 score, 0.950828 macro average F1 score, and 0.972877 recognition accuracy.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
221,959
1304.1097
A Randomized Approximation Algorithm of Logic Sampling
In recent years, researchers in decision analysis and artificial intelligence (AI) have used Bayesian belief networks to build models of expert opinion. Using standard methods drawn from the theory of computational complexity, workers in the field have shown that the problem of exact probabilistic inference on belief networks almost certainly requires exponential computation in the worst case [3]. We have previously described a randomized approximation scheme, called BN-RAS, for computation on belief networks [1, 2, 4]. We gave precise analytic bounds on the convergence of BN-RAS and showed how to trade running time for accuracy in the evaluation of posterior marginal probabilities. We now extend our previous results and demonstrate the generality of our framework by applying similar mathematical techniques to the analysis of convergence for logic sampling [7], an alternative simulation algorithm for probabilistic inference.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,450
2412.12230
The impact of AI on engineering design procedures for dynamical systems
Artificial intelligence (AI) is driving transformative changes across numerous fields, revolutionizing conventional processes and creating new opportunities for innovation. The development of mechatronic systems is undergoing a similar transformation. Over the past decade, modeling, simulation, and optimization techniques have become integral to the design process, paving the way for the adoption of AI-based methods. In this paper, we examine the potential for integrating AI into the engineering design process, using the V-model from the VDI guideline 2206, considered the state-of-the-art in product design, as a foundation. We identify and classify AI methods based on their suitability for specific stages within the engineering product design workflow. Furthermore, we present a series of application examples where AI-assisted design has been successfully implemented by the authors. These examples, drawn from research projects within the DFG Priority Program \emph{SPP~2353: Daring More Intelligence - Design Assistants in Mechanics and Dynamics}, showcase a diverse range of applications across mechanics and mechatronics, including areas such as acoustics and robotics.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
517,809