id (string, 9–16 chars) | title (string, 4–278 chars) | abstract (string, 3–4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2410.00699 | Investigating the Impact of Model Complexity in Large Language Models | Large Language Models (LLMs) based on the pre-training and fine-tuning paradigm have become pivotal in solving natural language processing tasks, consistently achieving state-of-the-art performance. Nevertheless, the theoretical understanding of how model complexity influences fine-tuning performance remains challenging and has not been well explored yet. In this paper, we focus on autoregressive LLMs and propose to employ Hidden Markov Models (HMMs) to model them. Based on the HMM modeling, we investigate the relationship between model complexity and the generalization capability in downstream tasks. Specifically, we consider a popular tuning paradigm for downstream tasks, head tuning, where all pre-trained parameters are frozen and only individual heads are trained atop pre-trained LLMs. Our theoretical analysis reveals that the risk initially increases and then decreases with rising model complexity, showcasing a "double descent" phenomenon. In this case, the initial "descent" is degenerate, signifying that the "sweet spot" where bias and variance are balanced occurs when the model size is zero. Obtaining the conclusion presented in this study confronts several challenges, primarily revolving around effectively modeling autoregressive LLMs and downstream tasks, as well as conducting a comprehensive risk analysis for multivariate regression. Our research is substantiated by experiments conducted on data generated from HMMs, which provide empirical support and alignment with our theoretical insights. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,466 |
2202.01108 | Learning to reason about and to act on physical cascading events | Reasoning and interacting with dynamic environments is a fundamental problem in AI, but it becomes extremely challenging when actions can trigger cascades of cross-dependent events. We introduce a new supervised learning setup called {\em Cascade} where an agent is shown a video of a physically simulated dynamic scene, and is asked to intervene and trigger a cascade of events, such that the system reaches a "counterfactual" goal. For instance, the agent may be asked to "Make the blue ball hit the red one, by pushing the green ball". The agent's intervention is drawn from a continuous space, and cascades of events make the dynamics highly non-linear. We combine semantic tree search with an event-driven forward model and devise an algorithm that learns to search in semantic trees in continuous spaces. We demonstrate that our approach learns to effectively follow instructions to intervene in previously unseen complex scenes. It can also reason about alternative outcomes when provided with an observed cascade of events. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 278,369 |
1508.02865 | Maximum Entropy Vector Kernels for MIMO system identification | Recent contributions have framed linear system identification as a nonparametric regularized inverse problem. Relying on $\ell_2$-type regularization which accounts for the stability and smoothness of the impulse response to be estimated, these approaches have been shown to be competitive w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy arguments, we derive a new $\ell_2$ penalty deriving from a vector-valued kernel; to do so we exploit the structure of the Hankel matrix, thus controlling at the same time complexity, measured by the McMillan degree, stability and smoothness of the identified models. As a special case we recover the nuclear norm penalty on the squared block Hankel matrix. In contrast with previous literature on reweighted nuclear norm penalties, our kernel is described by a small number of hyper-parameters, which are iteratively updated through marginal likelihood maximization; constraining the structure of the kernel acts as a (hyper)regularizer which helps control the effective degrees of freedom of our estimator. To optimize the marginal likelihood we adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be significantly computationally cheaper than other first and second order off-the-shelf optimization methods. The paper also contains an extensive comparison with many state-of-the-art methods on several Monte-Carlo studies, which confirms the effectiveness of our procedure. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 45,950 |
1907.04839 | Fast geodesic shooting for landmark matching using CUDA | Landmark matching via geodesic shooting is a prerequisite task for numerous registration based applications in biomedicine. Geodesic shooting has been developed as one solution approach and formulates the diffeomorphic registration as an optimal control problem under the Hamiltonian framework. In this framework, with landmark positions q0 fixed, the problem solely depends on the initial momentum p0 and evolves through time steps according to a set of constraint equations. Given an initial p0, the algorithm flows q and p forward through time steps, calculates a loss based on point-set mismatch and kinetic energy, back-propagates through time to calculate the gradient on p0, and updates it. In the forward and backward pass, a pair-wise kernel on landmark points K and additional intermediate terms have to be calculated and marginalized, leading to O(N^2) computational complexity, N being the number of points to be registered. For medical image applications, N may be in the range of thousands, rendering this operation computationally expensive. In this work we propose a CUDA implementation based on shared memory reduction. Our implementation achieves nearly 2 orders of magnitude speedup compared to a naive CPU-based implementation, in addition to improved numerical accuracy as well as better registration results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 138,213 |
1710.10774 | Sequence-to-Sequence ASR Optimization via Reinforcement Learning | Despite the success of sequence-to-sequence approaches in automatic speech recognition (ASR) systems, the models still suffer from several problems, mainly due to the mismatch between the training and inference conditions. In the sequence-to-sequence architecture, the model is trained to predict the grapheme of the current time-step given the input of speech signal and the ground-truth grapheme history of the previous time-steps. However, it remains unclear how well the model approximates real-world speech during inference. Thus, generating the whole transcription from scratch based on previous predictions is complicated and errors can propagate over time. Furthermore, the model is optimized to maximize the likelihood of training data instead of error rate evaluation metrics that actually quantify recognition quality. This paper presents an alternative strategy for training sequence-to-sequence ASR models by adopting the idea of reinforcement learning (RL). Unlike the standard training scheme with maximum likelihood estimation, our proposed approach utilizes the policy gradient algorithm. We can (1) sample the whole transcription based on the model's prediction in the training process and (2) directly optimize the model with negative Levenshtein distance as the reward. Experimental results demonstrate that we significantly improved the performance compared to a model trained only with maximum likelihood estimation. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 83,470 |
2404.08567 | CATP: Cross-Attention Token Pruning for Accuracy Preserved Multimodal Model Inference | In response to the rising interest in large multimodal models, we introduce Cross-Attention Token Pruning (CATP), a precision-focused token pruning method. Our approach leverages cross-attention layers in multimodal models, exemplified by BLIP-2, to extract valuable information for token importance determination. CATP employs a refined voting strategy across model heads and layers. In evaluations, CATP achieves up to 12.1X higher accuracy compared to existing token pruning methods, addressing the trade-off between computational efficiency and model precision. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 446,295 |
1709.07121 | Discrete-Time Polar Opinion Dynamics with Susceptibility | This paper considers a discrete-time opinion dynamics model in which each individual's susceptibility to being influenced by others is dependent on her current opinion. We assume that the social network has time-varying topology and that the opinions are scalars on a continuous interval. We first propose a general opinion dynamics model based on the DeGroot model, with a general function to describe the functional dependence of each individual's susceptibility on her own opinion, and show that this general model is analogous to the Friedkin-Johnsen model, which assumes a constant susceptibility for each individual. We then consider two specific functions in which the individual's susceptibility depends on the \emph{polarity} of her opinion, and provide motivating social examples. First, we consider stubborn positives, who have reduced susceptibility if their opinions are at one end of the interval and increased susceptibility if their opinions are at the opposite end. A court jury is used as a motivating example. Second, we consider stubborn neutrals, who have reduced susceptibility when their opinions are in the middle of the spectrum, and our motivating examples are social networks discussing established social norms or institutionalized behavior. For each specific susceptibility model, we establish the initial and graph topology conditions in which consensus is reached, and develop necessary and sufficient conditions on the initial conditions for the final consensus value to be at either extreme of the opinion interval. Simulations are provided to show the effects of the susceptibility function when compared to the DeGroot model. | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 81,227 |
1603.03357 | Cognitive Green Radio for Energy-Aware Communications | In 5G networks, the number of connected devices, data rate and data volume per area, as well as the variety of QoS requirements, will attain unprecedented scales. The achievement of these goals will rely on new technologies and disruptive changes in network architecture and node design. Energy efficiency is believed to play a key role in complementing the 5G technologies and optimizing their deployment, dynamic configuration and management [1]. Within the framework of green communications and networks, especially for next-generation green cellular radio access networks, the GREAT (Green Cognitive Radio for Energy-Aware wireless communication Technologies evolution) initiative, a CominLabs Excellence Center (Laboratoire d'Excellence) and Universit\'e Europ\'eenne de Bretagne (UEB) project, has mainly addressed the fundamental issues of energy efficiency from various perspectives and angles, leveraging on cognitive techniques, at the networking level as well as at the physical layer level. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 53,106 |
2105.03767 | Aerospace Sliding Mode Control Toolbox: Relative Degree Approach with Resource Prospector Lander and Launch Vehicle Case Studies | Conventional Sliding mode control and observation techniques are widely used in aerospace applications, including aircrafts, UAVs, launch vehicles, missile interceptors, and hypersonic missiles. This work is dedicated to creating a MATLAB-based sliding mode controller design and simulation software toolbox that aims to support aerospace vehicle applications. An architecture of the aerospace sliding mode control toolbox (SMC Aero) using the relative degree approach is proposed. The SMC Aero libraries include 1st order sliding mode control (1-SMC), second order sliding mode control (2-SMC), higher order sliding mode (HOSM) control (either fixed gain or adaptive), as well as higher order sliding mode differentiators. The efficacy of the SMC Aero toolbox is confirmed in two case studies: controlling and simulating resource prospector lander (RPL) soft landing on the Moon and launch vehicle (LV) attitude control in ascent mode. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 234,257 |
1901.10461 | The effect of a physical robot on vocabulary learning | This study investigates the effect of a physical robot taking the role of a teacher or exercise partner in a language learning exercise. In order to investigate this, an application was developed enabling a second-language vocabulary learning exercise in three different conditions. In the first condition the learner would receive tutoring from a disembodied voice, in the second condition the tutor would be embodied by an animated avatar on a computer screen, and in the final condition the tutor was a physical robotic head with a 3D animated face mask. A Russian language vocabulary exercise with 15 subjects was conducted. None of the subjects reported any Russian language skills prior to the exercises. Each subject was taught a set of 9 words in each of the three conditions during a practice phase, and was then asked to recall the words in a test phase. Results show that the recall of the words practiced with the physical robot was significantly higher than that of the words practiced with the avatar on the screen or with the disembodied voice. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 120,026 |
2403.01491 | Ultimate linear block and convolutional codes | Codes considered as structures within unit schemes greatly extend the availability of linear block and convolutional codes and allow the construction of these codes to required length, rate, distance and type. Properties of a code emanate from properties of the unit from which it was derived. Orthogonal units, units in group rings, Fourier/Vandermonde units and related units are used to construct and analyse linear block and convolutional codes and to construct these to predefined length, rate, distance and type. Self-dual, dual containing, quantum error-correcting and linear complementary dual codes are constructed for both linear block and convolutional codes. Low density parity check linear block and convolutional codes are constructed with no short cycles in the control matrix. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 434,448 |
2012.07483 | On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation | We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature to ensure robustness against noisy data, but also to find models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the $\ell_1$-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the $\ell_1$-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions one of which is the $\ell_1$-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples - including neural network training - demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 211,470 |
2410.12208 | Vehicle Localization in GPS-Denied Scenarios Using Arc-Length-Based Map Matching | Automated driving systems face challenges in GPS-denied situations. To address this issue, kinematic dead reckoning is implemented using measurements from the steering angle, steering rate, yaw rate, and wheel speed sensors onboard the vehicle. However, dead reckoning methods suffer from drift. This paper provides an arc-length-based map matching method that uses a digital 2D map of the scenario in order to correct drift in the dead reckoning estimate. The kinematic model's prediction is used to introduce a temporal notion to the spatial information available in the map data. Results show reliable improvement in drift for all GPS-denied scenarios tested in this study. This innovative approach ensures that automated vehicles can maintain continuous and reliable navigation, significantly enhancing their safety and operational reliability in environments where GPS signals are compromised or unavailable. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 498,905 |
2206.09766 | Quantitative CT texture-based method to predict diagnosis and prognosis of fibrosing interstitial lung disease patterns | Purpose: To utilize high-resolution quantitative CT (QCT) imaging features for prediction of diagnosis and prognosis in fibrosing interstitial lung diseases (ILD). Approach: 40 ILD patients (20 usual interstitial pneumonia (UIP), 20 non-UIP pattern ILD) were classified by expert consensus of 2 radiologists and followed for 7 years. Clinical variables were recorded. Following segmentation of the lung field, a total of 26 texture features were extracted using a lattice-based approach (TM model). The TM model was compared with the previously proposed histogram-based model (HM) for their abilities to classify UIP vs non-UIP. For prognostic assessment, survival analysis was performed comparing the expert diagnostic labels versus TM metrics. Results: In the classification analysis, the TM model outperformed the HM method with AUC of 0.70. While survival curves of UIP vs non-UIP expert labels in Cox regression analysis were not statistically different, TM QCT features allowed statistically significant partition of the cohort. Conclusions: TM model outperformed HM model in distinguishing UIP from non-UIP patterns. Most importantly, TM allows for partitioning of the cohort into distinct survival groups, whereas expert UIP vs non-UIP labeling does not. QCT TM models may improve diagnosis of ILD and offer more accurate prognostication, better guiding patient management. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,682 |
2407.06778 | A BERT-based Empirical Study of Privacy Policies' Compliance with GDPR | Since its implementation in May 2018, the General Data Protection Regulation (GDPR) has prompted businesses to revisit and revise their data handling practices to ensure compliance. The privacy policy, which serves as the primary means of informing users about their privacy rights and the data practices of companies, has been significantly updated by numerous businesses post-GDPR implementation. However, many privacy policies remain packed with technical jargon, lengthy explanations, and vague descriptions of data practices and user rights. This makes it a challenging task for users and regulatory authorities to manually verify the GDPR compliance of these privacy policies. In this study, we aim to address the challenge of compliance analysis between GDPR (Article 13) and privacy policies for 5G networks. We manually collected privacy policies from almost 70 different 5G MNOs, and we utilized an automated BERT-based model for classification. We show that an encouraging 51$\%$ of companies demonstrate a strong adherence to GDPR. In addition, we present the first study that provides current empirical evidence on the readability of privacy policies for 5G networks. We adopted a readability analysis toolset that incorporates various established readability metrics. The findings empirically show that the readability of the majority of current privacy policies remains a significant challenge. Hence, 5G providers need to invest considerable effort into revising these documents to enhance both their utility and the overall user experience. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 471,529 |
2210.08868 | Cerebrovascular Segmentation via Vessel Oriented Filtering Network | Accurate cerebrovascular segmentation from Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) is of great significance in diagnosis and treatment of cerebrovascular pathology. Due to the complexity and topology variability of blood vessels, complete and accurate segmentation of vascular network is still a challenge. In this paper, we propose a Vessel Oriented Filtering Network (VOF-Net) which embeds domain knowledge into the convolutional neural network. We design oriented filters for blood vessels according to vessel orientation field, which is obtained by orientation estimation network. Features extracted by oriented filtering are injected into segmentation network, so as to make use of the prior information that the blood vessels are slender and curved tubular structures. Experimental results on datasets of CTA and MRA show that the proposed method is effective for vessel segmentation, and embedding the specific vascular filter improves the segmentation performance. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,329 |
2212.02941 | Safe Imitation Learning of Nonlinear Model Predictive Control for Flexible Robots | Flexible robots may overcome some of the industry's major challenges, such as enabling intrinsically safe human-robot collaboration and achieving a higher payload-to-mass ratio. However, controlling flexible robots is complicated due to their complex dynamics, which include oscillatory behavior and a high-dimensional state space. Nonlinear model predictive control (NMPC) offers an effective means to control such robots, but its significant computational demand often limits its application in real-time scenarios. To enable fast control of flexible robots, we propose a framework for a safe approximation of NMPC using imitation learning and a predictive safety filter. Our framework significantly reduces computation time while incurring a slight loss in performance. Compared to NMPC, our framework shows more than an eightfold improvement in computation time when controlling a three-dimensional flexible robot arm in simulation, all while guaranteeing safety constraints. Notably, our approach outperforms state-of-the-art reinforcement learning methods. The development of fast and safe approximate NMPC holds the potential to accelerate the adoption of flexible robots in industry. The project code is available at: tinyurl.com/anmpc4fr | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 334,943 |
2406.14947 | LiCS: Navigation using Learned-imitation on Cluttered Space | In this letter, we propose a robust and fast navigation system in a narrow indoor environment for UGV (Unmanned Ground Vehicle) using 2D LiDAR and odometry. We used behavior cloning with a Transformer neural network to learn the optimization-based baseline algorithm. We inject Gaussian noise during expert demonstration to increase the robustness of the learned policy. We evaluate the performance of LiCS using both simulation and hardware experiments. It outperforms all other baselines in terms of navigation performance and can maintain its robust performance even in highly cluttered environments. During the hardware experiments, LiCS can maintain safe navigation at a maximum speed of $1.5\ m/s$. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 466,548 |
2005.01979 | Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids | We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online. The goal is to minimize the overall cost under a real-time pricing scheme. While previous works have introduced centralized approaches in which the scheduling algorithm has full observability, we propose the formulation of a smart grid environment as a Markov game. Each household is a decentralized agent with partial observability, which allows scalability and privacy-preservation in a realistic setting. The grid operator produces a price signal that varies with the energy demand. We propose an extension to a multi-agent, deep actor-critic algorithm to address partial observability and the perceived non-stationarity of the environment from the agent's viewpoint. This algorithm learns a centralized critic that coordinates training of decentralized agents. Our approach thus uses centralized learning but decentralized execution. Simulation results show that our online deep reinforcement learning method can reduce both the peak-to-average ratio of total energy consumed and the cost of electricity for all households based purely on instantaneous observations and a price signal. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 175,725 |
2309.10298 | Learning Orbitally Stable Systems for Diagrammatically Teaching | Diagrammatic Teaching is a paradigm for robots to acquire novel skills, whereby the user provides 2D sketches over images of the scene to shape the robot's motion. In this work, we tackle the problem of teaching a robot to approach a surface and then follow cyclic motion on it, where the cycle of the motion can be arbitrarily specified by a single user-provided sketch over an image from the robot's camera. Accordingly, we contribute the Stable Diffeomorphic Diagrammatic Teaching (SDDT) framework. SDDT models the robot's motion as an Orbitally Asymptotically Stable (O.A.S.) dynamical system that learns to stabilize based on a single diagrammatic sketch provided by the user. This is achieved by applying a \emph{diffeomorphism}, i.e. a differentiable and invertible function, to morph a known O.A.S. system. The parameterised diffeomorphism is then optimised with respect to the Hausdorff distance between the limit cycle of our modelled system and the sketch, to produce the desired robot motion. We provide novel theoretical insight into the behaviour of the optimised system and also empirically evaluate SDDT, both in simulation and on a quadruped with a mounted 6-DOF manipulator. Results show that we can diagrammatically teach complex cyclic motion patterns with a high degree of accuracy. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 392,949 |
2410.09902 | Multi class activity classification in videos using Motion History Image generation | Human action recognition has been a topic of interest across multiple fields ranging from security to entertainment systems. Tracking the motion and identifying the action being performed on a real time basis is necessary for critical security systems. In entertainment, especially gaming, the need for immediate responses for actions and gestures are paramount for the success of that system. We show that the Motion History Image (MHI) is a well-established framework for capturing temporal and activity information in multi-dimensional detail, enabling various use cases including classification. We utilize MHI to produce sample data to train a classifier and demonstrate its effectiveness for action classification across six different activities in a single multi-action video. We analyze the classifier performance and identify use cases where MHI struggles to generate the appropriate activity image and discuss mechanisms and future work to overcome those limitations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 497,821 |
2312.07425 | Deep Internal Learning: Deep Learning from a Single Input | Deep learning, in general, focuses on training a neural network from large labeled datasets. Yet, in many cases there is value in training a network just from the input at hand. This is particularly relevant in many signal and image processing problems where training data is scarce and diversity is large on the one hand, and on the other, there is a lot of structure in the data that can be exploited. Using this information is the key to deep internal-learning strategies, which may involve training a network from scratch using a single input or adapting an already trained network to a provided input example at inference time. This survey paper aims at covering deep internal-learning techniques that have been proposed in the past few years for these two important directions. While our main focus will be on image processing problems, most of the approaches that we survey are derived for general signals (vectors with recurring patterns that can be distinguished from noise) and are therefore applicable to other modalities. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 414,921 |
1902.10182 | Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search | Target search with unmanned aerial vehicles (UAVs) is a problem relevant to many scenarios, e.g., search and rescue (SaR). However, a key challenge is planning paths for maximal search efficiency given flight time constraints. To address this, we propose the Obstacle-aware Adaptive Informative Path Planning (OA-IPP) algorithm for target search in cluttered environments using UAVs. Our approach leverages a layered planning strategy using a Gaussian Process (GP)-based model of target occupancy to generate informative paths in continuous 3D space. Within this framework, we introduce an adaptive replanning scheme which allows us to trade off between information gain, field coverage, sensor performance, and collision avoidance for efficient target detection. Extensive simulations show that our OA-IPP method performs better than state-of-the-art planners, and we demonstrate its application in a realistic urban SaR scenario. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 122,603 |
2210.05156 | Task-Aware Specialization for Efficient and Robust Dense Retrieval for
Open-Domain Question Answering | Given their effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular. Specifically, the de-facto architecture for open-domain question answering uses two isomorphic encoders that are initialized from the same pretrained model but separately parameterized for questions and passages. This bi-encoder architecture is parameter-inefficient in that there is no parameter sharing between encoders. Further, recent studies show that such dense retrievers underperform BM25 in various settings. We thus propose a new architecture, Task-aware Specialization for dense Retrieval (TASER), which enables parameter sharing by interleaving shared and specialized blocks in a single encoder. Our experiments on five question answering datasets show that TASER can achieve superior accuracy, surpassing BM25, while using about 60% of the parameters of bi-encoder dense retrievers. In out-of-domain evaluations, TASER is also empirically more robust than bi-encoder dense retrievers. Our code is available at https://github.com/microsoft/taser. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 322,726 |
2106.03033 | Graph Belief Propagation Networks | With the wide-spread availability of complex relational data, semi-supervised node classification in graphs has become a central machine learning problem. Graph neural networks are a recent class of easy-to-train and accurate methods for this problem that map the features in the neighborhood of a node to its label, but they ignore label correlation during inference and their predictions are difficult to interpret. On the other hand, collective classification is a traditional approach based on interpretable graphical models that explicitly model label correlations. Here, we introduce a model that combines the advantages of these two approaches, where we compute the marginal probabilities in a conditional random field, similar to collective classification, and the potentials in the random field are learned through end-to-end training, akin to graph neural networks. In our model, potentials on each node only depend on that node's features, and edge potentials are learned via a coupling matrix. This structure enables simple training with interpretable parameters, scales to large networks, naturally incorporates training labels at inference, and is often more accurate than related approaches. Our approach can be viewed as either an interpretable message-passing graph neural network or a collective classification method with higher capacity and modernized training. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,139 |
2412.20796 | FastCHGNet: Training one Universal Interatomic Potential to 1.5 Hours
with 32 GPUs | Graph neural network universal interatomic potentials (GNN-UIPs) have demonstrated remarkable generalization and transfer capabilities in material discovery and property prediction. These models can accelerate molecular dynamics (MD) simulation by several orders of magnitude while maintaining \textit{ab initio} accuracy, making them a promising new paradigm in material simulations. One notable example is Crystal Hamiltonian Graph Neural Network (CHGNet), pretrained on the energies, forces, stresses, and magnetic moments from the MPtrj dataset, representing a state-of-the-art GNN-UIP model for charge-informed MD simulations. However, training the CHGNet model is time-consuming(8.3 days on one A100 GPU) for three reasons: (i) requiring multi-layer propagation to reach more distant atom information, (ii) requiring second-order derivatives calculation to finish weights updating and (iii) the implementation of reference CHGNet does not fully leverage the computational capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three contributions: Firstly, we design innovative Force/Stress Readout modules to decompose Force/Stress prediction. Secondly, we adopt massive optimizations such as kernel fusion, redundancy bypass, etc, to exploit GPU computation power sufficiently. Finally, we extend CHGNet to support multiple GPUs and propose a load-balancing technique to enhance GPU utilization. Numerical results show that FastCHGNet reduces memory footprint by a factor of 3.59. The final training time of FastCHGNet can be decreased to \textbf{1.53 hours} on 32 GPUs without sacrificing model accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 521,358 |
1801.03562 | Discrete symbolic optimization and Boltzmann sampling by continuous
neural dynamics: Gradient Symbolic Computation | Gradient Symbolic Computation is proposed as a means of solving discrete global optimization problems using a neurally plausible continuous stochastic dynamical system. Gradient symbolic dynamics involves two free parameters that must be adjusted as a function of time to obtain the global maximizer at the end of the computation. We provide a summary of what is known about the GSC dynamics for special cases of settings of the parameters, and also establish that there is a schedule for the two parameters for which convergence to the correct answer occurs with high probability. These results put the empirical results already obtained for GSC on a sound theoretical footing. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 88,114 |
1210.4749 | An Auction Approach to Distributed Power Allocation for Multiuser
Cooperative Networks | This paper studies a wireless network where multiple users cooperate with each other to improve the overall network performance. Our goal is to design an optimal distributed power allocation algorithm that enables user cooperation, in particular, to guide each user on the decision of transmission mode selection and relay selection. Our algorithm has the nice interpretation of an auction mechanism with multiple auctioneers and multiple bidders. Specifically, in our proposed framework, each user acts as both an auctioneer (seller) and a bidder (buyer). Each auctioneer determines its trading price and allocates power to bidders, and each bidder chooses the demand from each auctioneer. By following the proposed distributed algorithm, each user determines how much power to reserve for its own transmission, how much power to purchase from other users, and how much power to contribute for relaying the signals of others. We derive the optimal bidding and pricing strategies that maximize the weighted sum rates of the users. Extensive simulations are carried out to verify our proposed approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 19,158 |
2502.08646 | Poly-Autoregressive Prediction for Modeling Interactions | We introduce a simple framework for predicting the behavior of an agent in multi-agent settings. In contrast to autoregressive (AR) tasks, such as language processing, our focus is on scenarios with multiple agents whose interactions are shaped by physical constraints and internal motivations. To this end, we propose Poly-Autoregressive (PAR) modeling, which forecasts an ego agent's future behavior by reasoning about the ego agent's state history and the past and current states of other interacting agents. At its core, PAR represents the behavior of all agents as a sequence of tokens, each representing an agent's state at a specific timestep. With minimal data pre-processing changes, we show that PAR can be applied to three different problems: human action forecasting in social situations, trajectory prediction for autonomous vehicles, and object pose forecasting during hand-object interaction. Using a small proof-of-concept transformer backbone, PAR outperforms AR across these three scenarios. The project website can be found at https://neerja.me/PAR/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 533,099 |
1610.04576 | Kernel Alignment Inspired Linear Discriminant Analysis | Kernel alignment measures the degree of similarity between two kernels. In this paper, inspired by kernel alignment, we propose a new Linear Discriminant Analysis (LDA) formulation, kernel alignment LDA (kaLDA). We first define two kernels, data kernel and class indicator kernel. The problem is to find a subspace to maximize the alignment between subspace-transformed data kernel and class indicator kernel. Surprisingly, the kernel alignment induced kaLDA objective function is very similar to classical LDA and can be expressed using between-class and total scatter matrices. This can be extended to multi-label data. We use a Stiefel-manifold gradient descent algorithm to solve this problem. We perform experiments on 8 single-label and 6 multi-label data sets. Results show that kaLDA has very good performance on many single-label and multi-label problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 62,408 |
2501.12690 | Growth strategies for arbitrary DAG neural architectures | Deep learning has shown impressive results obtained at the cost of training huge neural networks. However, the larger the architecture, the higher the computational, financial, and environmental costs during training and inference. We aim at reducing both training and inference durations. We focus on Neural Architecture Growth, which can increase the size of a small model when needed, directly during training using information from the backpropagation. We expand existing work and freely grow neural networks in the form of any Directed Acyclic Graph by reducing expressivity bottlenecks in the architecture. We explore strategies to reduce excessive computations and steer network growth toward more parameter-efficient architectures. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 526,406 |
2306.05949 | Evaluating the Social Impact of Generative AI Systems in Systems and
Society | Generative AI systems across modalities, ranging from text (including code), image, audio, and video, have broad social impacts, but there is no official standard for means of evaluating those impacts or for which impacts should be evaluated. In this paper, we present a guide that moves toward a standard approach in evaluating a base generative AI system for any modality in two overarching categories: what can be evaluated in a base system independent of context and what can be evaluated in a societal context. Importantly, this refers to base systems that have no predetermined application or deployment context, including a model itself, as well as system components, such as training data. Our framework for a base system defines seven categories of social impact: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. Suggested methods for evaluation apply to listed generative modalities and analyses of the limitations of existing evaluations serve as a starting point for necessary investment in future evaluations. We offer five overarching categories for what can be evaluated in a broader societal context, each with its own subcategories: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. Each subcategory includes recommendations for mitigating harm. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 372,384 |
2501.10160 | CSSDM Ontology to Enable Continuity of Care Data Interoperability | The rapid advancement of digital technologies and recent global pandemic scenarios have led to a growing focus on how these technologies can enhance healthcare service delivery and workflow to address crises. Action plans that consolidate existing digital transformation programs are being reviewed to establish core infrastructure and foundations for sustainable healthcare solutions. Reforming health and social care to personalize home care, for example, can help avoid treatment in overcrowded acute hospital settings and improve the experiences and outcomes for both healthcare professionals and service users. In this information-intensive domain, addressing the interoperability challenge through standards-based roadmaps is crucial for enabling effective connections between health and social care services. This approach facilitates safe and trustworthy data workflows between different healthcare system providers. In this paper, we present a methodology for extracting, transforming, and loading data through a semi-automated process using a Common Semantic Standardized Data Model (CSSDM) to create personalized healthcare knowledge graph (KG). The CSSDM is grounded in the formal ontology of ISO 13940 ContSys and incorporates FHIR-based specifications to support structural attributes for generating KGs. We propose that the CSSDM facilitates data harmonization and linking, offering an alternative approach to interoperability. This approach promotes a novel form of collaboration between companies developing health information systems and cloud-enabled health services. Consequently, it provides multiple stakeholders with access to high-quality data and information sharing. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 525,418 |
cs/0501090 | Stochastic Iterative Decoders | This paper presents a stochastic algorithm for iterative error control decoding. We show that the stochastic decoding algorithm is an approximation of the sum-product algorithm. When the code's factor graph is a tree, as with trellises, the algorithm approaches maximum a-posteriori decoding. We also demonstrate a stochastic approximation to the alternative update rule known as successive relaxation. Stochastic decoders have very simple digital implementations which have almost no RAM requirements. We present example stochastic decoders for a trellis-based Hamming code, and for a Block Turbo code constructed from Hamming codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 538,526 |
2405.14452 | JointRF: End-to-End Joint Optimization for Dynamic Neural Radiance Field
Representation and Compression | Neural Radiance Field (NeRF) excels at photo-realistically rendering static scenes, inspiring numerous efforts to facilitate volumetric videos. However, rendering dynamic and long-sequence radiance fields remains challenging due to the significant data required to represent volumetric videos. In this paper, we propose a novel end-to-end joint optimization scheme of dynamic NeRF representation and compression, called JointRF, thus achieving significantly improved quality and compression efficiency against the previous methods. Specifically, JointRF employs a compact residual feature grid and a coefficient feature grid to represent the dynamic NeRF. This representation handles large motions without compromising quality while concurrently diminishing temporal redundancy. We also introduce a sequential feature compression subnetwork to further reduce spatial-temporal redundancy. Finally, the representation and compression subnetworks are trained jointly end-to-end within JointRF. Extensive experiments demonstrate that JointRF can achieve superior compression performance across various datasets. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 456,435 |
2502.08071 | Collaborative Filtering Meets Spectrum Shift: Connecting User-Item
Interaction with Graph-Structured Side Information | Graph Neural Network (GNN) has demonstrated their superiority in collaborative filtering, where the user-item (U-I) interaction bipartite graph serves as the fundamental data format. However, when graph-structured side information (e.g., multimodal similarity graphs or social networks) is integrated into the U-I bipartite graph, existing graph collaborative filtering methods fall short of achieving satisfactory performance. We quantitatively analyze this problem from a spectral perspective. Recall that a bipartite graph possesses a full spectrum within the range of [-1, 1], with the highest frequency exactly achievable at -1 and the lowest frequency at 1; however, we observe as more side information is incorporated, the highest frequency of the augmented adjacency matrix progressively shifts rightward. This spectrum shift phenomenon has caused previous approaches built for the full spectrum [-1, 1] to assign mismatched importance to different frequencies. To this end, we propose Spectrum Shift Correction (dubbed SSC), incorporating shifting and scaling factors to enable spectral GNNs to adapt to the shifted spectrum. Unlike previous paradigms of leveraging side information, which necessitate tailored designs for diverse data types, SSC directly connects traditional graph collaborative filtering with any graph-structured side information. Experiments on social and multimodal recommendation demonstrate the effectiveness of SSC, achieving relative improvements of up to 23% without incurring any additional computational overhead. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 532,879 |
2210.04431 | Scientific Machine Learning for Modeling and Simulating Complex Fluids | The formulation of rheological constitutive equations -- models that relate internal stresses and deformations in complex fluids -- is a critical step in the engineering of systems involving soft materials. While data-driven models provide accessible alternatives to expensive first-principles models and less accurate empirical models in many engineering disciplines, the development of similar models for complex fluids has lagged. The diversity of techniques for characterizing non-Newtonian fluid dynamics creates a challenge for classical machine learning approaches, which require uniformly structured training data. Consequently, early machine learning constitutive equations have not been portable between different deformation protocols or mechanical observables. Here, we present a data-driven framework that resolves such issues, allowing rheologists to construct learnable models that incorporate essential physical information, while remaining agnostic to details regarding particular experimental protocols or flow kinematics. These scientific machine learning models incorporate a universal approximator within a materially objective tensorial constitutive framework. By construction, these models respect physical constraints, such as frame-invariance and tensor symmetry, required by continuum mechanics. We demonstrate that this framework facilitates the rapid discovery of accurate constitutive equations from limited data, and that the learned models may be used to describe more kinematically complex flows. This inherent flexibility admits the application of these 'digital fluid twins' to a range of material systems and engineering problems. We illustrate this flexibility by deploying a trained model within a multidimensional computational fluid dynamics simulation -- a task that is not achievable using any previously developed data-driven rheological equation of state. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 322,452 |
2407.18897 | Small Molecule Optimization with Large Language Models | Recent advancements in large language models have opened new possibilities for generative molecular drug design. We present Chemlactica and Chemma, two language models fine-tuned on a novel corpus of 110M molecules with computed properties, totaling 40B tokens. These models demonstrate strong performance in generating molecules with specified properties and predicting new molecular characteristics from limited samples. We introduce a novel optimization algorithm that leverages our language models to optimize molecules for arbitrary properties given limited access to a black box oracle. Our approach combines ideas from genetic algorithms, rejection sampling, and prompt optimization. It achieves state-of-the-art performance on multiple molecular optimization benchmarks, including an 8% improvement on Practical Molecular Optimization compared to previous methods. We publicly release the training corpus, the language models and the optimization algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 476,552 |
1911.05584 | Tensor Decomposition with Relational Constraints for Predicting Multiple
Types of MicroRNA-disease Associations | MicroRNAs (miRNAs) play crucial roles in multifarious biological processes associated with human diseases. Identifying potential miRNA-disease associations contributes to understanding the molecular mechanisms of miRNA-related diseases. Most of the existing computational methods mainly focus on predicting whether a miRNA-disease association exists or not. However, the roles of miRNAs in diseases diverge prominently: for instance, genetic variants of microRNA (mir-15) may affect the expression levels of miRNAs, leading to B cell chronic lymphocytic leukemia, while circulating miRNAs (including mir-1246, mir-1307-3p, etc.) have the potential to detect breast cancer at an early stage. In this paper, we aim to predict multi-type miRNA-disease associations instead of taking them as binary. To this end, we innovatively represent miRNA-disease-type triplets as a tensor and introduce Tensor Decomposition methods to solve the prediction task. Experimental results on two widely-adopted miRNA-disease datasets, HMDD v2.0 and HMDD v3.2, show that tensor decomposition methods improve a recent baseline by a large margin (up to 38% in top-1 F1). We further propose a novel method, Tensor Decomposition with Relational Constraints (TDRC), which incorporates biological features as relational constraints to further improve existing tensor decomposition methods. Compared with two existing tensor decomposition methods, TDRC can produce better performance while being more efficient. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,305 |
1001.2554 | A new proof of Delsarte, Goethals and Mac Williams theorem on minimal
weight codewords of generalized Reed-Muller code | We give a new proof of Delsarte, Goethals and Mac williams theorem on minimal weight codewords of generalized Reed-Muller codes published in 1970. To prove this theorem, we consider intersection of support of minimal weight codewords with affine hyperplanes and we proceed by recursion. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,397 |
2407.04215 | T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models | While text-to-image diffusion models demonstrate impressive generation capabilities, they also exhibit vulnerability to backdoor attacks, which involve the manipulation of model outputs through malicious triggers. In this paper, for the first time, we propose a comprehensive defense method named T2IShield to detect, localize, and mitigate such attacks. Specifically, we find the "Assimilation Phenomenon" on the cross-attention maps caused by the backdoor trigger. Based on this key insight, we propose two effective backdoor detection methods: Frobenius Norm Threshold Truncation and Covariance Discriminant Analysis. Besides, we introduce a binary-search approach to localize the trigger within a backdoor sample and assess the efficacy of existing concept editing methods in mitigating backdoor attacks. Empirical evaluations on two advanced backdoor attack scenarios show the effectiveness of our proposed defense method. For backdoor sample detection, T2IShield achieves a detection F1 score of 88.9$\%$ with low computational cost. Furthermore, T2IShield achieves a localization F1 score of 86.4$\%$ and invalidates 99$\%$ of poisoned samples. Code is released at https://github.com/Robin-WZQ/T2IShield. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,471 |
2010.10901 | On Information Asymmetry in Competitive Multi-Agent Reinforcement
Learning: Convergence and Optimality | In this work, we study a system of two interacting non-cooperative Q-learning agents, where one agent has the privilege of observing the other's actions. We show that this information asymmetry can lead to a stable outcome of population learning, which generally does not occur in an environment of general independent learners. The resulting post-learning policies are almost optimal in the underlying game sense, i.e., they form a Nash equilibrium. Furthermore, we propose in this work a Q-learning algorithm, requiring predictive observation of the opponent's two subsequent actions, yielding an optimal strategy given that the latter applies a stationary strategy, and discuss the existence of the Nash equilibrium in the underlying information asymmetrical game. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | true | false | false | true | 202,049 |
2002.03054 | Extrapolation Towards Imaginary $0$-Nearest Neighbour and Its Improved
Convergence Rate | $k$-nearest neighbour ($k$-NN) is one of the simplest and most widely-used methods for supervised classification, that predicts a query's label by taking weighted ratio of observed labels of $k$ objects nearest to the query. The weights and the parameter $k \in \mathbb{N}$ regulate its bias-variance trade-off, and the trade-off implicitly affects the convergence rate of the excess risk for the $k$-NN classifier; several existing studies considered selecting optimal $k$ and weights to obtain faster convergence rate. Whereas $k$-NN with non-negative weights has been developed widely, it was also proved that negative weights are essential for eradicating the bias terms and attaining optimal convergence rate. In this paper, we propose a novel multiscale $k$-NN (MS-$k$-NN), that extrapolates unweighted $k$-NN estimators from several $k \ge 1$ values to $k=0$, thus giving an imaginary 0-NN estimator. Our method implicitly computes optimal real-valued weights that are adaptive to the query and its neighbour points. We theoretically prove that the MS-$k$-NN attains the improved rate, which coincides with the existing optimal rate under some conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 163,118 |
2304.08466 | Synthetic Data from Diffusion Models Improves ImageNet Classification | Deep generative models are becoming increasingly powerful, now generating diverse high fidelity photo-realistic samples given text prompts. Have they reached the point where models of natural images can be used for generative data augmentation, helping to improve challenging discriminative tasks? We show that large-scale text-to image diffusion models can be fine-tuned to produce class conditional models with SOTA FID (1.76 at 256x256 resolution) and Inception Score (239 at 256x256). The model also yields a new SOTA in Classification Accuracy Scores (64.96 for 256x256 generative samples, improving to 69.24 for 1024x1024 samples). Augmenting the ImageNet training set with samples from the resulting models yields significant improvements in ImageNet classification accuracy over strong ResNet and Vision Transformer baselines. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 358,720 |
2301.02064 | Single-round Self-supervised Distributed Learning using Vision
Transformer | Despite the recent success of deep learning in the field of medicine, the issue of data scarcity is exacerbated by concerns about privacy and data ownership. Distributed learning approaches, including federated learning, have been investigated to address these issues. However, they are hindered by the need for cumbersome communication overheads and weaknesses in privacy protection. To tackle these challenges, we propose a self-supervised masked sampling distillation method for the vision transformer. This method can be implemented without continuous communication and can enhance privacy by utilizing a vision transformer-specific encryption technique. We conducted extensive experiments on two different tasks, which demonstrated the effectiveness of our method. We achieved superior performance compared to the existing distributed learning strategy as well as the fine-tuning only baseline. Furthermore, since the self-supervised model created using our proposed method can achieve a general semantic understanding of the image, we demonstrate its potential as a task-agnostic self-supervised foundation model for various downstream tasks, thereby expanding its applicability in the medical domain. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 339,403 |
1506.05143 | Interference-Nulling Time-Reversal Beamforming for mm-Wave Massive MIMO
in Multi-User Frequency-Selective Indoor Channels | Millimeter wave (mm-wave) and massive MIMO have been proposed for next generation wireless systems. However, there are many open problems for the implementation of those technologies. In particular, beamforming is necessary in mm-wave systems in order to counter high propagation losses. However, conventional beamsteering is not always appropriate in rich scattering multipath channels with frequency selective fading, such as those found in indoor environments. In this context, time-reversal (TR) is considered a promising beamforming technique for such mm-wave massive MIMO systems. In this paper, we analyze a baseband TR beamforming system for mm-wave multi-user massive MIMO. We verify that, as the number of antennas increases, TR yields good equalization and interference mitigation properties, but inter-user interference (IUI) remains a main impairment. Thus, we propose a novel technique called interference-nulling TR (INTR) to minimize IUI. We evaluate numerically the performance of INTR and compare it with conventional TR and equalized TR beamforming. We use a 60 GHz MIMO channel model with spatial correlation based on the IEEE 802.11ad SISO NLoS model. We demonstrate that INTR outperforms conventional TR with respect to average BER per user and achievable sum rate under diverse conditions, providing both diversity and multiplexing gains simultaneously. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 44,258 |
2001.11846 | Quaternion-Valued Recurrent Projection Neural Networks on Unit Quaternions | Hypercomplex-valued neural networks, including quaternion-valued neural networks, can treat multi-dimensional data as a single entity. In this paper, we present the quaternion-valued recurrent projection neural networks (QRPNNs). Briefly, QRPNNs are obtained by combining the non-local projection learning with the quaternion-valued recurrent correlation neural network (QRCNNs). We show that QRPNNs overcome the cross-talk problem of QRCNNs. Thus, they are appropriate to implement associative memories. Furthermore, computational experiments reveal that QRPNNs exhibit greater storage capacity and noise tolerance than their corresponding QRCNNs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 162,187
1203.4870 | Variational Bayesian algorithm for quantized compressed sensing | Compressed sensing (CS) concerns the recovery of high dimensional signals from their low dimensional linear measurements under a sparsity prior, and digital quantization of the measurement data is inevitable in practical implementation of CS algorithms. In the existing literature, the quantization error is modeled typically as additive noise and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this paper, a novel variational Bayesian inference based CS algorithm is presented, which unifies the multi- and 1-bit CS processing and is applicable to various cases of noiseless/noisy environment and unsaturated/saturated quantizer. By decoupling the quantization error from the measurement noise, the quantization error is modeled as a random variable and estimated jointly with the signal being recovered. Such a novel characterization of the quantization error results in superior performance of the algorithm which is demonstrated by extensive simulations in comparison with state-of-the-art methods for both multi-bit and 1-bit CS problems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,064
2110.14020 | The Difficulty of Passive Learning in Deep Reinforcement Learning | Learning to act from observational data without active environmental interaction is a well-known challenge in Reinforcement Learning (RL). Recent approaches involve constraints on the learned policy or conservative updates, preventing strong deviations from the state-action distribution of the dataset. Although these methods are evaluated using non-linear function approximation, theoretical justifications are mostly limited to the tabular or linear cases. Given the impressive results of deep reinforcement learning, we argue for a need to more clearly understand the challenges in this setting. In the vein of Held & Hein's classic 1963 experiment, we propose the "tandem learning" experimental paradigm which facilitates our empirical analysis of the difficulties in offline reinforcement learning. We identify function approximation in conjunction with fixed data distributions as the strongest factors, thereby extending but also challenging hypotheses stated in past work. Our results provide relevant insights for offline deep reinforcement learning, while also shedding new light on phenomena observed in the online case of learning control. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,382 |
2210.03112 | A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and limited diversity in the training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360 degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities. | false | false | false | false | false | false | true | true | true | false | false | true | false | false | false | false | false | false | 321,901
2106.03354 | AI without networks | Contemporary Artificial Intelligence (AI) stands on two legs: large training data corpora and many-parameter artificial neural networks (ANNs). The data corpora are needed to represent the complexity and heterogeneity of the world. The role of the networks is less transparent due to the obscure dependence of the network parameters and outputs on the training data and inputs. This raises problems, ranging from technical-scientific to legal-ethical. We hypothesize that a transparent approach to machine learning is possible without using networks at all. By generalizing a parameter-free, statistically consistent data interpolation method, which we analyze theoretically in detail, we develop a network-free framework for AI incorporating generative modeling. We demonstrate this framework with examples from three different disciplines - ethology, control theory, and mathematics. Our generative Hilbert framework applied to the trajectories of small groups of swimming fish outperformed state-of-the-art traditional mathematical behavioral models and current ANN-based models. We demonstrate pure data interpolation based control by stabilizing an inverted pendulum and a driven logistic map around unstable fixed points. Finally, we present a mathematical application by predicting zeros of the Riemann Zeta function, achieving comparable performance as a transformer network. We do not suggest that the proposed framework will always outperform networks as over-parameterized networks can interpolate. However, our framework is theoretically sound, transparent, deterministic, and parameter free: remarkably, it does not require any compute-expensive training, does not involve optimization, has no model selection, and is easily reproduced and ported. We also propose an easily computed method of credit assignment based on this framework, to help address ethical-legal challenges raised by generative AI. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,285
2310.08256 | Impact of Co-occurrence on Factual Knowledge of Large Language Models | Large language models (LLMs) often make factually incorrect responses despite their success in various applications. In this paper, we hypothesize that relying heavily on simple co-occurrence statistics of the pre-training corpora is one of the main factors that cause factual errors. Our results reveal that LLMs are vulnerable to the co-occurrence bias, defined as preferring frequently co-occurred words over the correct answer. Consequently, LLMs struggle to recall facts whose subject and object rarely co-occur in the pre-training dataset although they are seen during finetuning. We show that co-occurrence bias remains despite scaling up model sizes or finetuning. Therefore, we suggest finetuning on a debiased dataset to mitigate the bias by filtering out biased samples whose subject-object co-occurrence count is high. Although debiased finetuning allows LLMs to memorize rare facts in the training set, it is not effective in recalling rare facts unseen during finetuning. Further research in mitigation will help build reliable language models by preventing potential errors. The code is available at \url{https://github.com/CheongWoong/impact_of_cooccurrence}. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 399,325 |
2404.13764 | Using Adaptive Empathetic Responses for Teaching English | Existing English-teaching chatbots rarely incorporate empathy explicitly in their feedback, but empathetic feedback could help keep students engaged and reduce learner anxiety. Toward this end, we propose the task of negative emotion detection via audio, for recognizing empathetic feedback opportunities in language learning. We then build the first spoken English-teaching chatbot with adaptive, empathetic feedback. This feedback is synthesized through automatic prompt optimization of ChatGPT and is evaluated with English learners. We demonstrate the effectiveness of our system through a preliminary user study. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 448,431 |
2302.04917 | ChemVise: Maximizing Out-of-Distribution Chemical Detection with the Novel Application of Zero-Shot Learning | Accurate chemical sensors are vital in medical, military, and home safety applications. Training machine learning models to be accurate on real world chemical sensor data requires performing many diverse, costly experiments in controlled laboratory settings to create a data set. In practice even expensive, large data sets may be insufficient for generalization of a trained model to a real-world testing distribution. Rather than perform greater numbers of experiments requiring exhaustive mixtures of chemical analytes, this research proposes learning approximations of complex exposures from training sets of simple ones by using single-analyte exposure signals as building blocks of a multiple-analyte space. We demonstrate this approach to synthetic sensor responses surprisingly improves the detection of out-of-distribution obscured chemical analytes. Further, we pair these synthetic signals to targets in an information-dense representation space utilizing a large corpus of chemistry knowledge. Through utilization of semantically meaningful analyte representation spaces along with synthetic targets we achieve rapid analyte classification in the presence of obscurants without corresponding obscured-analyte training data. Transfer learning for supervised learning with molecular representations makes assumptions about the input data. Instead, we borrow from the natural language and natural image processing literature for a novel approach to chemical sensor signal classification using molecular semantics for arbitrary chemical sensor hardware designs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 344,864
2502.12446 | Multi-Attribute Steering of Language Models via Targeted Intervention | Inference-time intervention (ITI) has emerged as a promising method for steering large language model (LLM) behavior in a particular direction (e.g., improving helpfulness) by intervening on token representations without costly updates to the LLM's parameters. However, existing ITI approaches fail to scale to multi-attribute settings with conflicts, such as enhancing helpfulness while also reducing toxicity. To address this, we introduce Multi-Attribute Targeted Steering (MAT-Steer), a novel steering framework designed for selective token-level intervention across multiple attributes. MAT-Steer learns steering vectors using an alignment objective that shifts the model's internal representations of undesirable outputs closer to those of desirable ones while enforcing sparsity and orthogonality among vectors for different attributes, thereby reducing inter-attribute conflicts. We evaluate MAT-Steer in two distinct settings: (i) on question answering (QA) tasks where we balance attributes like truthfulness, bias, and toxicity; (ii) on generative tasks where we simultaneously improve attributes like helpfulness, correctness, and coherence. MAT-Steer outperforms existing ITI and parameter-efficient finetuning approaches across both task types (e.g., 3% average accuracy gain across QA tasks and 55.82% win rate against the best ITI baseline). | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 534,867 |
2405.10793 | CCTNet: A Circular Convolutional Transformer Network for LiDAR-based Place Recognition Handling Movable Objects Occlusion | Place recognition is a fundamental task for robotic applications, allowing robots to perform loop closure detection within simultaneous localization and mapping (SLAM), and achieve relocalization on prior maps. Current range image-based networks use single-column convolution to maintain feature invariance to shifts in image columns caused by LiDAR viewpoint change. However, this raises issues such as "restricted receptive fields" and "excessive focus on local regions", degrading the performance of networks. To address the aforementioned issues, we propose a lightweight circular convolutional Transformer network denoted as CCTNet, which boosts performance by capturing structural information in point clouds and facilitating cross-dimensional interaction of spatial and channel information. Initially, a Circular Convolution Module (CCM) is introduced, expanding the network's perceptual field while maintaining feature consistency across varying LiDAR perspectives. Then, a Range Transformer Module (RTM) is proposed, which enhances place recognition accuracy in scenarios with movable objects by employing a combination of channel and spatial attention mechanisms. Furthermore, we propose an Overlap-based loss function, transforming the place recognition task from a binary loop closure classification into a regression problem linked to the overlap between LiDAR frames. Through extensive experiments on the KITTI and Ford Campus datasets, CCTNet surpasses comparable methods, achieving Recall@1 of 0.924 and 0.965, and Recall@1% of 0.990 and 0.993 on the test set, showcasing a superior performance. Results on the self-collected dataset further demonstrate the proposed method's potential for practical implementation in complex scenarios to handle movable objects, showing improved generalization in various datasets. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 454,887
2211.05533 | GREENER: Graph Neural Networks for News Media Profiling | We study the problem of profiling news media on the Web with respect to their factuality of reporting and bias. This is an important but under-studied problem related to disinformation and "fake news" detection, but it addresses the issue at a coarser granularity compared to looking at an individual article or an individual claim. This is useful as it allows to profile entire media outlets in advance. Unlike previous work, which has focused primarily on text (e.g.,~on the text of the articles published by the target website, or on the textual description in their social media profiles or in Wikipedia), here our main focus is on modeling the similarity between media outlets based on the overlap of their audience. This is motivated by homophily considerations, i.e.,~the tendency of people to have connections to people with similar interests, which we extend to media, hypothesizing that similar types of media would be read by similar kinds of users. In particular, we propose GREENER (GRaph nEural nEtwork for News mEdia pRofiling), a model that builds a graph of inter-media connections based on their audience overlap, and then uses graph neural networks to represent each medium. We find that such representations are quite useful for predicting the factuality and the bias of news media outlets, yielding improvements over state-of-the-art results reported on two datasets. When augmented with conventionally used representations obtained from news articles, Twitter, YouTube, Facebook, and Wikipedia, prediction accuracy is found to improve by 2.5-27 macro-F1 points for the two tasks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 329,586 |
1809.01906 | Model-Based Regularization for Deep Reinforcement Learning with Transcoder Networks | This paper proposes a new optimization objective for value-based deep reinforcement learning. We extend conventional Deep Q-Networks (DQNs) by adding a model-learning component yielding a transcoder network. The prediction errors for the model are included in the basic DQN loss as additional regularizers. This augmented objective leads to a richer training signal that provides feedback at every time step. Moreover, because learning an environment model shares a common structure with the RL problem, we hypothesize that the resulting objective improves both sample efficiency and performance. We empirically confirm our hypothesis on a range of 20 games from the Atari benchmark attaining superior results over vanilla DQN without model-based regularization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 106,918
1612.08220 | Understanding Neural Networks through Representation Erasure | While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 66,046 |
1707.09108 | Ensemble Performance of Biometric Authentication Systems Based on Secret Key Generation | We study the ensemble performance of biometric authentication systems, based on secret key generation, which work as follows. In the enrollment stage, an individual provides a biometric signal that is mapped into a secret key and a helper message, the former being prepared to become available to the system at a later time (for authentication), and the latter is stored in a public database. When an authorized user requests authentication, claiming his/her identity as one of the subscribers, s/he has to provide a biometric signal again, and then the system, which retrieves also the helper message of the claimed subscriber, produces an estimate of the secret key, that is finally compared to the secret key of the claimed user. In case of a match, the authentication request is approved, otherwise, it is rejected. Referring to an ensemble of systems based on Slepian-Wolf binning, we provide a detailed analysis of the false-reject and false-accept probabilities, for a wide class of stochastic decoders. We also comment on the security for the typical code in the ensemble. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,952
1805.04152 | Training Recurrent Neural Networks via Dynamical Trajectory-Based Optimization | This paper introduces a new method to train recurrent neural networks using dynamical trajectory-based optimization. The optimization method utilizes a projected gradient system (PGS) and a quotient gradient system (QGS) to determine the feasible regions of an optimization problem and search the feasible regions for local minima. By exploring the feasible regions, local minima are identified and the local minimum with the lowest cost is chosen as the global minimum of the optimization problem. Lyapunov theory is used to prove the stability of the local minima and their stability in the presence of measurement errors. Numerical examples show that the new approach provides better results than genetic algorithm and error backpropagation (EBP) trained networks. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 97,180
1501.04867 | Conditional Information Inequalities and Combinatorial Applications | We show that the inequality $H(A \mid B,X) + H(A \mid B,Y) \le H(A\mid B)$ for jointly distributed random variables $A,B,X,Y$, which does not hold in general case, holds under some natural condition on the support of the probability distribution of $A,B,X,Y$. This result generalizes a version of the conditional Ingleton inequality: if for some distribution $I(X: Y \mid A) = H(A\mid X,Y)=0$, then $I(A : B) \le I(A : B \mid X) + I(A: B \mid Y) + I(X : Y)$. We present two applications of our result. The first one is the following easy-to-formulate combinatorial theorem: assume that the edges of a bipartite graph are partitioned into $K$ matchings such that for each pair (left vertex $x$, right vertex $y$) there is at most one matching in the partition involving both $x$ and $y$; assume further that the degree of each left vertex is at least $L$ and the degree of each right vertex is at least $R$. Then $K\ge LR$. The second application is a new method to prove lower bounds for biclique coverings of bipartite graphs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 39,427 |
2311.17958 | CommunityAI: Towards Community-based Federated Learning | Federated Learning (FL) has emerged as a promising paradigm to train machine learning models collaboratively while preserving data privacy. However, its widespread adoption faces several challenges, including scalability, heterogeneous data and devices, resource constraints, and security concerns. Despite its promise, FL has not been specifically adapted for community domains, primarily due to the wide-ranging differences in data types and context, devices and operational conditions, environmental factors, and stakeholders. In response to these challenges, we present a novel framework for Community-based Federated Learning called CommunityAI. CommunityAI enables participants to be organized into communities based on their shared interests, expertise, or data characteristics. Community participants collectively contribute to training and refining learning models while maintaining data and participant privacy within their respective groups. Within this paper, we discuss the conceptual architecture, system requirements, processes, and future challenges that must be solved. Finally, our goal within this paper is to present our vision regarding enabling a collaborative learning process within various communities. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 411,496 |
1709.04427 | Contrast Enhancement of Brightness-Distorted Images by Improved Adaptive Gamma Correction | As an efficient image contrast enhancement (CE) tool, adaptive gamma correction (AGC) was previously proposed by relating the gamma parameter with the cumulative distribution function (CDF) of the pixel gray levels within an image. AGC deals well with most dimmed images, but fails for globally bright images and the dimmed images with local bright regions. Such two categories of brightness-distorted images are universal in real scenarios, such as improper exposure and white object regions. In order to attenuate such deficiencies, here we propose an improved AGC algorithm. The novel strategy of negative images is used to realize CE of the bright images, and the gamma correction modulated by truncated CDF is employed to enhance the dimmed ones. As such, local over-enhancement and structure distortion can be alleviated. Both qualitative and quantitative experimental results show that our proposed method yields consistently good CE results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 80,664
2311.02625 | New structure of channel coding: serial concatenation of Polar codes | In this paper, we introduce a new coding and decoding structure for enhancing the reliability and performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar codes in series to create robust error-correcting codes. The primary objective here is to optimize the behavior of individual elementary codes within polar codes. In this structure, we incorporate interleaving, a technique that rearranges bits to maximize the separation between originally neighboring symbols. This rearrangement is instrumental in converting error clusters into distributed errors across the entire sequence. To evaluate their performance, we propose to model a communication system with seven components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of codes for different block lengths and code rates. Next, we compare the BER performance between our proposed method and polar codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 405,519
2012.12978 | Eurythmic Dancing with Plants -- Measuring Plant Response to Human Body Movement in an Anthroposophic Environment | This paper describes three experiments measuring interaction of humans with garden plants. In particular, body movement of a human conducting eurythmic dances near the plants (beetroots, tomatoes, lettuce) is correlated with the action potential measured by a plant SpikerBox, a device measuring the electrical activity of plants, and the leaf movement of the plant, tracked with a camera. The first experiment shows that our measurement system captures external stimuli identically for different plants, validating the measurement system. The second experiment illustrates that the plants' response is correlated to the movements of the dancer. The third experiment indicates that plants that have been exposed for multiple weeks to eurythmic dancing might respond differently to plants which are exposed for the first time to eurythmic dancing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 213,065
2307.03783 | Neural Abstraction-Based Controller Synthesis and Deployment | Abstraction-based techniques are an attractive approach for synthesizing correct-by-construction controllers to satisfy high-level temporal requirements. A main bottleneck for successful application of these techniques is the memory requirement, both during controller synthesis and in controller deployment. We propose memory-efficient methods for mitigating the high memory demands of the abstraction-based techniques using neural network representations. To perform synthesis for reach-avoid specifications, we propose an on-the-fly algorithm that relies on compressed neural network representations of the forward and backward dynamics of the system. In contrast to usual applications of neural representations, our technique maintains soundness of the end-to-end process. To ensure this, we correct the output of the trained neural network such that the corrected output representations are sound with respect to the finite abstraction. For deployment, we provide a novel training algorithm to find a neural network representation of the synthesized controller and experimentally show that the controller can be correctly represented as a combination of a neural network and a look-up table that requires a substantially smaller memory. We demonstrate experimentally that our approach significantly reduces the memory requirements of abstraction-based methods. For the selected benchmarks, our approach reduces the memory requirements respectively for the synthesis and deployment by a factor of $1.31\times 10^5$ and $7.13\times 10^3$ on average, and up to $7.54\times 10^5$ and $3.18\times 10^4$. Although this reduction is at the cost of increased off-line computations to train the neural networks, all the steps of our approach are parallelizable and can be implemented on machines with higher number of processing units to reduce the required computational time. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 378,147
2311.10387 | Stable Attractors for Neural networks classification via Ordinary Differential Equations (SA-nODE) | A novel approach for supervised classification is presented which sits at the intersection of machine learning and dynamical systems theory. At variance with other methodologies that employ ordinary differential equations for classification purposes, the untrained model is a priori constructed to accommodate for a set of pre-assigned stationary stable attractors. Classifying amounts to steer the dynamics towards one of the planted attractors, depending on the specificity of the processed item supplied as an input. Asymptotically the system will hence converge on a specific point of the explored multi-dimensional space, flagging the category of the object to be eventually classified. Working in this context, the inherent ability to perform classification, as acquired ex post by the trained model, is ultimately reflected in the shaped basin of attractions associated to each of the target stable attractors. The performance of the proposed method is here challenged against simple toy models crafted for the purpose, as well as by resorting to well established reference standards. Although this method does not reach the performance of state-of-the-art deep learning algorithms, it illustrates that continuous dynamical systems with closed analytical interaction terms can serve as high-performance classifiers. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 408,519
2212.05129 | Measuring Data | We identify the task of measuring data to quantitatively characterize the composition of machine learning data and datasets. Similar to an object's height, width, and volume, data measurements quantify different attributes of data along common dimensions that support comparison. Several lines of research have proposed what we refer to as measurements, with differing terminology; we bring some of this work together, particularly in fields of computer vision and language, and build from it to motivate measuring data as a critical component of responsible AI development. Measuring data aids in systematically building and analyzing machine learning (ML) data towards specific goals and gaining better control of what modern ML systems will learn. We conclude with a discussion of the many avenues of future work, the limitations of data measurements, and how to leverage these measurement approaches in research and practice. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 335,681 |
2010.12412 | SmBoP: Semi-autoregressive Bottom-up Semantic Parsing | The de-facto standard decoding method for semantic parsing in recent years has been to autoregressively decode the abstract syntax tree of the target program using a top-down depth-first traversal. In this work, we propose an alternative approach: a Semi-autoregressive Bottom-up Parser (SmBoP) that constructs at decoding step $t$ the top-$K$ sub-trees of height $\leq t$. Our parser enjoys several benefits compared to top-down autoregressive parsing. From an efficiency perspective, bottom-up parsing allows to decode all sub-trees of a certain height in parallel, leading to logarithmic runtime complexity rather than linear. From a modeling perspective, a bottom-up parser learns representations for meaningful semantic sub-programs at each step, rather than for semantically-vacuous partial trees. We apply SmBoP on Spider, a challenging zero-shot semantic parsing benchmark, and show that SmBoP leads to a 2.2x speed-up in decoding time and a $\sim$5x speed-up in training time, compared to a semantic parser that uses autoregressive decoding. SmBoP obtains 71.1 denotation accuracy on Spider, establishing a new state-of-the-art, and 69.5 exact match, comparable to the 69.6 exact match of the autoregressive RAT-SQL+GraPPa. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 202,684 |
1911.07548 | Optimal Control over Multiple Input Lossy Channels | The performance of control systems with input packet losses on the controller to plant communication channel is analysed. The main contribution of this work is a proof that linear optimal control systems operating with UDP-like communication protocols have a larger quadratic cost than the same systems operating with TCP-like protocols. The proof is derived for the general case of multidimensional and independent actuation communication channels. In doing so, our results extend previous work to systems with multiple distributed actuators. The difference in cost between two communication protocols is analysed, enabling the maximal difference between the two protocols to be quantified. Numerical examples are presented to highlight the difference in costs induced by the choice of communication protocol. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 153,893 |
1310.6119 | Asynchronous Rumour Spreading in Social and Signed Topologies | In this paper, we present an experimental analysis of the asynchronous push & pull rumour spreading protocol. This protocol is, to date, the best-performing rumour spreading protocol for simple, scalable, and robust information dissemination in distributed systems. We analyse the effect that multiple parameters have on the protocol's performance, such as using memory to avoid contacting the same neighbor twice in a row, varying the stopping criteria used by nodes to decide when to stop spreading the rumour, employing more sophisticated neighbor selection policies instead of the standard uniform random choice, and others. Prior work has focused on either providing theoretical upper bounds regarding the number of rounds needed to spread the rumour to all nodes, or, proposes improvements by adjusting isolated parameters. To our knowledge, our work is the first to study how multiple parameters affect system behaviour both in isolation and combination and under a wide range of values. Our analysis is based on experimental simulations using real-world social network datasets, thus complementing prior theoretical work to shed light on how the protocol behaves in practical, real-world systems. We also study the behaviour of the protocol on a special type of social graph, called signed networks (e.g., Slashdot and Epinions), whose links indicate stronger trust relationships. Finally, through our detailed analysis, we demonstrate how a few simple additions to the protocol can improve the total time required to inform 100% of the nodes by a maximum of 99.69% and an average of 82.37%. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 27,949 |
2303.01526 | Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition | From video, we reconstruct a neural volume that captures time-varying color, density, scene flow, semantics, and attention information. The semantics and attention let us identify salient foreground objects separately from the background across spacetime. To mitigate low resolution semantic and attention features, we compute pyramids that trade detail with whole-image context. After optimization, we perform a saliency-aware clustering to decompose the scene. To evaluate real-world scenes, we annotate object masks in the NVIDIA Dynamic Scene and DyCheck datasets. We demonstrate that this method can decompose dynamic scenes in an unsupervised way with competitive performance to a supervised method, and that it improves foreground/background segmentation over recent static/dynamic split methods. Project Webpage: https://visual.cs.brown.edu/saff | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 349,005 |
2406.08887 | Low-Overhead Channel Estimation via 3D Extrapolation for TDD mmWave Massive MIMO Systems Under High-Mobility Scenarios | In time division duplexing (TDD) millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) can be obtained from uplink channel estimation thanks to channel reciprocity. However, under high-mobility scenarios, frequent uplink channel estimation is needed due to channel aging. Additionally, large amounts of antennas and subcarriers result in high-dimensional CSI matrices, aggravating pilot training overhead. To address this, we propose a three-domain (3D) channel extrapolation framework across spatial, frequency, and temporal domains. First, considering the effectiveness of traditional knowledge-driven channel estimation methods and the marginal effects of pilots in the spatial and frequency domains, a knowledge-and-data driven spatial-frequency channel extrapolation network (KDD-SFCEN) is proposed for uplink channel estimation via joint spatial-frequency channel extrapolation to reduce spatial-frequency domain pilot overhead. Then, leveraging channel reciprocity and temporal dependencies, we propose a temporal uplink-downlink channel extrapolation network (TUDCEN) powered by generative artificial intelligence for slot-level channel extrapolation, aiming to reduce the tremendous temporal domain pilot overhead caused by high mobility. Numerical results demonstrate the superiority of the proposed framework in significantly reducing the pilot training overhead by 16 times and improving the system's spectral efficiency under high-mobility scenarios compared with state-of-the-art channel estimation/extrapolation methods. | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | 463,675
2205.01930 | Explainable Anomaly Detection for Industrial Control System Cybersecurity | Industrial Control Systems (ICSs) are becoming more and more important in managing the operation of many important systems in smart manufacturing, such as power stations, water supply systems, and manufacturing sites. While massive digital data can be a driving force for system performance, data security has raised serious concerns. Anomaly detection, therefore, is essential for preventing network security intrusions and system attacks. Many AI-based anomaly detection methods have been proposed and achieved high detection performance, however, are still a "black box" that is hard to be interpreted. In this study, we suggest using Explainable Artificial Intelligence to enhance the perspective and reliable results of an LSTM-based Autoencoder-OCSVM learning model for anomaly detection in ICS. We demonstrate the performance of our proposed method based on a well-known SCADA dataset. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 294,772
1903.00317 | TamperNN: Efficient Tampering Detection of Deployed Neural Nets | Neural networks are powering the deployment of embedded devices and Internet of Things. Applications range from personal assistants to critical ones such as self-driving cars. It has been shown recently that models obtained from neural nets can be trojaned ; an attacker can then trigger an arbitrary model behavior facing crafted inputs. This has a critical impact on the security and reliability of those deployed devices. We introduce novel algorithms to detect the tampering with deployed models, classifiers in particular. In the remote interaction setup we consider, the proposed strategy is to identify markers of the model input space that are likely to change class if the model is attacked, allowing a user to detect a possible tampering. This setup makes our proposal compatible with a wide range of scenarios, such as embedded models, or models exposed through prediction APIs. We experiment those tampering detection algorithms on the canonical MNIST dataset, over three different types of neural nets, and facing five different attacks (trojaning, quantization, fine-tuning, compression and watermarking). We then validate over five large models (VGG16, VGG19, ResNet, MobileNet, DenseNet) with a state of the art dataset (VGGFace2), and report results demonstrating the possibility of an efficient detection of model tampering. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 122,994 |
2311.10472 | End-to-end autoencoding architecture for the simultaneous generation of medical images and corresponding segmentation masks | Despite the increasing use of deep learning in medical image segmentation, acquiring sufficient training data remains a challenge in the medical field. In response, data augmentation techniques have been proposed; however, the generation of diverse and realistic medical images and their corresponding masks remains a difficult task, especially when working with insufficient training sets. To address these limitations, we present an end-to-end architecture based on the Hamiltonian Variational Autoencoder (HVAE). This approach yields an improved posterior distribution approximation compared to traditional Variational Autoencoders (VAE), resulting in higher image generation quality. Our method outperforms generative adversarial architectures under data-scarce conditions, showcasing enhancements in image quality and precise tumor mask synthesis. We conduct experiments on two publicly available datasets, MICCAI's Brain Tumor Segmentation Challenge (BRATS), and Head and Neck Tumor Segmentation Challenge (HECKTOR), demonstrating the effectiveness of our method on different medical imaging modalities. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 408,540
2210.00939 | Improving Sample Quality of Diffusion Models Using Self-Attention Guidance | Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 321,064
cs/0611094 | Reducing Order Enforcement Cost in Complex Query Plans | Algorithms that exploit sort orders are widely used to implement joins, grouping, duplicate elimination and other set operations. Query optimizers traditionally deal with sort orders by using the notion of interesting orders. The number of interesting orders is unfortunately factorial in the number of participating attributes. Optimizer implementations use heuristics to prune the number of interesting orders, but the quality of the heuristics is unclear. Increasingly complex decision support queries and increasing use of covering indices, which provide multiple alternative sort orders for relations, motivate us to better address the problem of optimization with interesting orders. We show that even a simplified version of optimization with sort orders is NP-hard and provide principled heuristics for choosing interesting orders. We have implemented the proposed techniques in a Volcano-style cost-based optimizer, and our performance study shows significant improvements in estimated cost. We also executed our plans on a widely used commercial database system, and on PostgreSQL, and found that actual execution times for our plans were significantly better than for plans generated by those systems in several cases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 539,890 |
2409.05367 | Diagnostic Reasoning in Natural Language: Computational Model and Application | Diagnostic reasoning is a key component of expert work in many domains. It is a hard, time-consuming activity that requires expertise, and AI research has investigated the ways automated systems can support this process. Yet, due to the complexity of natural language, the applications of AI for diagnostic reasoning to language-related tasks are lacking. To close this gap, we investigate diagnostic abductive reasoning (DAR) in the context of language-grounded tasks (NL-DAR). We propose a novel modeling framework for NL-DAR based on Pearl's structural causal models and instantiate it in a comprehensive study of scientific paper assessment in the biomedical domain. We use the resulting dataset to investigate the human decision-making process in NL-DAR and determine the potential of LLMs to support structured decision-making over text. Our framework, open resources and tools lay the groundwork for the empirical study of collaborative diagnostic reasoning in the age of LLMs, in the scholarly domain and beyond. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 486,744
1106.4577 | Interactive Execution Monitoring of Agent Teams | There is an increasing need for automated support for humans monitoring the activity of distributed teams of cooperating agents, both human and machine. We characterize the domain-independent challenges posed by this problem, and describe how properties of domains influence the challenges and their solutions. We will concentrate on dynamic, data-rich domains where humans are ultimately responsible for team behavior. Thus, the automated aid should interactively support effective and timely decision making by the human. We present a domain-independent categorization of the types of alerts a plan-based monitoring system might issue to a user, where each type generally requires different monitoring techniques. We describe a monitoring framework for integrating many domain-specific and task-specific monitoring techniques and then using the concept of value of an alert to avoid operator overload. We use this framework to describe an execution monitoring approach we have used to implement Execution Assistants (EAs) in two different dynamic, data-rich, real-world domains to assist a human in monitoring team behavior. One domain (Army small unit operations) has hundreds of mobile, geographically distributed agents, a combination of humans, robots, and vehicles. The other domain (teams of unmanned ground and air vehicles) has a handful of cooperating robots. Both domains involve unpredictable adversaries in the vicinity. Our approach customizes monitoring behavior for each specific task, plan, and situation, as well as for user preferences. Our EAs alert the human controller when reported events threaten plan execution or physically threaten team members. Alerts were generated in a timely manner without inundating the user with too many alerts (less than 10 percent of alerts are unwanted, as judged by domain experts). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 10,960
2211.15502 | ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D Panoramic Image | In orthodontic treatment, a full tooth model consisting of both the crown and root is indispensable in making the treatment plan. However, acquiring tooth root information to obtain the full tooth model from CBCT images is sometimes restricted due to the massive radiation of CBCT scanning. Thus, reconstructing the full tooth shape from the ready-to-use input, e.g., the partial intra-oral scan and the 2D panoramic image, is an applicable and valuable solution. In this paper, we propose a neural network, called ToothInpaintor, that takes as input a partial 3D dental model and a 2D panoramic image and reconstructs the full tooth model with high-quality root(s). Technically, we utilize the implicit representation for both the 3D and 2D inputs, and learn a latent space of the full tooth shapes. At test time, given an input, we successfully project it to the learned latent space via neural optimization to obtain the full tooth model conditioned on the input. To help find the robust projection, a novel adversarial learning module is exploited in our pipeline. We extensively evaluate our method on a dataset collected from real-world clinics. The evaluation, comparison, and comprehensive ablation studies demonstrate that our approach produces accurate complete tooth models robustly and outperforms the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 333,296
2204.08624 | Topology and geometry of data manifold in deep learning | Despite significant advances in the field of deep learning in applications to various fields, explaining the inner processes of deep learning models remains an important and open question. The purpose of this article is to describe and substantiate the geometric and topological view of the learning process of neural networks. Our attention is focused on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of the data manifold on different layers. We also propose a method for assessing the generalizing ability of neural networks based on topological descriptors. In this paper, we use the concepts of topological data analysis and intrinsic dimension, and we present a wide range of experiments on different datasets and different configurations of convolutional neural network architectures. In addition, we consider the issue of the geometry of adversarial attacks in the classification task and spoofing attacks on face recognition systems. Our work is a contribution to the development of an important area of explainable and interpretable AI through the example of computer vision. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 292,155 |
1812.11842 | Do GANs leave artificial fingerprints? | In the last few years, generative adversarial networks (GAN) have shown tremendous potential for a number of applications in computer vision and related fields. With the current pace of progress, it is a sure bet they will soon be able to generate high-quality images and videos, virtually indistinguishable from real ones. Unfortunately, realistic GAN-generated images pose serious threats to security, to begin with a possible flood of fake multimedia, and multimedia forensic countermeasures are in urgent need. In this work, we show that each GAN leaves its specific fingerprint in the images it generates, just like real-world cameras mark acquired images with traces of their photo-response non-uniformity pattern. Source identification experiments with several popular GANs show such fingerprints to represent a precious asset for forensic analyses. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 117,639 |
2006.09732 | High order low-bit Sigma-Delta quantization for fusion frames | We construct high order low-bit Sigma-Delta $(\Sigma \Delta)$ quantizers for the vector-valued setting of fusion frames. We prove that these $\Sigma \Delta$ quantizers can be stably implemented to quantize fusion frame measurements on subspaces $W_n$ using $\log_2( {\rm dim}(W_n)+1)$ bits per measurement. Signal reconstruction is performed using a version of Sobolev duals for fusion frames, and numerical experiments are given to validate the overall performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 182,639 |
2111.00409 | Kernel-based Impulse Response Identification with Side-Information on Steady-State Gain | In this paper, we consider the problem of system identification when side-information is available on the steady-state (or DC) gain of the system. We formulate a general nonparametric identification method as an infinite-dimensional constrained convex program over the reproducing kernel Hilbert space (RKHS) of stable impulse responses. The objective function of this optimization problem is the empirical loss regularized with the norm of RKHS, and the constraint is considered for enforcing the integration of the steady-state gain side-information. The proposed formulation addresses both the discrete-time and continuous-time cases. We show that this program has a unique solution obtained by solving an equivalent finite-dimensional convex optimization. This solution has a closed-form when the empirical loss and regularization functions are quadratic and exact side-information is considered. We perform extensive numerical comparisons to verify the efficiency of the proposed identification methodology. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 264,208
2405.15151 | NeB-SLAM: Neural Blocks-based Salable RGB-D SLAM for Unknown Scenes | Neural implicit representations have recently demonstrated considerable potential in the field of visual simultaneous localization and mapping (SLAM). This is due to their inherent advantages, including low storage overhead and representation continuity. However, these methods necessitate the size of the scene as input, which is impractical for unknown scenes. Consequently, we propose NeB-SLAM, a neural block-based scalable RGB-D SLAM for unknown scenes. Specifically, we first propose a divide-and-conquer mapping strategy that represents the entire unknown scene as a set of sub-maps. These sub-maps are a set of neural blocks of fixed size. Then, we introduce an adaptive map growth strategy to achieve adaptive allocation of neural blocks during camera tracking and gradually cover the whole unknown scene. Finally, extensive evaluations on various datasets demonstrate that our method is competitive in both mapping and tracking when targeting unknown environments. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | true | 456,770 |
2002.09841 | SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback | The recent development of online recommender systems has a focus on collaborative ranking from implicit feedback, such as user clicks and purchases. Different from explicit ratings, which reflect graded user preferences, the implicit feedback only generates positive and unobserved labels. While considerable efforts have been made in this direction, the well-known pairwise and listwise approaches have still been limited by various challenges. Specifically, for the pairwise approaches, the assumption of independent pairwise preference is not always held in practice. Also, the listwise approaches cannot efficiently accommodate "ties" due to the precondition of the entire list permutation. To this end, in this paper, we propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to inherently accommodate the characteristics of implicit feedback in recommender system. Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons and can be implemented with matrix factorization and neural networks. Meanwhile, we also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$, where $M$ and $N$ are the numbers of items and users, respectively. Finally, extensive experiments on four real-world datasets clearly validate the superiority of SetRank compared with various state-of-the-art baselines. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 165,201
2411.12043 | A comparative analysis for different finite element types in strain-gradient elasticity simulations performed on Firedrake and FEniCS | The layer-upon-layer approach in additive manufacturing, open or closed cells in polymeric or metallic foams involve an intrinsic microstructure tailored to the underlying applications. Homogenization of such architectured materials creates metamaterials modeled by higher-gradient models, specifically when the microstructure's characteristic length is comparable to the length scale of the structure. In this study, we conduct a comparative analysis of various finite elements methods for solving problems in strain-gradient elasticity. We employ open-source packages from Firedrake and FEniCS. Different finite element formulations are tested: we implement Lagrange, Argyris, Hermite elements, a Hu--Washizu type (mixed) formulation, as well as isogeometric analysis with Non-Uniform Rational B-Splines (NURBS). For the numerical study, we investigate one- and two-dimensional problems discussed in the literature of strain-gradient modeling. All developed codes are open-access to encourage research in Finite Element Method (FEM) based computation of generalized continua. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 509,272
1812.00498 | Permutations Unlabeled beyond Sampling Unknown | A recent unlabeled sampling result by Unnikrishnan, Haghighatshoar and Vetterli states that with probability one over iid Gaussian matrices $A$, any $x$ can be uniquely recovered from an unknown permutation of $y = A x$ as soon as $A$ has at least twice as many rows as columns. We show that this condition on $A$ implies something much stronger: that an unknown vector $x$ can be recovered from measurements $y = T A x$, when the unknown $T$ belongs to an arbitrary set of invertible, diagonalizable linear transformations $\mathcal{T}$. The set $\mathcal{T}$ can be finite or countably infinite. When it is the set of $m \times m$ permutation matrices, we have the classical unlabeled sampling problem. We show that for almost all $A$ with at least twice as many rows as columns, all $x$ can be recovered either uniquely, or up to a scale depending on $\mathcal{T}$, and that the condition on the size of $A$ is necessary. Our proof is based on vector space geometry. Specializing to permutations we obtain a simplified proof of the uniqueness result of Unnikrishnan, Haghighatshoar and Vetterli. In this letter we are only concerned with uniqueness; stability and algorithms are left for future work. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 115,278 |
2211.07955 | IntegratedPIFu: Integrated Pixel Aligned Implicit Function for Single-view Human Reconstruction | We propose IntegratedPIFu, a new pixel aligned implicit model that builds on the foundation set by PIFuHD. IntegratedPIFu shows how depth and human parsing information can be predicted and capitalised upon in a pixel-aligned implicit model. In addition, IntegratedPIFu introduces depth oriented sampling, a novel training scheme that improve any pixel aligned implicit model ability to reconstruct important human features without noisy artefacts. Lastly, IntegratedPIFu presents a new architecture that, despite using less model parameters than PIFuHD, is able to improves the structural correctness of reconstructed meshes. Our results show that IntegratedPIFu significantly outperforms existing state of the arts methods on single view human reconstruction. Our code has been made available online. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 330,429
2404.06283 | LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements | The task of reading comprehension (RC), often implemented as context-based question answering (QA), provides a primary means to assess language models' natural language understanding (NLU) capabilities. Yet, when applied to large language models (LLMs) with extensive built-in world knowledge, this method can be deceptive. If the context aligns with the LLMs' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from LLMs' internal information. Conversely, using data that conflicts with the models' knowledge creates erroneous trends which distort the results. To address this issue, we suggest to use RC on imaginary data, based on fictitious facts and entities. This task is entirely independent of the models' world knowledge, enabling us to evaluate LLMs' linguistic abilities without the interference of parametric knowledge. Testing ChatGPT, GPT-4, LLaMA 2 and Mixtral on such imaginary data, we uncover a class of linguistic phenomena posing a challenge to current LLMs, involving thinking in terms of alternative, hypothetical scenarios. While all the models handle simple affirmative and negative contexts with high accuracy, they are much more prone to error when dealing with modal and conditional contexts. Crucially, these phenomena also trigger the LLMs' vulnerability to knowledge-conflicts again. In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, when faced with more semantically involved modal and conditional environments, they often fail to separate the text from their internal knowledge. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 445,405
1602.01541 | Fundamental Limits in Multi-image Alignment | The performance of multi-image alignment, bringing different images into one coordinate system, is critical in many applications with varied signal-to-noise ratio (SNR) conditions. A great amount of effort is being invested into developing methods to solve this problem. Several important questions thus arise, including: Which are the fundamental limits in multi-image alignment performance? Does having access to more images improve the alignment? Theoretical bounds provide a fundamental benchmark to compare methods and can help establish whether improvements can be made. In this work, we tackle the problem of finding the performance limits in image registration when multiple shifted and noisy observations are available. We derive and analyze the Cram\'er-Rao and Ziv-Zakai lower bounds under different statistical models for the underlying image. The accuracy of the derived bounds is experimentally assessed through a comparison to the maximum likelihood estimator. We show the existence of different behavior zones depending on the difficulty level of the problem, given by the SNR conditions of the input images. We find that increasing the number of images is only useful below a certain SNR threshold, above which the pairwise MLE estimation proves to be optimal. The analysis we present here brings further insight into the fundamental limitations of the multi-image alignment problem. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 51,712 |
2405.11914 | PT43D: A Probabilistic Transformer for Generating 3D Shapes from Single Highly-Ambiguous RGB Images | Generating 3D shapes from single RGB images is essential in various applications such as robotics. Current approaches typically target images containing clear and complete visual descriptions of the object, without considering common realistic cases in which the observed object is largely occluded or truncated. We thus propose a transformer-based autoregressive model to generate the probabilistic distribution of 3D shapes conditioned on an RGB image containing potentially highly ambiguous observations of the object. To handle realistic scenarios such as occlusion or field-of-view truncation, we create simulated image-to-shape training pairs that enable improved fine-tuning for real-world scenarios. We then adopt cross-attention to effectively identify the most relevant region of interest from the input image for shape generation. This enables inference of sampled shapes with reasonable diversity and strong alignment with the input image. We train and test our model on our synthetic data, then fine-tune and test it on real-world data. Experiments demonstrate that our model outperforms the state of the art in both scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,342 |
2308.06453 | Multi-Label Knowledge Distillation | Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning. However, these methods can hardly be extended to the multi-label learning scenario, where each instance is associated with multiple semantic labels, because the prediction probabilities do not sum to one and feature maps of the whole example may ignore minor classes in such a scenario. In this paper, we propose a novel multi-label knowledge distillation method. On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems; on the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings. Experimental results on multiple benchmark datasets validate that the proposed method can avoid knowledge counteraction among labels, thus achieving superior performance against diverse competing methods. Our code is available at: https://github.com/penghui-yang/L2D | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 385,145 |
2107.05319 | Human-like Relational Models for Activity Recognition in Video | Video activity recognition by deep neural networks is impressive for many classes. However, it falls short of human performance, especially for activities that are challenging to discriminate. Humans differentiate these complex activities by recognising critical spatio-temporal relations among explicitly recognised objects and parts, for example, an object entering the aperture of a container. Deep neural networks can struggle to learn such critical relationships effectively. We therefore propose a more human-like approach to activity recognition, which interprets a video in sequential temporal phases and extracts specific relationships among objects and hands in those phases. Random forest classifiers are learnt from these extracted relationships. We apply the method to a challenging subset of the something-something dataset and achieve more robust performance against neural network baselines on challenging activities. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 245,741 |
1606.05060 | Pruning Random Forests for Prediction on a Budget | We propose to prune a random forest (RF) for resource-constrained prediction. We first construct a RF and then prune it to optimize expected feature cost and accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 57,353 |
1611.02401 | Divide and Conquer Networks | We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,560 |
2111.02687 | CoreLM: Coreference-aware Language Model Fine-Tuning | Language Models are the underpin of all modern Natural Language Processing (NLP) tasks. The introduction of the Transformers architecture has contributed significantly to making Language Modeling very effective across many NLP tasks, leading to significant advancements in the field. However, Transformers come with a big computational cost, which grows quadratically with respect to the input length. This presents a challenge, as understanding long texts requires a lot of context. In this paper, we propose a Fine-Tuning framework, named CoreLM, that extends the architecture of current Pretrained Language Models so that they incorporate explicit entity information. By introducing entity representations, we make available information outside the contextual space of the model, which results in a better Language Model for a fraction of the computational cost. We implement our approach using GPT2 and compare the fine-tuned model to the original. Our proposed model achieves a lower Perplexity on the GUMBY and LAMBADA datasets when compared to GPT2 and a fine-tuned version of GPT2 without any changes. We also compare the models' performance in terms of Accuracy in LAMBADA and Children's Book Test, with and without the use of model-created coreference annotations. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 264,946 |
2502.04358 | Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM Primitives | Decomposing hard problems into subproblems often makes them easier and more efficient to solve. With large language models (LLMs) crossing critical reliability thresholds for a growing slate of capabilities, there is an increasing effort to decompose systems into sets of LLM-based agents, each of whom can be delegated sub-tasks. However, this decomposition (even when automated) is often intuitive, e.g., based on how a human might assign roles to members of a human team. How close are these role decompositions to optimal? This position paper argues that asymptotic analysis with LLM primitives is needed to reason about the efficiency of such decomposed systems, and that insights from such analysis will unlock opportunities for scaling them. By treating the LLM forward pass as the atomic unit of computational cost, one can separate out the (often opaque) inner workings of a particular LLM from the inherent efficiency of how a set of LLMs are orchestrated to solve hard problems. In other words, if we want to scale the deployment of LLMs to the limit, instead of anthropomorphizing LLMs, asymptotic analysis with LLM primitives should be used to reason about and develop more powerful decompositions of large problems into LLM agents. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | true | false | true | 531,102 |