Dataset schema (arXiv paper metadata with multi-label cs.* category flags):

  id                 string  (9–16 chars)
  title              string  (4–278 chars)
  abstract           string  (3–4.08k chars)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                     bool    (2 classes each; one flag per category)
  __index_level_0__  int64   (0–541k)

Rows:
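Given this schema, the per-row boolean columns can be collapsed into a readable label list. A minimal pandas sketch; the two inlined rows mirror entries from this dump, and the helper name is illustrative:

```python
import pandas as pd

LABELS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
          "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
          "cs.MA", "cs.NE", "cs.DB", "Other"]

def active_labels(row):
    # Collect the names of the boolean category columns that are True.
    return [c for c in LABELS if row[c]]

# Two rows mirroring entries in this dump (only the true flag is set).
rows = pd.DataFrame([
    {"id": "0806.2140", **{c: False for c in LABELS}, "cs.AI": True},
    {"id": "1808.01244", **{c: False for c in LABELS}, "cs.CV": True},
])
rows["labels"] = rows.apply(active_labels, axis=1)
print(rows[["id", "labels"]])
```

The same helper applies unchanged to the full dataset once loaded.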
0806.2140
Defaults and Normality in Causal Structures
A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright's NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version) of the NESS test is equivalent to the HP definition.
Labels: cs.AI | __index_level_0__: 1,917

2106.10352
Cross-hospital Sepsis Early Detection via Semi-supervised Optimal Transport with Self-paced Ensemble
Leveraging machine learning techniques for Sepsis early detection and diagnosis has attracted increasing interest in recent years. However, most existing methods require a large amount of labeled training data, which may not be available for a target hospital that deploys a new Sepsis detection system. More seriously, as patient populations vary across hospitals, directly applying a model trained on other hospitals may not achieve good performance for the target hospital. To address this issue, we propose a novel semi-supervised transfer learning framework based on optimal transport theory and self-paced ensemble for Sepsis early detection, called SPSSOT, which can efficiently transfer knowledge from the source hospital (with rich labeled data) to the target hospital (with scarce labeled data). Specifically, SPSSOT incorporates a new optimal transport-based semi-supervised domain adaptation component that can effectively exploit all the unlabeled data in the target hospital. Moreover, self-paced ensemble is adapted in SPSSOT to alleviate the class imbalance issue during transfer learning. In a nutshell, SPSSOT is an end-to-end transfer learning method that automatically selects suitable samples from the two domains (hospitals) and aligns their feature spaces. Extensive experiments on two open clinical datasets, MIMIC-III and Challenge, demonstrate that SPSSOT outperforms state-of-the-art transfer learning methods, improving AUC by 1-3%.
Labels: cs.LG, cs.CY | __index_level_0__: 241,983

1808.01244
CornerNet: Detecting Objects as Paired Keypoints
We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolutional neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors.
Labels: cs.CV | __index_level_0__: 104,537

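The corner-pooling operation described in the CornerNet abstract above can be sketched as a pair of directional running maxima. This is an illustrative 2D NumPy version, not the paper's per-channel network layer; the function name is mine:

```python
import numpy as np

def top_left_corner_pool(fmap: np.ndarray) -> np.ndarray:
    """Top-left corner pooling on a 2D map: at each location, take the running
    max toward the right along the row plus the running max toward the bottom
    along the column, so corner evidence accumulates from object interiors."""
    # Reverse cumulative max along width (looking rightward from each pixel).
    right_max = np.maximum.accumulate(fmap[:, ::-1], axis=1)[:, ::-1]
    # Reverse cumulative max along height (looking downward from each pixel).
    down_max = np.maximum.accumulate(fmap[::-1, :], axis=0)[::-1, :]
    return right_max + down_max

f = np.array([[0.0, 2.0],
              [1.0, 3.0]])
print(top_left_corner_pool(f))  # [[3. 5.] [4. 6.]]
```

Bottom-right corner pooling is the mirror image: forward cumulative maxima toward the left and the top.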
2309.01412
Finite/fixed-time Stabilization of Linear Systems with States Quantization
This paper develops a homogeneity-based approach to finite/fixed-time stabilization of linear time-invariant (LTI) systems with quantized measurements. A sufficient condition for finite/fixed-time stabilization of multi-input LTI systems under state quantization is derived. It is shown that a homogeneous quantized state feedback with a logarithmic quantizer can guarantee finite/fixed-time stability of the closed-loop system provided that the quantization is sufficiently dense. Theoretical results are supported with numerical simulations.
Labels: cs.SY | __index_level_0__: 389,678

2105.08131
A New Framework to Adopt Multidimensional Databases for Organizational Information System Strategies
As information becomes increasingly sizable, the challenge of organizing data remains for organizations to address. More importantly, analysing incoming data is a continual process, and existing procedures may not be adequate or efficient when attempting to access specific information for analysis. In these days of technological advancement, organizations can offer their customers extensive data resources to utilize, helping them accomplish individual objectives and maintain competitiveness; however, providing data in a format that serves each client's specific needs remains a challenge. For some, the complexity of a data model can be overwhelming. Furthermore, companies should secure an understanding of the purchasing power of specific consumer groups to remain competitive and to ease data analysis. This paper examines the use of multi-dimensional models within a business environment and how they may help customers and managers generate queries that return accurate and relevant data for effective analysis. It also provides a new framework, and defines its requirements, to aid various types of organisations with sizable database systems in creating their own multidimensional model from relational databases and presenting the data in multidimensional views. Despite the availability of such tools, the complexity of the underlying concepts discourages customers, who may become apprehensive about exploring these options for analytical purposes, even though this could be done by simply conducting a query.
Labels: cs.DB | __index_level_0__: 235,659

2006.03824
Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
Equilibrium Propagation (EP) is a biologically-inspired algorithm for convergent RNNs with a local learning rule that comes with strong theoretical guarantees. The parameter updates of the neural network during the credit assignment phase have been shown mathematically to approach the gradients provided by Backpropagation Through Time (BPTT) when the network is infinitesimally nudged toward its target. In practice, however, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize previous EP equations to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we are able to achieve a test error of 11.7% on CIFAR-10 by EP, which approaches the one achieved by BPTT and provides a major improvement with respect to the standard EP approach with same-sign nudging that gives 86% test error. We also apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error. These results highlight EP as a compelling biologically-plausible approach to compute error gradients in deep neural networks.
Labels: cs.NE | __index_level_0__: 180,448

2007.13475
Towards an ontology of HTTP interactions
Enterprise information systems have adopted Web-based foundations for exchanges between heterogeneous programs. These programs provide and consume, via Web APIs, resources identified by URIs, whose representations are transmitted via HTTP. Furthermore, HTTP remains at the heart of all Web developments (Semantic Web, linked data, IoT...). Thus, situations where a program must be able to reason about HTTP interactions (request-response) are multiplying. This requires an explicit formal specification of a shared conceptualization of those interactions. A proposal for an RDF vocabulary exists, developed with a view to carrying out web application conformity tests and recording the test outputs. This vocabulary has already been reused. In this paper we propose to adapt and extend it to make it more reusable.
Labels: cs.AI | __index_level_0__: 189,139

1611.07212
Recurrent Attention Models for Depth-Based Person Identification
We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.
Labels: cs.CV | __index_level_0__: 64,323

1907.02161
Seeing Under the Cover: A Physics Guided Learning Approach for In-Bed Pose Estimation
Human in-bed pose estimation has great practical value in medical and healthcare applications yet still mainly relies on expensive pressure mapping (PM) solutions. In this paper, we introduce our novel physics inspired vision-based approach that addresses the challenging issues associated with the in-bed pose estimation problem including monitoring a fully covered person in complete darkness. We reformulated this problem using our proposed Under the Cover Imaging via Thermal Diffusion (UCITD) method to capture the high resolution pose information of the body even when it is fully covered by using a long wavelength IR technique. We proposed a physical hyperparameter concept through which we achieved high quality groundtruth pose labels in different modalities. A fully annotated in-bed pose dataset called Simultaneously-collected multimodal Lying Pose (SLP) is also formed/released with the same order of magnitude as most existing large-scale human pose datasets to support complex models' training and evaluation. A network trained from scratch on it and tested on two diverse settings, one in a living room and the other in a hospital room, showed pose estimation performance of 99.5% and 95.7% in PCK0.2 standard, respectively. Moreover, in a multi-factor comparison with a state-of-the-art in-bed pose monitoring solution based on PM, our solution showed significant superiority in all practical aspects by being 60 times cheaper, 300 times smaller, while having higher pose recognition granularity and accuracy.
Labels: cs.CV | __index_level_0__: 137,541

2107.02306
Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity
Neural network pruning is a fruitful area of research with surging interest in high sparsity regimes. Benchmarking in this domain heavily relies on faithful representation of the sparsity of subnetworks, which has been traditionally computed as the fraction of removed connections (direct sparsity). This definition, however, fails to recognize unpruned parameters that are detached from input or output layers of underlying subnetworks, potentially underestimating actual effective sparsity: the fraction of inactivated connections. While this effect might be negligible for moderately pruned networks (up to 10-100x compression rates), we find that it plays an increasing role for thinner subnetworks, greatly distorting comparison between different pruning algorithms. For example, we show that effective compression of a randomly pruned LeNet-300-100 can be orders of magnitude larger than its direct counterpart, while no discrepancy is ever observed when using SynFlow for pruning [Tanaka et al., 2020]. In this work, we adopt the lens of effective sparsity to reevaluate several recent pruning algorithms on common benchmark architectures (e.g., LeNet-300-100, VGG-19, ResNet-18) and discover that their absolute and relative performance changes dramatically in this new and more appropriate framework. To aim for effective, rather than direct, sparsity, we develop a low-cost extension to most pruning algorithms. Further, equipped with effective sparsity as a reference frame, we partially reconfirm that random pruning with appropriate sparsity allocation across layers performs as well or better than more sophisticated algorithms for pruning at initialization [Su et al., 2020]. In response to this observation, using a simple analogy of pressure distribution in coupled cylinders from physics, we design novel layerwise sparsity quotas that outperform all existing baselines in the context of random pruning.
Labels: cs.LG, cs.CV | __index_level_0__: 244,761

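The direct-vs-effective sparsity distinction from the abstract above can be made concrete for a masked MLP: a surviving weight only counts as effectively active if its input neuron is reachable from the network input and its output neuron can still reach the network output. A NumPy sketch under those assumptions (the reachability rule and names are mine, not the paper's exact algorithm):

```python
import numpy as np

def sparsities(masks):
    """Given 0/1 weight masks for an MLP (each of shape (fan_in, fan_out)),
    return (direct_sparsity, effective_sparsity)."""
    # Forward pass: which neurons in each layer receive any signal.
    fwd = [np.ones(masks[0].shape[0])]
    for m in masks:
        fwd.append((m.T @ fwd[-1] > 0).astype(float))
    # Backward pass: which neurons in each layer can influence the output.
    bwd = [np.ones(masks[-1].shape[1])]
    for m in reversed(masks):
        bwd.append((m @ bwd[-1] > 0).astype(float))
    bwd = bwd[::-1]
    total = sum(m.size for m in masks)
    direct = sum(m.sum() for m in masks)
    # A weight is effective only if both endpoints lie on a live input-output path.
    effective = sum((m * np.outer(fwd[i], bwd[i + 1])).sum()
                    for i, m in enumerate(masks))
    return 1 - direct / total, 1 - effective / total

# Toy 2-2-1 net: the second hidden neuron keeps an outgoing weight but has no
# incoming connections, so that weight sits on a dead path.
m1 = np.array([[1.0, 0.0],
               [1.0, 0.0]])   # layer 0 -> 1
m2 = np.array([[1.0],
               [1.0]])        # layer 1 -> 2
print(sparsities([m1, m2]))   # direct 1/3, effective 1/2
```

The gap between the two numbers is exactly the "detached parameters" effect the abstract describes.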
2301.09042
The Shape of Explanations: A Topological Account of Rule-Based Explanations in Machine Learning
Rule-based explanations provide simple reasons explaining the behavior of machine learning classifiers at given points in the feature space. Several recent methods (Anchors, LORE, etc.) purport to generate rule-based explanations for arbitrary or black-box classifiers. But what makes these methods work in general? We introduce a topological framework for rule-based explanation methods and provide a characterization of explainability in terms of the definability of a classifier relative to an explanation scheme. We employ this framework to consider various explanation schemes and argue that the preferred scheme depends on how much the user knows about the domain and the probability measure over the feature space.
Labels: cs.LG | __index_level_0__: 341,383

2502.13172
Unveiling Privacy Risks in LLM Agent Memory
Large Language Model (LLM) agents have become increasingly prevalent across various real-world applications. They enhance decision-making by storing private user-agent interactions in the memory module for demonstrations, introducing new privacy risks for LLM agents. In this work, we systematically investigate the vulnerability of LLM agents to our proposed Memory EXTRaction Attack (MEXTRA) under a black-box setting. To extract private information from memory, we propose an effective attacking prompt design and an automated prompt generation method based on different levels of knowledge about the LLM agent. Experiments on two representative agents demonstrate the effectiveness of MEXTRA. Moreover, we explore key factors influencing memory leakage from both the agent's and the attacker's perspectives. Our findings highlight the urgent need for effective memory safeguards in LLM agent design and deployment.
Labels: cs.AI, cs.CR | __index_level_0__: 535,242

2501.09411
Towards Robust and Realistic Human Pose Estimation via WiFi Signals
Robust WiFi-based human pose estimation is a challenging task that bridges discrete and subtle WiFi signals to human skeletons. This paper revisits this problem and reveals two critical yet overlooked issues: 1) cross-domain gap, i.e., due to significant variations between source-target domain pose distributions; and 2) structural fidelity gap, i.e., predicted skeletal poses manifest distorted topology, usually with misplaced joints and disproportionate bone lengths. This paper fills these gaps by reformulating the task into a novel two-phase framework dubbed DT-Pose: Domain-consistent representation learning and Topology-constrained Pose decoding. Concretely, we first propose a temporal-consistent contrastive learning strategy with uniformity regularization, coupled with self-supervised masking-reconstruction operations, to enable robust learning of domain-consistent and motion-discriminative WiFi-specific representations. Beyond this, we introduce a simple yet effective pose decoder with task prompts, which integrates Graph Convolution Network (GCN) and Transformer layers to constrain the topology structure of the generated skeleton by exploring the adjacent-overarching relationships among human joints. Extensive experiments conducted on various benchmark datasets highlight the superior performance of our method in tackling these fundamental challenges in both 2D/3D human pose estimation tasks.
Labels: cs.CV | __index_level_0__: 525,134

2110.06559
Infinitely Divisible Noise in the Low Privacy Regime
Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit infinite divisibility of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for $\varepsilon$-differential privacy, where $\varepsilon > 1$ is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves $\varepsilon$-differential privacy and has expected error that decreases exponentially with $\varepsilon$.
Labels: cs.LG, cs.CR, Other | __index_level_0__: 260,669

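The Laplace decomposition that the abstract above builds on can be checked numerically: a Laplace(0, b) sample equals a Gamma(1, b) sample minus an independent Gamma(1, b) sample, and each Gamma(1, b) splits into n i.i.d. Gamma(1/n, b) shares, one per user. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, b, trials = 10, 1.0, 200_000

# Each user contributes an i.i.d. share X - Y with X, Y ~ Gamma(1/n, scale=b);
# by infinite divisibility the aggregate of the n shares is exactly Laplace(0, b).
shares = (rng.gamma(1.0 / n_users, b, size=(trials, n_users))
          - rng.gamma(1.0 / n_users, b, size=(trials, n_users)))
aggregate = shares.sum(axis=1)

print(aggregate.mean(), aggregate.var())  # ~0 and ~2*b**2 (Laplace variance)
```

The paper's contribution is an analogous decomposition for a noise distribution with better error than Laplace when epsilon is large.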
1811.00907
Importance of Search and Evaluation Strategies in Neural Dialogue Modeling
We investigate the impact of search strategies in neural dialogue modeling. We first compare two standard search algorithms, greedy and beam search, as well as our newly proposed iterative beam search which produces a more diverse set of candidate responses. We evaluate these strategies in realistic full conversations with humans and propose a model-based Bayesian calibration to address annotator bias. These conversations are analyzed using two automatic metrics: log-probabilities assigned by the model and utterance diversity. Our experiments reveal that better search algorithms lead to higher rated conversations. However, finding the optimal selection mechanism to choose from a more diverse set of candidates is still an open question.
Labels: cs.LG, cs.CL | __index_level_0__: 112,221

2308.13534
Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph
Conversational AI systems have emerged as key enablers of human-like interactions across diverse sectors. Nevertheless, the balance between linguistic nuance and factual accuracy has proven elusive. In this paper, we first introduce LLMXplorer, a comprehensive tool that provides an in-depth review of over 150 Large Language Models (LLMs), elucidating their myriad implications ranging from social and ethical to regulatory, as well as their applicability across industries. Building on this foundation, we propose a novel functional architecture that seamlessly integrates the structured dynamics of Knowledge Graphs with the linguistic capabilities of LLMs. Validated using real-world AI news data, our architecture adeptly blends linguistic sophistication with factual rigour and further strengthens data security through Role-Based Access Control. This research provides insights into the evolving landscape of conversational AI, emphasizing the imperative for systems that are efficient, transparent, and trustworthy.
Labels: cs.AI, cs.CL | __index_level_0__: 387,952

1902.11155
Generalized Karush-Kuhn-Tucker Conditions for Real Continuous Optimization Problems
Most existing work focuses on the generalization of KKT for nonsmooth convex optimization problems, but this paper explores a generalized form of Karush-Kuhn-Tucker (KKT) conditions for real continuous optimization problems.
Labels: cs.SY | __index_level_0__: 122,883

2206.09432
Object Localization Assistive System Based on CV and Vibrotactile Encoding
Intelligent assistive systems can guide blind people, but most of them give only non-intuitive cues or inefficient guidance. Based on computer vision and vibrotactile encoding, this paper presents an interactive system that provides blind people with intuitive spatial cognition. Different from the traditional auditory feedback strategy based on speech cues, this paper first introduces a vibration-encoded feedback method that leverages the haptic neural pathway and enables users to interact with objects rather than merely manipulating an assistance device. Based on this strategy, a wearable visual module built around an RGB-D camera is adopted for 3D spatial object localization, which contributes to accurate perception and quick object localization in the real environment. The experimental results on blind participants indicate that vibrotactile feedback reduces the task completion time by over 25% compared with the mainstream voice prompt feedback scheme. The proposed object localization system provides more intuitive spatial navigation and comfortable wearability for blindness assistance.
Labels: cs.RO | __index_level_0__: 303,571

2312.15516
A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization
The Stable Diffusion Model (SDM) is a popular and efficient text-to-image (t2i) and image-to-image (i2i) generation model. Although there have been attempts at reducing sampling steps, model distillation, and network quantization, these previous methods generally retain the original network architecture. Billion-scale parameters and high computing requirements make research on model architecture adjustment scarce. In this work, we first explore the computational redundancy of the network, then prune the redundant blocks of the model and maintain network performance through a progressive incubation strategy. Secondly, in order to maintain model performance, we add cross-layer multi-expert conditional convolution (CLME-Condconv) to the block-pruned part to inherit the original convolution parameters. Thirdly, we propose a global-regional interactive (GRI) attention to speed up the computationally intensive attention part. Finally, we use semantic-aware supervision (SAS) to align the outputs of the teacher and student models at the semantic level. Experiments show that the proposed method can effectively train a lightweight model close to the performance of the original SD model and effectively improve the model speed under limited resources. After acceleration, the UNet part of the model is 22% faster and the overall speed is 19% faster.
Labels: cs.CV | __index_level_0__: 418,034

2501.04227
Agent Laboratory: Using LLM Agents as Research Assistants
Historically, scientific discovery has been a lengthy and costly process, demanding substantial time and resources from initial conception to final results. To accelerate scientific discovery, reduce research costs, and improve research quality, we introduce Agent Laboratory, an autonomous LLM-based framework capable of completing the entire research process. This framework accepts a human-provided research idea and progresses through three stages (literature review, experimentation, and report writing) to produce comprehensive research outputs, including a code repository and a research report, while enabling users to provide feedback and guidance at each stage. We deploy Agent Laboratory with various state-of-the-art LLMs and invite multiple researchers to assess its quality by participating in a survey, providing human feedback to guide the research process, and then evaluating the final paper. We found that: (1) Agent Laboratory driven by o1-preview generates the best research outcomes; (2) The generated machine learning code is able to achieve state-of-the-art performance compared to existing methods; (3) Human involvement, providing feedback at each stage, significantly improves the overall quality of research; (4) Agent Laboratory significantly reduces research expenses, achieving an 84% decrease compared to previous autonomous research methods. We hope Agent Laboratory enables researchers to allocate more effort toward creative ideation rather than low-level coding and writing, ultimately accelerating scientific discovery.
Labels: cs.HC, cs.AI, cs.LG, cs.CL | __index_level_0__: 523,138

1903.04752
Occlusion-guided compact template learning for ensemble deep network-based pose-invariant face recognition
Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that only uses the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches that are used to construct the facial template and is more suitable for incorporating the information from different view angles for image-set based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set based face verification performance on a challenging database with a template size that is an order-of-magnitude smaller than DPRFS.
Labels: cs.CV | __index_level_0__: 124,040

1912.11234
Computation Reallocation for Object Detection
The allocation of computation resources in the backbone is a crucial issue in object detection. However, the allocation pattern designed for classification is usually adopted directly for the object detector, which proves to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search), which can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform the baselines by 1.9% and 1.7% COCO AP respectively without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks.
Labels: cs.LG, cs.CV | __index_level_0__: 158,518

1909.05084
Image Segmentation using Multi-Threshold technique by Histogram Sampling
The segmentation of digital images is one of the essential steps in image processing or a computer vision system. It helps in separating the pixels into different regions according to their intensity level. A large number of segmentation techniques have been proposed, and a few of them use complex computational operations. Among all, the most straightforward procedure that can be easily implemented is thresholding. In this paper, we present a unique heuristic approach for image segmentation that automatically determines multilevel thresholds by sampling the histogram of a digital image. Our approach emphasizes selecting valleys as the optimal threshold values. We demonstrated that our approach outperforms the popular Otsu's method in terms of CPU computational time, observing a maximum speed-up of 35.58x and a minimum speed-up of 10.21x on popular image processing benchmarks. To demonstrate the correctness of our approach in determining threshold values, we compute PSNR, SSIM, and FSIM values for comparison with the values obtained by Otsu's method. This evaluation shows that our approach is comparable to, and in many cases better than, the well-known Otsu's method.
Labels: cs.CV | __index_level_0__: 144,990

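The valley-selection idea from the abstract above can be sketched for the two-mode case: smooth the intensity histogram, find the two dominant peaks, and threshold at the lowest point between them. This is a simplified illustration; the smoothing scheme and parameters are mine, not the paper's histogram-sampling heuristic:

```python
import numpy as np

def valley_threshold(pixels, bins=256, smooth=11):
    """Pick a threshold at the histogram valley between the two dominant modes."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, bins))
    # Moving-average smoothing suppresses spurious local minima.
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    p1 = int(np.argmax(h))
    # Second peak: the maximum outside a dead zone around the first peak.
    masked = h.copy()
    masked[max(0, p1 - 2 * smooth):min(bins, p1 + 2 * smooth)] = -1
    p2 = int(np.argmax(masked))
    a, z = sorted((p1, p2))
    # The valley is the lowest smoothed bin between the two peaks.
    return a + int(np.argmin(h[a:z + 1]))

# Bimodal toy image: two intensity clusters around 60 and 180.
rng = np.random.default_rng(1)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 10, 5000)]), 0, 255)
print(valley_threshold(img))
```

For k > 2 modes the same valley search would run between each pair of adjacent peaks.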
2007.07054
Nonlinear Adaptive Cruise Control of Vehicular Platoons
The paper deals with the design of nonlinear adaptive cruise controllers for vehicular platoons operating on an open road or a ring-road. The constructed feedback controllers are nonlinear functions of the distance between successive vehicles and their speeds. It is shown that the proposed novel controllers guarantee safety (collision avoidance) and bounded vehicle speeds by explicitly characterizing the set of allowable inputs. Moreover, we guarantee global asymptotic stability of the platoon to a desired configuration as well as string stability. Certain macroscopic properties are also investigated. The efficiency of the nonlinear adaptive cruise controllers is demonstrated by means of a numerical example.
Labels: cs.SY | __index_level_0__: 187,210

1912.10068
Recommendations and User Agency: The Reachability of Collaboratively-Filtered Information
Recommender systems often rely on models which are trained to maximize accuracy in predicting user preferences. When the systems are deployed, these models determine the availability of content and information to different users. The gap between these objectives gives rise to a potential for unintended consequences, contributing to phenomena such as filter bubbles and polarization. In this work, we consider directly the information availability problem through the lens of user recourse. Using ideas of reachability, we propose a computationally efficient audit for top-$N$ linear recommender models. Furthermore, we describe the relationship between model complexity and the effort necessary for users to exert control over their recommendations. We use this insight to provide a novel perspective on the user cold-start problem. Finally, we demonstrate these concepts with an empirical investigation of a state-of-the-art model trained on a widely used movie ratings dataset.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
158,211
1704.07139
An Aposteriorical Clusterability Criterion for $k$-Means++ and Simplicity of Clustering
We define the notion of a well-clusterable data set combining the point of view of the objective of the $k$-means clustering algorithm (minimising the centric spread of data elements) and common sense (clusters shall be separated by gaps). We identify conditions under which the optimum of the $k$-means objective coincides with a clustering under which the data is separated by predefined gaps. We investigate two cases: when the whole clusters are separated by some gap and when only the cores of the clusters meet some separation condition. We overcome a major obstacle to using clusterability criteria: known approaches to clusterability checking have the disadvantage that they refer to the optimal clustering, which is NP-hard to identify. Compared to other approaches to clusterability, the novelty consists in the possibility of an a posteriori check (after running $k$-means) of whether the data set is well-clusterable or not. Since the $k$-means algorithm applied for this purpose has polynomial complexity, so does the corresponding check. Additionally, if $k$-means++ fails to identify a clustering that meets clusterability criteria, with high probability the data is not well-clusterable.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
72,303
2406.10366
Improving the Validity and Practical Usefulness of AI/ML Evaluations Using an Estimands Framework
Commonly, AI or machine learning (ML) models are evaluated on benchmark datasets. This practice supports innovative methodological research, but benchmark performance can be poorly correlated with performance in real-world applications -- a construct validity issue. To improve the validity and practical usefulness of evaluations, we propose using an estimands framework adapted from international clinical trials guidelines. This framework provides a systematic structure for inference and reporting in evaluations, emphasizing the importance of a well-defined estimation target. We illustrate our proposal on examples of commonly used evaluation methodologies - involving cross-validation, clustering evaluation, and LLM benchmarking - that can lead to incorrect rankings of competing models (rank reversals) with high probability, even when performance differences are large. We demonstrate how the estimands framework can help uncover underlying issues, their causes, and potential solutions. Ultimately, we believe this framework can improve the validity of evaluations through better-aligned inference, and help decision-makers and model users interpret reported results more effectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
464,374
2501.11347
EndoChat: Grounded Multimodal Large Language Model for Endoscopic Surgery
Recently, Multimodal Large Language Models (MLLMs) have demonstrated their immense potential in computer-aided diagnosis and decision-making. In the context of robotic-assisted surgery, MLLMs can serve as effective tools for surgical training and guidance. However, there is still a lack of MLLMs specialized for surgical scene understanding in clinical applications. In this work, we introduce EndoChat to address various dialogue paradigms and subtasks in surgical scene understanding that surgeons encounter. To train our EndoChat, we construct the Surg-396K dataset through a novel pipeline that systematically extracts surgical information and generates structured annotations based on collected large-scale endoscopic surgery datasets. Furthermore, we introduce a multi-scale visual token interaction mechanism and a visual contrast-based reasoning mechanism to enhance the model's representation learning and reasoning capabilities. Our model achieves state-of-the-art performance across five dialogue paradigms and eight surgical scene understanding tasks. Additionally, we conduct evaluations with professional surgeons, most of whom provide positive feedback on collaborating with EndoChat. Overall, these results demonstrate that our EndoChat has great potential to significantly advance training and automation in robotic-assisted surgery.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
525,893
1812.05477
Gaussian Process Deep Belief Networks: A Smooth Generative Model of Shape with Uncertainty Propagation
The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking. Being independent of appearance, it is possible to generalize to a large range of objects from only small amounts of data. However, shapes represented as silhouette images are challenging to model due to complicated likelihood functions leading to intractable posteriors. In this paper we present a generative model of shapes which provides a low dimensional latent encoding which importantly resides on a smooth manifold with respect to the silhouette images. The proposed model propagates uncertainty in a principled manner allowing it to learn from small amounts of data and providing predictions with associated uncertainty. We provide experiments that show how our proposed model provides favorable quantitative results compared with the state-of-the-art while simultaneously providing a representation that resides on a low-dimensional interpretable manifold.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
116,421
2401.06683
DQNC2S: DQN-based Cross-stream Crisis event Summarizer
Summarizing multiple disaster-relevant data streams simultaneously is particularly challenging as existing Retrieve&Re-ranking strategies suffer from the inherent redundancy of multi-stream data and limited scalability in a multi-query setting. This work proposes an online approach to crisis timeline generation based on weak annotation with Deep Q-Networks. It selects the relevant pieces of text on the fly, requiring neither human annotations nor content re-ranking. This makes the inference time independent of the number of input queries. The proposed approach also incorporates a redundancy filter into the reward function to effectively handle cross-stream content overlaps. The achieved ROUGE and BERTScore results are superior to those of the best-performing models on the CrisisFACTS 2022 benchmark.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
421,240
2111.06016
Synthetic Document Generator for Annotation-free Layout Recognition
Analyzing the layout of a document to identify headers, sections, tables, figures etc. is critical to understanding its content. Deep learning based approaches for detecting the layout structure of document images have been promising. However, these methods require a large number of annotated examples during training, which are both expensive and time consuming to obtain. We describe here a synthetic document generator that automatically produces realistic documents with labels for spatial positions, extents and categories of the layout elements. The proposed generative process treats every physical component of a document as a random variable and models their intrinsic dependencies using a Bayesian Network graph. Our hierarchical formulation using stochastic templates allows parameter sharing between documents to retain broad themes, yet the distributional characteristics produce visually unique samples, thereby capturing complex and diverse layouts. We empirically illustrate that a deep layout detection model trained purely on the synthetic documents can match the performance of a model that uses real documents.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
265,956
1811.02390
Local-Encoding-Preserving Secure Network Coding---Part II: Flexible Rate and Security Level
In the two-part paper, we consider the problem of secure network coding when the information rate and the security level can change over time. To efficiently solve this problem, we put forward local-encoding-preserving secure network coding, where a family of secure linear network codes (SLNCs) is called local-encoding-preserving (LEP) if all the SLNCs in this family share a common local encoding kernel at each intermediate node in the network. In this paper (Part II), we first consider the design of a family of LEP SLNCs for a fixed rate and a flexible security level. We present a novel and efficient approach that, building upon an existing SLNC, constructs an LEP SLNC with the same rate and the security level increased by one. Next, we consider the design of a family of LEP SLNCs for a fixed dimension (equal to the sum of rate and security level) and a flexible pair of rate and security level. We propose another novel approach for designing an SLNC such that the same SLNC can be applied for all the rate and security-level pairs with the fixed dimension. Also, two polynomial-time algorithms are developed for efficient implementations of our two approaches, respectively. Furthermore, we prove that both approaches do not incur any penalty on the required field size for the existence of SLNCs in terms of the best known lower bound by Guang and Yeung. Finally, we consider the ultimate problem of designing a family of LEP SLNCs that can be applied to all possible pairs of rate and security level. By combining the construction of a family of LEP SLNCs for a fixed security level and a flexible rate (obtained in Part I) with the constructions of the two families of LEP SLNCs in the current paper in suitable ways, we can obtain a family of LEP SLNCs that can be applied for all possible pairs of rate and security level. Three possible such constructions are presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
112,581
2009.00470
Data Anomaly Detection for Structural Health Monitoring of Bridges using Shapelet Transform
With the wider availability of sensor technology, a number of Structural Health Monitoring (SHM) systems are deployed to monitor civil infrastructure. The continuous monitoring provides valuable information about the structure that can help in providing a decision support system for retrofits and other structural modifications. However, when the sensors are exposed to harsh environmental conditions, the data measured by the SHM systems tend to be affected by multiple anomalies caused by faulty or broken sensors. Given a deluge of high-dimensional data collected continuously over time, research into using machine learning methods to detect anomalies is a topic of great interest to the SHM community. This paper contributes to this effort by proposing the use of a relatively new time series representation named Shapelet Transform in combination with a Random Forest classifier to autonomously identify anomalies in SHM data. The shapelet transform is a unique time series representation that is solely based on the shape of the time series data. In consideration of the individual characteristics unique to every anomaly, the application of this transform yields a new shape-based feature representation that can be combined with any standard machine learning algorithm to detect anomalous data with no manual intervention. For the present study, the anomaly detection framework consists of three steps: identifying unique shapes from anomalous data, using these shapes to transform the SHM data into a local-shape space and training machine learning algorithm on this transformed data to identify anomalies. The efficacy of this method is demonstrated by the identification of anomalies in acceleration data from an SHM system installed on a long-span bridge in China. The results show that multiple data anomalies in SHM data can be automatically detected with high accuracy using the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
194,053
2008.03937
Feature Ranking for Semi-supervised Learning
The data made available for analysis are becoming more and more complex along several directions: high dimensionality, number of examples and the amount of labels per example. This poses a variety of challenges for the existing machine learning methods: coping with datasets that contain a large number of examples described in a high-dimensional space, where not all examples have labels. For example, when investigating the toxicity of chemical compounds there are a lot of compounds available that can be described with information rich high-dimensional representations, but not all of the compounds have information on their toxicity. To address these challenges, we propose semi-supervised learning of feature ranking. The feature rankings are learned in the context of classification and regression as well as in the context of structured output prediction (multi-label classification, hierarchical multi-label classification and multi-target regression). To the best of our knowledge, this is the first work that treats the task of feature ranking within the semi-supervised structured output prediction context. More specifically, we propose two approaches that are based on tree ensembles and the Relief family of algorithms. The extensive evaluation across 38 benchmark datasets reveals the following: Random Forests perform best for the classification-like tasks, while Extra-PCTs perform best for the regression-like tasks; Random Forests are the most efficient method in terms of induction times across all tasks; and semi-supervised feature rankings outperform their supervised counterparts across a majority of the datasets from the different tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
191,076
2305.04835
How Do In-Context Examples Affect Compositional Generalization?
Compositional generalization--understanding unseen combinations of seen primitives--is an essential reasoning capability in human intelligence. The AI community mainly studies this capability by fine-tuning neural networks on lots of training samples, while it is still unclear whether and how in-context learning--the prevailing few-shot paradigm based on large language models--exhibits compositional generalization. In this paper, we present CoFe, a test suite to investigate in-context compositional generalization. We find that the compositional generalization performance can be easily affected by the selection of in-context examples, thus raising the research question of what key factors make good in-context examples for compositional generalization. We study three potential factors: similarity, diversity and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple. Furthermore, two strong limitations are observed: in-context compositional generalization on fictional words is much weaker than that on commonly used ones; it is still critical that the in-context examples should cover required linguistic structures, even though the backbone model has been pre-trained on a large corpus. We hope our analysis will facilitate the understanding and utilization of the in-context learning paradigm.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
362,920
2407.14133
I Know About "Up"! Enhancing Spatial Reasoning in Visual Language Models Through 3D Reconstruction
Visual Language Models (VLMs) are essential for various tasks, particularly visual reasoning tasks, due to their robust multi-modal information integration, visual reasoning capabilities, and contextual awareness. However, existing VLMs' visual spatial reasoning capabilities are often inadequate, struggling even with basic tasks such as distinguishing left from right. To address this, we propose the ZeroVLM model, designed to enhance the visual spatial reasoning abilities of VLMs. ZeroVLM employs Zero-1-to-3, a 3D reconstruction model for obtaining different views of the input images, and incorporates a prompting mechanism to further improve visual spatial reasoning. Experimental results on four visual spatial reasoning datasets show that our ZeroVLM achieves up to 19.48% accuracy improvement, which indicates the effectiveness of the 3D reconstruction and prompting mechanisms of our ZeroVLM.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
474,665
1303.6454
Partial Transfer Entropy on Rank Vectors
For the evaluation of information flow in bivariate time series, information measures have been employed, such as the transfer entropy (TE), the symbolic transfer entropy (STE), defined similarly to TE but on the ranks of the components of the reconstructed vectors, and the transfer entropy on rank vectors (TERV), similar to STE but forming the ranks for the future samples of the response system with regard to the current reconstructed vector. Here we extend TERV to multivariate time series, accounting for the presence of confounding variables; the resulting measure is called partial transfer entropy on ranks (PTERV). We investigate the asymptotic properties of PTERV, and also partial STE (PSTE), construct parametric significance tests under approximations with Gaussian and gamma null distributions, and show that the parametric tests cannot achieve the power of the randomization test using time-shifted surrogates. Using simulations on known coupled dynamical systems and applying parametric and randomization significance tests, we show that PTERV performs better than PSTE but worse than the partial transfer entropy (PTE). However, PTERV, unlike PTE, is robust to the presence of drifts in the time series and it is also not affected by the level of detrending.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
23,275
1709.07358
Non-Depth-First Search against Independent Distributions on an AND-OR Tree
Suzuki and Niida (Ann. Pure. Appl. Logic, 2015) showed the following results on independent distributions (IDs) on an AND-OR tree, where they took only depth-first algorithms into consideration. (1) Among IDs such that probability of the root having value 0 is fixed as a given r such that 0 < r < 1, if d is a maximizer of cost of the best algorithm then d is an independent and identical distribution (IID). (2) Among all IDs, if d is a maximizer of cost of the best algorithm then d is an IID. In the case where non-depth-first algorithms are taken into consideration, the counterparts of (1) and (2) are left open in the above work. Peng et al. (Inform. Process. Lett., 2017) extended (1) and (2) to multi-branching trees, where in (2) they put an additional hypothesis on IDs that probability of the root having value 0 is neither 0 nor 1. We give positive answers for the two questions of Suzuki-Niida. A key to the proof is that if ID d achieves the equilibrium among IDs then we can choose an algorithm of the best cost against d from among depth-first algorithms. In addition, we extend the result of Peng et al. to the case where non-depth-first algorithms are taken into consideration.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
81,264
2407.05377
Collective Innovation in Groups of Large Language Models
Human culture relies on collective innovation: our ability to continuously explore how existing elements in our environment can be combined to create new ones. Language is hypothesized to play a key role in human culture, driving individual cognitive capacities and shaping communication. Yet the majority of models of collective innovation assign no cognitive capacities or language abilities to agents. Here, we contribute a computational study of collective innovation where agents are Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans that, as we argue, captures useful aspects of innovation landscapes not present in previous test-beds. We, first, study an LLM in isolation and discover that it exhibits both useful skills and crucial limitations. We, then, study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance. In agreement with previous human and computational studies, we observe that groups with dynamic connectivity out-compete fully-connected groups. Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
470,951
2212.04928
P2T2: a Physically-primed deep-neural-network approach for robust $T_{2}$ distribution estimation from quantitative $T_{2}$-weighted MRI
Estimating $T_2$ relaxation time distributions from multi-echo $T_2$-weighted MRI ($T_2W$) data can provide valuable biomarkers for assessing inflammation, demyelination, edema, and cartilage composition in various pathologies, including neurodegenerative disorders, osteoarthritis, and tumors. Deep neural network (DNN) based methods have been proposed to address the complex inverse problem of estimating $T_2$ distributions from MRI data, but they are not yet robust enough for clinical data with low Signal-to-Noise ratio (SNR) and are highly sensitive to distribution shifts such as variations in echo-times (TE) used during acquisition. Consequently, their application is hindered in clinical practice and large-scale multi-institutional trials with heterogeneous acquisition protocols. We propose a physically-primed DNN approach, called $P_2T_2$, that incorporates the signal decay forward model in addition to the MRI signal into the DNN architecture to improve the accuracy and robustness of $T_2$ distribution estimation. We evaluated our $P_2T_2$ model in comparison to both DNN-based methods and classical methods for $T_2$ distribution estimation using 1D and 2D numerical simulations along with clinical data. Our model improved the baseline model's accuracy for low SNR levels ($SNR<80$) which are common in the clinical setting. Further, our model achieved a $\sim$35\% improvement in robustness against distribution shifts in the acquisition process compared to previously proposed DNN models. Finally, our $P_2T_2$ model produces the most detailed Myelin-Water fraction maps compared to baseline approaches when applied to real human MRI data. Our $P_2T_2$ model offers a reliable and precise means of estimating $T_2$ distributions from MRI data and shows promise for use in large-scale multi-institutional trials with heterogeneous acquisition protocols.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
335,623
cs/0008031
Bunsetsu Identification Using Category-Exclusive Rules
This paper describes two new bunsetsu identification methods using supervised learning. Since Japanese syntactic analysis is usually done after bunsetsu identification, bunsetsu identification is important for analyzing Japanese sentences. In experiments comparing the four previously available machine-learning methods (decision tree, maximum-entropy method, example-based approach and decision list) and two new methods using category-exclusive rules, the new method using the category-exclusive rules with the highest similarity performed best.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
537,198
1307.7138
Reconstruction of Network Coded Sources From Incomplete Datasets
In this paper, we investigate the problem of recovering source information from an incomplete set of network coded data. We first study the theoretical performance of such systems under maximum a posteriori (MAP) decoding and derive the upper bound on the probability of decoding error as a function of the system parameters. We also establish the sufficient conditions on the number of network coded symbols required to achieve decoding error probability below a certain level. We then propose a low complexity iterative decoding algorithm based on message passing for decoding the network coded data of a particular class of statistically dependent sources that present pairwise linear correlation. The algorithm operates on a graph that captures the network coding constraints, while the knowledge about the source correlation is directly incorporated in the messages exchanged over the graph. We test the proposed method on both synthetic data and correlated image sequences and demonstrate that the prior knowledge about the source correlation can be effectively exploited at the decoder in order to provide a good reconstruction of the transmitted data in cases where the network coded data available at the decoder is not sufficient for exact decoding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
26,069
1308.6683
Benchmarking Summarizability Processing in XML Warehouses with Complex Hierarchies
Business Intelligence plays an important role in decision making. Based on data warehouses and Online Analytical Processing, a business intelligence tool can be used to analyze complex data. Still, summarizability issues in data warehouses cause ineffective analyses that may become critical problems to businesses. To settle this issue, many researchers have studied and proposed various solutions, both in relational and XML data warehouses. However, they find difficulty in evaluating the performance of their proposals since the available benchmarks lack complex hierarchies. In order to contribute to summarizability analysis, this paper proposes an extension to the XML warehouse benchmark (XWeB) with complex hierarchies. The benchmark enables us to generate XML data warehouses with scalable complex hierarchies as well as summarizability processing. We experimentally demonstrated that complex hierarchies can definitely be included into a benchmark dataset, and that our benchmark is able to compare two alternative approaches dealing with summarizability issues.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
26,734
2308.09210
Efficient Algorithms for Attributed Graph Alignment with Vanishing Edge Correlation
Graph alignment refers to the task of finding the vertex correspondence between two correlated graphs of $n$ vertices. Extensive study has been done on polynomial-time algorithms for the graph alignment problem under the Erd\H{o}s-R\'enyi graph pair model, where the two graphs are Erd\H{o}s-R\'enyi graphs with edge probability $q_\mathrm{u}$, correlated under certain vertex correspondence. To achieve exact recovery of the correspondence, all existing algorithms at least require the edge correlation coefficient $\rho_\mathrm{u}$ between the two graphs to be \emph{non-vanishing} as $n\rightarrow\infty$. Moreover, it is conjectured that no polynomial-time algorithm can achieve exact recovery under vanishing edge correlation $\rho_\mathrm{u}<1/\mathrm{polylog}(n)$. In this paper, we show that with a vanishing amount of additional \emph{attribute information}, exact recovery is polynomial-time feasible under \emph{vanishing} edge correlation $\rho_\mathrm{u} \ge n^{-\Theta(1)}$. We identify a \emph{local} tree structure, which incorporates one layer of user information and one layer of attribute information, and apply the subgraph counting technique to such structures. A polynomial-time algorithm is proposed that recovers the vertex correspondence for most of the vertices, and then refines the output to achieve exact recovery. The consideration of attribute information is motivated by real-world applications like LinkedIn and Twitter, where user attributes like birthplace and education background can aid alignment.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
386,193
2406.16135
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, effectively being crosslingual? This study evaluates six state-of-the-art LLMs on inherently crosslingual tasks. We observe that while these models show promising surface-level crosslingual abilities on machine translation and embedding space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz) contexts. We observe that simple inference-time mitigation methods offer only limited improvement. On the other hand, we propose fine-tuning of LLMs on mixed-language data, which effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is publicly available at https://github.com/google-research/crosslingual-knowledge-barriers.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
467,010
2502.05938
Energy-Efficient Autonomous Aerial Navigation with Dynamic Vision Sensors: A Physics-Guided Neuromorphic Approach
Vision-based object tracking is a critical component for achieving autonomous aerial navigation, particularly for obstacle avoidance. Neuromorphic Dynamic Vision Sensors (DVS) or event cameras, inspired by biological vision, offer a promising alternative to conventional frame-based cameras. These cameras can detect changes in intensity asynchronously, even in challenging lighting conditions, with a high dynamic range and resistance to motion blur. Spiking neural networks (SNNs) are increasingly used to process these event-based signals efficiently and asynchronously. Meanwhile, physics-based artificial intelligence (AI) provides a means to incorporate system-level knowledge into neural networks via physical modeling. This enhances robustness, energy efficiency, and provides symbolic explainability. In this work, we present a neuromorphic navigation framework for autonomous drone navigation. The focus is on detecting and navigating through moving gates while avoiding collisions. We use event cameras for detecting moving objects through a shallow SNN architecture in an unsupervised manner. This is combined with a lightweight energy-aware physics-guided neural network (PgNN) trained with depth inputs to predict optimal flight times, generating near-minimum energy paths. The system is implemented in the Gazebo simulator and integrates a sensor-fused vision-to-planning neuro-symbolic framework built with the Robot Operating System (ROS) middleware. This work highlights the future potential of integrating event-based vision with physics-guided planning for energy-efficient autonomous navigation, particularly for low-latency decision-making.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
531,849
2310.08419
Jailbreaking Black Box Large Language Models in Twenty Queries
There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
399,384
2306.04839
Solving Novel Program Synthesis Problems with Genetic Programming using Parametric Polymorphism
Contemporary genetic programming (GP) systems for general program synthesis have been primarily concerned with evolving programs that can manipulate values from a standard set of primitive data types and simple indexed data structures. In contrast, human programmers do not limit themselves to a small finite set of data types and use polymorphism to express an unbounded number of types including nested data structures, product types, and generic functions. Code-building Genetic Programming (CBGP) is a recently introduced method that compiles type-safe programs from linear genomes using stack-based compilation and a formal type system. Although prior work with CBGP has shown initial demonstrations of polymorphism inside evolved programs, we provide a deeper exploration of these capabilities through the evolution of programs which make use of generic data types such as key-value maps, tuples, and sets, as well as higher-order functions and functions with polymorphic type signatures. In our experiments, CBGP is able to solve problems with all of these properties, where every other GP system that we know of has restrictions that make it unable to even consider problems with these properties. This demonstration provides a significant step towards fully aligning the expressiveness of GP to real-world programming.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
371,923
2111.08679
Automatically detecting anomalous exoplanet transits
Raw light curve data from exoplanet transits is too complex to naively apply traditional outlier detection methods. We propose an architecture which estimates a latent representation of both the main transit and residual deviations with a pair of variational autoencoders. We show, using two fabricated datasets, that our latent representations of anomalous transit residuals are significantly more amenable to outlier detection than raw data or the latent representation of a traditional variational autoencoder. We then apply our method to real exoplanet transit data. Our study is the first which automatically identifies anomalous exoplanet transit light curves. We additionally release three first-of-their-kind datasets to enable further research.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
266,776
2404.17807
Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors
Relation extraction (RE) is an important task that aims to identify the relationships between entities in texts. While large language models (LLMs) have revealed remarkable in-context learning (ICL) capability for general zero and few-shot learning, recent studies indicate that current LLMs still struggle with zero and few-shot RE. Previous studies are mainly dedicated to designing prompt formats and selecting good examples for improving ICL-based RE. Although both factors are vital for ICL, if one can fundamentally boost the ICL capability of LLMs in RE, the zero and few-shot RE performance via ICL would be significantly improved. To this end, we introduce \textsc{Micre} (\textbf{M}eta \textbf{I}n-\textbf{C}ontext learning of LLMs for \textbf{R}elation \textbf{E}xtraction), a new meta-training framework for zero and few-shot RE where an LLM is tuned to do ICL on a diverse collection of RE datasets (i.e., learning to learn in context for RE). Through meta-training, the model becomes more effective at learning a new RE task in context by conditioning on a few training examples with no parameter updates or task-specific templates at inference time, enabling better zero and few-shot task generalization. We experiment with \textsc{Micre} on various LLMs with different model scales and 12 public RE datasets, and then evaluate it on unseen RE benchmarks under zero and few-shot settings. \textsc{Micre} delivers comparable or superior performance compared to a range of baselines including supervised fine-tuning and typical in-context learning methods. We find that the gains are particularly significant for larger model scales, and that using a diverse set of meta-training RE datasets is key to the improvements. Empirically, we show that \textsc{Micre} can transfer relation semantic knowledge via the relation label name during inference on target RE datasets.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
450,009
2310.08582
Tree-Planner: Efficient Close-loop Task Planning with Large Language Models
This paper studies closed-loop task planning, which refers to the process of generating a sequence of skills (a plan) to accomplish a specific goal while adapting the plan based on real-time observations. Recently, prompting Large Language Models (LLMs) to generate actions iteratively has become a prevalent paradigm due to its superior performance and user-friendliness. However, this paradigm is plagued by two inefficiencies: high token consumption and redundant error correction, both of which hinder its scalability for large-scale testing and applications. To address these issues, we propose Tree-Planner, which reframes task planning with LLMs into three distinct phases: plan sampling, action tree construction, and grounded deciding. Tree-Planner starts by using an LLM to sample a set of potential plans before execution, followed by aggregating them to form an action tree. Finally, the LLM performs a top-down decision-making process on the tree, taking into account real-time environmental information. Experiments show that Tree-Planner achieves state-of-the-art performance while maintaining high efficiency. By decomposing LLM queries into a single plan-sampling call and multiple grounded-deciding calls, a considerable part of the prompt is less likely to be repeatedly consumed. As a result, token consumption is reduced by 92.2% compared to the previously best-performing model. Additionally, by enabling backtracking on the action tree as needed, the correction process becomes more flexible, leading to a 40.5% decrease in error corrections.
false
false
false
false
true
false
true
true
true
false
false
false
false
false
false
false
false
false
399,445
1804.07134
varrank: an R package for variable ranking based on mutual information with applications to observed systemic datasets
This article describes the R package varrank. It has a flexible implementation of heuristic approaches which perform variable ranking based on mutual information. The package is particularly suitable for exploring multivariate datasets requiring a holistic analysis. The core functionality is a general implementation of the minimum redundancy maximum relevance (mRMRe) model. This approach is based on information theory metrics. It is compatible with discrete and continuous data which are discretised using a large choice of possible rules. The two main problems that can be addressed by this package are the selection of the most representative variables for modeling a collection of variables of interest, i.e., dimension reduction, and variable ranking with respect to a set of variables of interest.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
95,462
2107.11911
Restless Bandits with Many Arms: Beating the Central Limit Theorem
We consider finite-horizon restless bandits with multiple pulls per period, which play an important role in recommender systems, active learning, revenue management, and many other areas. While an optimal policy can be computed, in principle, using dynamic programming, the computation required scales exponentially in the number of arms $N$. Thus, there is substantial value in understanding the performance of index policies and other policies that can be computed efficiently for large $N$. We study the growth of the optimality gap, i.e., the loss in expected performance compared to an optimal policy, for such policies in a classical asymptotic regime proposed by Whittle in which $N$ grows while holding constant the fraction of arms that can be pulled per period. Intuition from the Central Limit Theorem and the tightest previous theoretical bounds suggest that this optimality gap should grow like $O(\sqrt{N})$. Surprisingly, we show that it is possible to outperform this bound. We characterize a non-degeneracy condition and a wide class of novel practically-computable policies, called fluid-priority policies, in which the optimality gap is $O(1)$. These include most widely-used index policies. When this non-degeneracy condition does not hold, we show that fluid-priority policies nevertheless have an optimality gap that is $O(\sqrt{N})$, significantly generalizing the class of policies for which convergence rates are known. We demonstrate that fluid-priority policies offer state-of-the-art performance on a collection of restless bandit problems in numerical experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,736
0712.3329
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
1,062
2102.12960
Deep learning based electrical noise removal enables high spectral optoacoustic contrast in deep tissue
Image contrast in multispectral optoacoustic tomography (MSOT) can be severely reduced by electrical noise and interference in the acquired optoacoustic signals. Signal processing techniques have proven insufficient to remove the effects of electrical noise because they typically rely on simplified models and fail to capture complex characteristics of signal and noise. Moreover, they often involve time-consuming processing steps that are unsuited for real-time imaging applications. In this work, we develop and demonstrate a discriminative deep learning (DL) approach to separate electrical noise from optoacoustic signals prior to image reconstruction. The proposed DL algorithm is based on two key features. First, it learns spatiotemporal correlations in both noise and signal by using the entire optoacoustic sinogram as input. Second, it employs training based on a large dataset of experimentally acquired pure noise and synthetic optoacoustic signals. We validated the ability of the trained model to accurately remove electrical noise on synthetic data and on optoacoustic images of a phantom and the human breast. We demonstrate significant enhancements of morphological and spectral optoacoustic images reaching 19% higher blood vessel contrast and localized spectral contrast at depths of more than 2 cm for images acquired in vivo. We discuss how the proposed denoising framework is applicable to clinical multispectral optoacoustic tomography and suitable for real-time operation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
221,897
2306.07961
Differentiating Metropolis-Hastings to Optimize Intractable Densities
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers, allowing us to differentiate through probabilistic inference, even if the model has discrete components within it. Our approach fuses recent advances in stochastic automatic differentiation with traditional Markov chain coupling schemes, providing an unbiased and low-variance gradient estimator. This allows us to apply gradient-based optimization to objectives expressed as expectations over intractable target densities. We demonstrate our approach by finding an ambiguous observation in a Gaussian mixture model and by maximizing the specific heat in an Ising model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
373,220
2402.14547
OmniPred: Language Models as Universal Regressors
Regression is a powerful tool to accurately predict the outcome metric of a system given a set of parameters, but has traditionally been restricted to methods which are only applicable to a specific task. In this paper, we propose OmniPred, a framework for training language models as universal end-to-end regressors over $(x,y)$ data from arbitrary formats. Using data sourced from Google Vizier, one of the largest proprietary blackbox optimization databases in the world, our extensive experiments demonstrate that language models are capable of very precise numerical regression using only textual representations of mathematical parameters and values, and if given the opportunity to train at scale over multiple tasks, can significantly outperform traditional regression models.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
true
false
431,734
2003.02392
PointLoc: Deep Pose Regressor for LiDAR Point Cloud Localization
In this paper, we present a novel end-to-end learning-based LiDAR relocalization framework, termed PointLoc, which infers 6-DoF poses directly using only a single point cloud as input, without requiring a pre-built map. Compared to RGB image-based relocalization, LiDAR frames can provide rich and robust geometric information about a scene. However, LiDAR point clouds are unordered and unstructured, making it difficult to apply traditional deep learning regression models for this task. We address this issue by proposing a novel PointNet-style architecture with self-attention to efficiently estimate 6-DoF poses from 360{\deg} LiDAR input frames. Extensive experiments on the recently released, challenging Oxford Radar RobotCar dataset and real-world robot experiments demonstrate that the proposed method can achieve accurate relocalization performance.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
166,926
1901.01686
Ten ways to fool the masses with machine learning
If you want to tell people the truth, make them laugh, otherwise they'll kill you. (source unclear) Machine learning and deep learning are the technologies of the day for developing intelligent automatic systems. However, a key hurdle for progress in the field is the literature itself: we often encounter papers that report results that are difficult to reconstruct or reproduce, results that mis-represent the performance of the system, or contain other biases that limit their validity. In this semi-humorous article, we discuss issues that arise in running and reporting results of machine learning experiments. The purpose of the article is to provide a list of watch out points for researchers to be aware of when developing machine learning models or writing and reviewing machine learning papers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
118,033
2404.14942
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
Recommender systems have become an integral part of online services to help users locate specific information in a sea of data. However, existing studies show that some recommender systems are vulnerable to poisoning attacks, particularly those that involve learning schemes. A poisoning attack is where an adversary injects carefully crafted data into the process of training a model, with the goal of manipulating the system's final recommendations. Based on recent advancements in artificial intelligence, such attacks have gained importance recently. While numerous countermeasures to poisoning attacks have been developed, they have not yet been systematically linked to the properties of the attacks. Consequently, assessing the respective risks and potential success of mitigation strategies is difficult, if not impossible. This survey aims to fill this gap by primarily focusing on poisoning attacks and their countermeasures. This is in contrast to prior surveys that mainly focus on attacks and their detection methods. Through an exhaustive literature review, we provide a novel taxonomy for poisoning attacks, formalise its dimensions, and accordingly organise 30+ attacks described in the literature. Further, we review 40+ countermeasures to detect and/or prevent poisoning attacks, evaluating their effectiveness against specific types of attacks. This comprehensive survey should serve as a point of reference for protecting recommender systems against poisoning attacks. The article concludes with a discussion on open issues in the field and impactful directions for future research. A rich repository of resources associated with poisoning attacks is available at https://github.com/tamlhp/awesome-recsys-poisoning.
false
false
false
false
false
true
true
false
false
false
false
false
true
false
false
false
false
false
448,874
2110.01882
Simultaneous Information and Energy Transmission with Finite Constellations
In this paper, the fundamental limits on the rates at which information and energy can be simultaneously transmitted over an additive white Gaussian noise channel are studied under the following assumptions: $(a)$ the channel is memoryless; $(b)$ the number of channel input symbols (constellation size) and block length are finite; and $(c)$ the decoding error probability (DEP) and the energy outage probability (EOP) are bounded away from zero. In particular, it is shown that the limits on the maximum information and energy transmission rates; and the minimum DEP and EOP, are essentially set by the type induced by the code used to perform the transmission. That is, the empirical frequency with which each channel input symbol appears in the codewords. Using this observation, guidelines for optimal constellation design for simultaneous energy and information transmission are presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
258,930
1909.08970
RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation
Following navigation instructions in natural language requires a composition of language, action, and knowledge of the environment. Knowledge of the environment may be provided via visual sensors or as a symbolic world representation referred to as a map. Here we introduce the Realistic Urban Navigation (RUN) task, aimed at interpreting navigation instructions based on a real, dense, urban map. Using Amazon Mechanical Turk, we collected a dataset of 2515 instructions aligned with actual routes over three regions of Manhattan. We propose a strong baseline for the task and empirically investigate which aspects of the neural architecture are important for the RUN success. Our results empirically show that entity abstraction, attention over words and worlds, and a constantly updating world-state, significantly contribute to task accuracy.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
146,106
2111.08634
NVIDIA NeMo Neural Machine Translation Systems for English-German and English-Russian News and Biomedical Tasks at WMT21
This paper provides an overview of NVIDIA NeMo's neural machine translation systems for the constrained data track of the WMT21 News and Biomedical Shared Translation Tasks. Our news task submissions for English-German (En-De) and English-Russian (En-Ru) are built on top of a baseline transformer-based sequence-to-sequence model. Specifically, we use a combination of 1) checkpoint averaging 2) model scaling 3) data augmentation with backtranslation and knowledge distillation from right-to-left factorized models 4) finetuning on test sets from previous years 5) model ensembling 6) shallow fusion decoding with transformer language models and 7) noisy channel re-ranking. Additionally, our biomedical task submission for English-Russian uses a biomedically biased vocabulary and is trained from scratch on news task data, medically relevant text curated from the news task dataset, and biomedical data provided by the shared task. Our news system achieves a sacreBLEU score of 39.5 on the WMT'20 En-De test set outperforming the best submission from last year's task of 38.8. Our biomedical task Ru-En and En-Ru systems reach BLEU scores of 43.8 and 40.3 respectively on the WMT'20 Biomedical Task Test set, outperforming the previous year's best submissions.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
266,765
2305.14787
Polarimetric Imaging for Perception
Autonomous driving and advanced driver-assistance systems rely on a set of sensors and algorithms to perform the appropriate actions and provide alerts as a function of the driving scene. Typically, the sensors include color cameras, radar, lidar and ultrasonic sensors. Strikingly however, although light polarization is a fundamental property of light, it is seldom harnessed for perception tasks. In this work we analyze the potential for improvement in perception tasks when using an RGB-polarimetric camera, as compared to an RGB camera. We examine monocular depth estimation and free space detection during the middle of the day, when polarization is independent of subject heading, and show that a quantifiable improvement can be achieved for both of them using state-of-the-art deep neural networks, with a minimum of architectural changes. We also present a new dataset composed of RGB-polarimetric images, lidar scans, GNSS / IMU readings and free space segmentations that further supports developing perception algorithms that take advantage of light polarization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
367,257
2107.03315
Predicting with Confidence on Unseen Distributions
Recent work has shown that the performance of machine learning models can vary substantially when models are evaluated on data drawn from a distribution that is close to but different from the training distribution. As a result, predicting model performance on unseen distributions is an important challenge. Our work connects techniques from domain adaptation and predictive uncertainty literature, and allows us to predict model accuracy on challenging unseen distributions without access to labeled data. In the context of distribution shift, distributional distances are often used to adapt models and improve their performance on new domains, however accuracy estimation, or other forms of predictive uncertainty, are often neglected in these investigations. Through investigating a wide range of established distributional distances, such as Frechet distance or Maximum Mean Discrepancy, we determine that they fail to induce reliable estimates of performance under distribution shift. On the other hand, we find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts. We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference. DoC reduces predictive error by almost half ($46\%$) on several realistic and challenging distribution shifts, e.g., on the ImageNet-Vid-Robust and ImageNet-Rendition datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
245,121
1702.00500
AMR-to-text Generation with Synchronous Node Replacement Grammar
This paper addresses the task of AMR-to-text generation by leveraging synchronous node replacement grammar. During training, graph-to-string rules are learned using a heuristic extraction algorithm. At test time, a graph transducer is applied to collapse input AMRs and generate output sentences. Evaluated on SemEval-2016 Task 8, our method gives a BLEU score of 25.62, which is the best reported so far.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
67,661
1705.06830
Exploring the structure of a real-time, arbitrary neural artistic stylization network
In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to paintings previously unobserved. We demonstrate that the learned embedding space is smooth and contains a rich structure and organizes semantic information associated with paintings in an entirely unsupervised manner.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
73,686
2309.01207
Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions. Unsupervised Domain Adaptation (UDA) techniques have been proposed to adapt models trained in the source domain to the target domain. However, those methods require a large number of images from the target domain for model training. In this paper, we propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training. To accomplish this challenging task, first, a spectral sensitivity map is introduced to characterize the generalization weaknesses of models in the frequency domain. We then developed a Sensitivity-guided Spectral Adversarial MixUp (SAMix) method to generate target-style images that effectively suppress the model sensitivity, which leads to improved model generalizability in the target domain. We demonstrated the proposed method and rigorously evaluated its performance on multiple tasks using several public datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
389,593
1907.11158
Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER
Manually annotated corpora for low-resource languages are usually small in quantity (gold), or large but distantly supervised (silver). Inspired by recent progress in injecting pre-trained language models (LMs) into many Natural Language Processing (NLP) tasks, we propose to fine-tune pre-trained language models from high-resource languages to low-resource languages to improve performance in both scenarios. Our empirical experiments demonstrate significant improvement when fine-tuning a pre-trained language model in cross-lingual transfer scenarios for the small gold corpus, and competitive results on the large silver corpus compared to supervised cross-lingual transfer, which is useful when there is no parallel annotation in the same task to begin with. We compare our proposed method of cross-lingual transfer using a pre-trained LM to different sources of transfer, such as a mono-lingual LM and Part-of-Speech (POS) tagging, in the downstream task of NER on both the large silver and small gold datasets, by exploiting the character-level input of a bi-directional language model task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
139,789
2407.16907
Research on Education Big Data for Students Academic Performance Analysis based on Machine Learning
The application of the Internet in the field of education is becoming more and more popular, and a large amount of educational data is generated in the process. How to effectively use these data has always been a key issue in the field of educational data mining. In this work, a machine learning model based on Long Short-Term Memory Network (LSTM) was used to conduct an in-depth analysis of educational big data to evaluate student performance. The LSTM model efficiently processes time series data, allowing us to capture time-dependent and long-term trends in students' learning activities. This approach is particularly useful for analyzing student progress, engagement, and other behavioral patterns to support personalized education. In an experimental analysis, we verified the effectiveness of the deep learning method in predicting student performance by comparing the performance of different models. Strict cross-validation techniques are used to ensure the accuracy and generalization of experimental results.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
475,774
2412.14802
Stack Trace Deduplication: Faster, More Accurately, and in More Realistic Scenarios
In large-scale software systems, there are often no fully-fledged bug reports with human-written descriptions when an error occurs. In this case, developers rely on stack traces, i.e., series of function calls that led to the error. Since there can be tens and hundreds of thousands of them describing the same issue from different users, automatic deduplication into categories is necessary to allow for processing. Recent works have proposed powerful deep learning-based approaches for this, but they are evaluated and compared in isolation from real-life workflows, and it is not clear whether they will actually work well at scale. To overcome this gap, this work presents three main contributions: a novel model, an industry-based dataset, and a multi-faceted evaluation. Our model consists of two parts - (1) an embedding model with byte-pair encoding and approximate nearest neighbor search to quickly find the most relevant stack traces to the incoming one, and (2) a reranker that re-ranks the most fitting stack traces, taking into account the repeated frames between them. To complement the existing datasets collected from open-source projects, we share with the community SlowOps - a dataset of stack traces from IntelliJ-based products developed by JetBrains, which has an order of magnitude more stack traces per category. Finally, we carry out an evaluation that strives to be realistic: measuring not only the accuracy of categorization, but also the operation time and the ability to create new categories. The evaluation shows that our model strikes a good balance - it outperforms other models on both open-source datasets and SlowOps, while also being faster than most. We release all of our code and data, and hope that our work can pave the way to further practice-oriented research in the area.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
518,862
2407.10307
Distributed Charging Coordination for Electric Trucks under Limited Facilities and Travel Uncertainties
In this work, we address the problem of charging coordination between electric trucks and charging stations. The problem arises from the tension between the trucks' nontrivial charging times and the stations' limited charging facilities. Our goal is to reduce the trucks' waiting times at the stations while minimizing individual trucks' operational costs. We propose a distributed coordination framework that relies on computation and communication between the stations and the trucks, and handles uncertainties in travel times and energy consumption. Within the framework, the stations assign a limited number of charging ports to trucks according to the first-come, first-served rule. In addition, each station constructs a waiting time forecast model based on its historical data and provides its estimated waiting times to trucks upon request. When approaching a station, a truck sends its arrival time and estimated arrival-time windows to the nearby station and the distant stations, respectively. The truck then receives the estimated waiting times from these stations in response, and updates its charging plan accordingly while accounting for travel uncertainties. We performed simulation studies for $1,000$ trucks traversing the Swedish road network for $40$ days, using realistic traffic data with travel uncertainties. The results show that our method reduces the average waiting time of the trucks by $46.1\%$ compared to offline charging plans computed by the trucks without coordination and update, and by $33.8\%$ compared to the coordination scheme assuming zero waiting times at distant stations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
472,930
2008.01928
Component Divide-and-Conquer for Real-World Image Super-Resolution
In this paper, we present a large-scale Diverse Real-world image Super-Resolution dataset, i.e., DRealSR, as well as a divide-and-conquer Super-Resolution (SR) network, exploring the utility of guiding SR model with low-level image components. DRealSR establishes a new SR benchmark with diverse real-world degradation processes, mitigating the limitations of conventional simulated image degradation. In general, the targets of SR vary with image regions with different low-level image components, e.g., smoothness preserving for flat regions, sharpening for edges, and detail enhancing for textures. Learning an SR model with conventional pixel-wise loss usually is easily dominated by flat regions and edges, and fails to infer realistic details of complex textures. We propose a Component Divide-and-Conquer (CDC) model and a Gradient-Weighted (GW) loss for SR. Our CDC parses an image with three components, employs three Component-Attentive Blocks (CABs) to learn attentive masks and intermediate SR predictions with an intermediate supervision learning strategy, and trains an SR model following a divide-and-conquer learning principle. Our GW loss also provides a feasible way to balance the difficulties of image components for SR. Extensive experiments validate the superior performance of our CDC and the challenging aspects of our DRealSR dataset related to diverse real-world scenarios. Our dataset and codes are publicly available at https://github.com/xiezw5/Component-Divide-and-Conquer-for-Real-World-Image-Super-Resolution
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
190,471
2012.03600
Exploiting Intrinsic Kinematic Null Space for Supernumerary Robotic Limbs Control
Supernumerary robotic limbs (SRLs) have gained increasing interest in recent years for their applicability as healthcare and assistive technologies. These devices can either support or augment human sensorimotor capabilities, allowing users to complete tasks that are more complex than those feasible for their natural limbs. However, for a successful coordination between natural and artificial limbs, intuitiveness of interaction and perception of autonomy are key enabling features, especially for people suffering from motor disorders and impairments. The development of suitable human-robot interfaces is thus fundamental to foster the adoption of SRLs. With this work, we describe how to control an extra degree of freedom by taking advantage of what we defined the Intrinsic Kinematic Null Space, i.e. the redundancy of the human kinematic chain involved in the ongoing task. The obtained results demonstrate that the proposed control strategy is effective for performing complex tasks with a supernumerary robotic finger, and that practice improves users' control ability.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
210,176
1804.06248
PM-GANs: Discriminative Representation Learning for Action Recognition Using Partial-modalities
Data of different modalities generally convey complementary but heterogeneous information, and a more discriminative representation is often preferred by combining multiple data modalities like the RGB and infrared features. However, in reality, obtaining both data channels is challenging due to many limitations. For example, the RGB surveillance cameras are often restricted from private spaces, which is in conflict with the need of abnormal activity detection for personal security. As a result, using partial data channels to build a full representation of multi-modalities is clearly desired. In this paper, we propose a novel Partial-modal Generative Adversarial Networks (PM-GANs) that learns a full-modal representation using data from only partial modalities. The full representation is achieved by a generated representation in place of the missing data channel. Extensive experiments are conducted to verify the performance of our proposed method on action recognition, compared with four state-of-the-art methods. Meanwhile, a new Infrared-Visible Dataset for action recognition is introduced, and will be the first publicly available action dataset that contains paired infrared and visible spectrum.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
95,257
2006.13329
Bach or Mock? A Grading Function for Chorales in the Style of J.S. Bach
Deep generative systems that learn probabilistic models from a corpus of existing music do not explicitly encode knowledge of a musical style, compared to traditional rule-based systems. Thus, it can be difficult to determine whether deep models generate stylistically correct output without expert evaluation, but this is expensive and time-consuming. Therefore, there is a need for automatic, interpretable, and musically-motivated evaluation measures of generated music. In this paper, we introduce a grading function that evaluates four-part chorales in the style of J.S. Bach along important musical features. We use the grading function to evaluate the output of a Transformer model, and show that the function is both interpretable and outperforms human experts at discriminating Bach chorales from model-generated ones.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,868
2307.11620
Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization
Offline reinforcement learning (RL) has received considerable attention in recent years due to its attractive capability of learning policies from offline datasets without environmental interactions. Despite some success in the single-agent setting, offline multi-agent RL (MARL) remains a challenge. The large joint state-action space and the coupled multi-agent behaviors pose extra complexities for offline policy optimization. Most existing offline MARL studies simply apply offline data-related regularizations on individual agents, without fully considering the multi-agent system at the global level. In this work, we present OMIGA, a new offline multi-agent RL algorithm with implicit global-to-local value regularization. OMIGA provides a principled framework to convert global-level value regularization into equivalent implicit local value regularizations and simultaneously enables in-sample learning, thus elegantly bridging multi-agent value decomposition and policy learning with offline regularizations. Based on comprehensive experiments on the offline multi-agent MuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves superior performance over the state-of-the-art offline MARL methods in almost all tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
380,975
2007.05577
Vizarel: A System to Help Better Understand RL Agents
Visualization tools for supervised learning have allowed users to interpret, introspect, and gain intuition for the successes and failures of their models. While reinforcement learning practitioners ask many of the same questions, existing tools are not applicable to the RL setting. In this work, we describe our initial attempt at constructing a prototype of these ideas, through identifying possible features that such a system should encapsulate. Our design is motivated by envisioning the system to be a platform on which to experiment with interpretable reinforcement learning.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
186,717
2004.12585
A Batch Normalized Inference Network Keeps the KL Vanishing Away
Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining the amortized variational inference and deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerated local optimum known as "posterior collapse". Previous approaches consider the Kullback-Leibler divergence (KL) individually for each datapoint. We propose to let the KL follow a distribution across the whole dataset, and show that it is sufficient to prevent posterior collapse by keeping the expectation of the KL's distribution positive. Then we propose Batch Normalized-VAE (BN-VAE), a simple but effective approach to set a lower bound of the expectation by regularizing the distribution of the approximate posterior's parameters. Without introducing any new model component or modifying the objective, our approach can avoid the posterior collapse effectively and efficiently. We further show that the proposed BN-VAE can be extended to conditional VAE (CVAE). Empirically, our approach surpasses strong autoregressive baselines on language modeling, text classification and dialogue generation, and rivals more complex approaches while keeping almost the same training time as VAE.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
174,293
2310.18794
Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation
In this work, we propose sequence-level certainty as a common theme over hallucination in Knowledge Grounded Dialogue Generation (KGDG). We explore the correlation between the level of hallucination in model responses and two types of sequence-level certainty: probabilistic certainty and semantic certainty. Empirical results reveal that higher levels of both types of certainty in model responses are correlated with lower levels of hallucination. We further propose Certainty-based Response Ranking (CRR), a decoding-time hallucination mitigation method that samples several response candidates, ranks them based on sequence-level certainty, and outputs the response with the highest certainty level. Aligning with our definitions of sequence-level certainty, we design 2 types of CRR approaches: Probabilistic CRR (P-CRR) and Semantic CRR (S-CRR). P-CRR ranks individually sampled model responses using the arithmetic mean log-probability of the entire sequence. S-CRR approaches certainty estimation from meaning-space, and ranks model response candidates based on their semantic certainty level as measured by an entailment-based Agreement Score (AS). Through extensive experiments across 3 KGDG datasets, 3 decoding methods, and 4 KGDG models, we validate the effectiveness of CRR for reducing hallucination in KGDG task.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
403,704
1303.7103
Decentralized Eigenvalue Algorithms for Distributed Signal Detection in Cognitive Networks
In this paper we derive and analyze two algorithms -- referred to as decentralized power method (DPM) and decentralized Lanczos algorithm (DLA) -- for distributed computation of one (the largest) or multiple eigenvalues of a sample covariance matrix over a wireless network. The proposed algorithms, based on sequential average consensus steps for computations of matrix-vector products and inner vector products, are first shown to be equivalent to their centralized counterparts in the case of exact distributed consensus. Then, closed-form expressions of the error introduced by non-ideal consensus are derived for both algorithms. The error of the DPM is shown to vanish asymptotically under given conditions on the sequence of consensus errors. Finally, we consider applications to spectrum sensing in cognitive radio networks, and we show that virtually all eigenvalue-based tests proposed in the literature can be implemented in a distributed setting using either the DPM or the DLA. Simulation results are presented that validate the effectiveness of the proposed algorithms in conditions of practical interest (large-scale networks, small number of samples, and limited number of iterations).
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
23,321
1304.2103
High-Throughput Cooperative Communication with Interference Cancellation for Two-Path Relay in Multi-source System
Relay-based cooperative communication has become a research focus in recent years because it can achieve diversity gain in wireless networks. In existing works, network coding and two-path relay are adopted to deal with the increase of network size and the half-duplex nature of relay, respectively. To further improve bandwidth efficiency, we propose a novel cooperative transmission scheme which combines network coding and two-path relay together in a multi-source system. Due to the utilization of two-path relay, our proposed scheme achieves full-rate transmission. Adopting complex field network coding (CFNC) at both sources and relays ensures that symbols from different sources are allowed to be broadcast in the same time slot. We also adopt physical-layer network coding (PNC) at relay nodes to deal with the inter-relay interference caused by the two-path relay. With careful process design, the ideal throughput of our scheme reaches 1 symbol per source per time slot (sym/S/TS). Furthermore, the theoretical analysis provides a method to estimate the symbol error probability (SEP) and throughput in additive complex white Gaussian noise (AWGN) and Rayleigh fading channels. The simulation results verify the improvement achieved by the proposed scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
23,624
2012.12331
Real-Time Vehicular Wireless System-Level Simulation
Future automation and control units for advanced driver assistance systems (ADAS) will exchange sensor and kinematic data with nearby vehicles using wireless communication links to improve traffic safety. In this paper we present an accurate real-time system-level simulation for multi-vehicle communication scenarios to support the development and test of connected ADAS systems. The physical and data-link layer are abstracted and provide the frame error rate (FER) to a network simulator. The FER is strongly affected by the non-stationary doubly dispersive fading process of the vehicular radio communication channel. We use a geometry-based stochastic channel model (GSCM) to enable a simplified but still accurate representation of the non-stationary vehicular fading process. The propagation path parameters of the GSCM are used to efficiently compute the time-variant condensed radio channel parameters per stationarity region of each communication link during run-time. Five condensed radio channel parameters mainly determine the FER, forming a parameter vector: path loss, root mean square delay spread, Doppler bandwidth, $K$-factor, and line-of-sight Doppler shift. We measure the FER for a pre-defined set of discrete grid points of the parameter vector using a channel emulator and a given transmitter-receiver modem pair. The FER data is stored in a table and looked up during run-time of the real-time system-level simulation. We validate our methodology using empirical measurement data from a street-crossing scenario, demonstrating a close match in terms of FER between simulation and measurement.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
212,890
1803.01384
Data Curation with Deep Learning [Vision]
Data curation - the process of discovering, integrating, and cleaning data - is one of the oldest, hardest, yet inevitable data management problems. Despite decades of efforts from both researchers and practitioners, it is still one of the most time consuming and least enjoyable work of data scientists. In most organizations, data curation plays an important role so as to fully unlock the value of big data. Unfortunately, the current solutions are not keeping up with the ever-changing data ecosystem, because they often require substantially high human cost. Meanwhile, deep learning is making strides in achieving remarkable successes in multiple areas, such as image recognition, natural language processing, and speech recognition. In this vision paper, we explore how some of the fundamental innovations in deep learning could be leveraged to improve existing data curation solutions and to help build new ones. In particular, we provide a thorough overview of the current deep learning landscape, and identify interesting research opportunities and dispel common myths. We hope that the synthesis of these important domains will unleash a series of research activities that will lead to significantly improved solutions for many data curation tasks.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
91,865
2306.01789
Edit Distance based RL for RNNT decoding
RNN-T is currently considered the industry standard in ASR due to its exceptional WERs in various benchmark tests and its ability to support seamless streaming and longform transcription. However, its biggest drawback lies in the significant discrepancy between its training and inference objectives. During training, RNN-T maximizes all alignment probabilities by teacher forcing, while during inference, it uses beam search which may not necessarily find the maximum probable alignment. Additionally, RNN-T's inability to experience mistakes during teacher forcing training makes it more problematic when a mistake occurs in inference. To address this issue, this paper proposes a Reinforcement Learning method that minimizes the gap between training and inference time. Our Edit Distance based RL (EDRL) approach computes rewards based on the edit distance, and trains the network at every action level. The proposed approach yielded SoTA WERs on LibriSpeech for the 600M Conformer RNN-T model.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
370,608
1602.01731
Multi-Objective Framework for Dynamic Optimization of OFDMA Cellular Systems
Green cellular networking has become an important research area in recent years due to environmental and economical concerns. Switching off under-utilized BSs during off-peak traffic load conditions is a promising approach to reduce energy consumption in cellular networks. In practice, during initial cell planning, the BS locations and RAN parameters are optimized to meet the basic system design requirements like coverage, capacity, overlap, QoS etc. As these metrics are tightly coupled with each other due to co-channel interference, switching off certain BSs may affect the system requirements. Therefore, identifying a subset of a large number of BSs to be put into sleep mode is a challenging dynamic optimization problem. In this work, we develop a multi-objective framework for dynamic optimization of OFDMA-based cellular systems. The objective is to identify the appropriate set of active sectors and RAN parameters that maximize coverage and area spectral efficiency while minimizing overlap and area power consumption without violating the QoS requirements for a given traffic demand density. The objective functions and constraints are obtained using appropriate analytical models which capture the traffic characteristics, propagation characteristics (pathloss, shadowing, and small scale fading) as well as load condition in neighbouring cells. A low complexity evolutionary algorithm is used for identifying the global Pareto optimal solutions at a faster convergence rate. The inter-relationships between the system objectives are studied and guidelines are provided to find an appropriate network configuration that provides the best achievable trade-offs. The results show that using the proposed framework, a significant amount of energy saving can be achieved with low computational complexity while maintaining good trade-offs among the other objectives.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,745
1910.08343
Automatic Data Augmentation by Learning the Deterministic Policy
Aiming to produce sufficient and diverse training samples, data augmentation has been demonstrated for its effectiveness in training deep models. Since the criterion of the best augmentation is challenging to define, we in this paper present a novel learning-based augmentation method termed DeepAugNet, which formulates the final augmented data as a collection of several sequentially augmented subsets. Specifically, the current augmented subset is required to maximize the performance improvement compared with the last augmented subset by learning the deterministic augmentation policy using deep reinforcement learning. By introducing a unified optimization goal, DeepAugNet intends to combine the data augmentation and the deep model training in an end-to-end training manner which is realized by simultaneously training a hybrid architecture of dueling deep Q-learning algorithm and a surrogate deep model. We extensively evaluated our proposed DeepAugNet on various benchmark datasets including Fashion MNIST, CUB, CIFAR-100 and WebCaricature. Compared with the current state-of-the-art, our method can achieve a significant improvement in small-scale datasets, and a comparable performance in large-scale datasets. Code will be available soon.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
149,849
1910.01590
DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps
Generating interpretable visualizations from complex data is a common problem in many applications. Two key ingredients for tackling this issue are clustering and representation learning. However, current methods do not yet successfully combine the strengths of these two approaches. Existing representation learning models which rely on latent topological structure such as self-organising maps, exhibit markedly lower clustering performance compared to recent deep clustering methods. To close this performance gap, we (a) present a novel way to fit self-organizing maps with probabilistic cluster assignments (PSOM), (b) propose a new deep architecture for probabilistic clustering (DPSOM) using a VAE, and (c) extend our architecture for time-series clustering (T-DPSOM), which also allows forecasting in the latent space using LSTMs. We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs. On medical time series, we show that T-DPSOM outperforms baseline methods in time series clustering and time series forecasting, while providing interpretable visualizations of patient state trajectories and uncertainty estimation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
147,979
2305.14039
Learning a Single Convolutional Layer Model for Low Light Image Enhancement
Low-light image enhancement (LLIE) aims to improve the illuminance of images captured with insufficient light exposure. Recently, various lightweight learning-based LLIE methods have been proposed to handle the challenges of unfavorably low contrast, low brightness, etc. In this paper, we have streamlined the architecture of the network to the utmost degree. By utilizing the effective structural re-parameterization technique, a single convolutional layer model (SCLM) is proposed that provides global low-light enhancement as the coarsely enhanced results. In addition, we introduce a local adaptation module that learns a set of shared parameters to accomplish local illumination correction to address the issue of varied exposure levels in different image regions. Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art LLIE methods in both objective metrics and subjective visual effects. Additionally, our method has fewer parameters and lower inference complexity compared to other learning-based schemes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
366,824
1905.04771
Failure-Tolerant Connectivity Maintenance for Robot Swarms
Connectivity maintenance plays a key role in achieving a desired global behavior among a swarm of robots. However, connectivity maintenance in realistic environments is hampered by lack of computation resources, low communication bandwidth, robot failures, and unstable links. In this paper, we propose a novel decentralized connectivity-preserving algorithm that can be deployed on top of other behaviors to enforce connectivity constraints. The algorithm takes a set of targets to be reached while keeping a minimum number of redundant links between robots, with the goal of guaranteeing bandwidth and reliability. Robots then incrementally build and maintain a communication backbone with the specified number of links. We empirically study the performance of the algorithm, analyzing its time to convergence, as well as robustness to faults injected into the backbone robots. Our results statistically demonstrate the algorithm's ability to preserve the desired connectivity constraints and to reach the targets with up to 70 percent of individual robot failures in the communication backbone.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
130,560
2301.09077
Unleash the Potential of Image Branch for Cross-modal 3D Object Detection
To achieve reliable and precise scene understanding, autonomous vehicles typically incorporate multiple sensing modalities to capitalize on their complementary attributes. However, existing cross-modal 3D detectors do not fully utilize the image domain information to address the bottleneck issues of the LiDAR-based detectors. This paper presents a new cross-modal 3D object detector, namely UPIDet, which aims to unleash the potential of the image branch from two aspects. First, UPIDet introduces a new 2D auxiliary task called normalized local coordinate map estimation. This approach enables the learning of local spatial-aware features from the image modality to supplement sparse point clouds. Second, we discover that the representational capability of the point cloud backbone can be enhanced through the gradients backpropagated from the training objectives of the image branch, utilizing a succinct and effective point-to-pixel module. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we achieved the top rank in the highly competitive cyclist class of the KITTI benchmark at the time of submission. The source code is available at https://github.com/Eaphan/UPIDet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
341,396
2203.06714
A Survey on Deep Graph Generation: Methods and Applications
Graphs are ubiquitous in encoding relational information of real-world objects in many domains. Graph generation, whose purpose is to generate new graphs from a distribution similar to the observed graphs, has received increasing attention thanks to the recent advances of deep learning models. In this paper, we conduct a comprehensive review on the existing literature of deep graph generation from a variety of emerging methods to its wide application areas. Specifically, we first formulate the problem of deep graph generation and discuss its difference with several related graph learning tasks. Secondly, we divide the state-of-the-art methods into three categories based on model architectures and summarize their generation strategies. Thirdly, we introduce three key application areas of deep graph generation. Lastly, we highlight challenges and opportunities in the future study of deep graph generation. We hope that our survey will be useful for researchers and practitioners who are interested in this exciting and rapidly-developing field.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
285,205
1410.2960
Location Spoofing Detection for VANETs by a Single Base Station in Rician Fading Channels
In this work we examine the performance of a Location Spoofing Detection System (LSDS) for vehicular networks in the realistic setting of Rician fading channels. In the LSDS, an authorized Base Station (BS) equipped with multiple antennas utilizes channel observations to identify a malicious vehicle, also equipped with multiple antennas, that is spoofing its location. After deriving the optimal transmit power and the optimal directional beamformer of a potentially malicious vehicle, robust theoretical analysis and detailed simulations are conducted in order to determine the impact of key system parameters on the LSDS performance. Our analysis shows how LSDS performance increases as the Rician K-factor of the channel between the BS and legitimate vehicles increases, or as the number of antennas at the BS or legitimate vehicle increases. We also obtain the counter-intuitive result that the malicious vehicle's optimal number of antennas conditioned on its optimal directional beamformer is equal to the legitimate vehicle's number of antennas. The results we provide here are important for the verification of location information reported in IEEE 1609.2 safety messages.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
36,668
1810.10096
Learning Representations in Model-Free Hierarchical Reinforcement Learning
Common approaches to Reinforcement Learning (RL) are seriously challenged by large-scale applications involving huge state spaces and sparse delayed reward feedback. Hierarchical Reinforcement Learning (HRL) methods attempt to address this scalability issue by learning action selection policies at multiple levels of temporal abstraction. Abstraction can be had by identifying a relatively small set of states that are likely to be useful as subgoals, in concert with the learning of corresponding skill policies to achieve those subgoals. Many approaches to subgoal discovery in HRL depend on the analysis of a model of the environment, but the need to learn such a model introduces its own problems of scale. Once subgoals are identified, skills may be learned through intrinsic motivation, introducing an internal reward signal marking subgoal attainment. In this paper, we present a novel model-free method for subgoal discovery using incremental unsupervised learning over a small memory of the most recent experiences (trajectories) of the agent. When combined with an intrinsic motivation learning mechanism, this method learns both subgoals and skills, based on experiences in the environment. Thus, we offer an original approach to HRL that does not require the acquisition of a model of the environment, suitable for large-scale applications. We demonstrate the efficiency of our method on two RL problems with sparse delayed feedback: a variant of the rooms environment and the first screen of the ATARI 2600 Montezuma's Revenge game.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
111,195
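The "incremental unsupervised learning over a small memory of recent experiences" in the abstract above can be sketched as online k-means over visited states, with centroids serving as candidate subgoals. This is a minimal illustration under my own assumptions (class name, cluster count, toy 2-D states), not the paper's actual algorithm.

```python
from collections import deque
import random

class SubgoalDiscovery:
    """Incremental k-means over a small memory of recent agent states.

    Centroids serve as candidate subgoals; per-centroid counts give the
    learning rate so each centroid is a running mean of its cluster.
    The bounded memory could support periodic re-clustering; here each
    observation updates the nearest centroid directly.
    """
    def __init__(self, n_subgoals, memory_size=100):
        self.memory = deque(maxlen=memory_size)
        self.n = n_subgoals
        self.centroids = []
        self.counts = []

    def observe(self, state):
        self.memory.append(state)
        if len(self.centroids) < self.n:          # seed centroids first
            self.centroids.append(list(state))
            self.counts.append(1)
            return
        # assign to the nearest centroid, then move it toward the state
        j = min(range(self.n), key=lambda i: sum(
            (a - b) ** 2 for a, b in zip(self.centroids[i], state)))
        self.counts[j] += 1
        lr = 1.0 / self.counts[j]
        self.centroids[j] = [c + lr * (s - c)
                             for c, s in zip(self.centroids[j], state)]

rng = random.Random(1)
sd = SubgoalDiscovery(n_subgoals=2)
centers = [(0.0, 0.0), (10.0, 10.0)]  # two "doorway-like" state clusters
for i in range(500):
    cx, cy = centers[i % 2]
    sd.observe((cx + rng.gauss(0, 0.5), cy + rng.gauss(0, 0.5)))
print([tuple(round(c, 1) for c in cen) for cen in sd.centroids])
```

The recovered centroids land near the two cluster centers, i.e. near the "doorway" states an HRL agent would treat as subgoals for intrinsic reward.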
2408.07791
An Efficient and Explanatory Image and Text Clustering System with Multimodal Autoencoder Architecture
We demonstrate the efficiencies and explanatory abilities of extensions to the common tools of Autoencoders and LLM interpreters, in the novel context of comparing different cultural approaches to the same international news event. We develop a new Convolutional-Recurrent Variational Autoencoder (CRVAE) model that extends the modalities of previous CVAE models, by using fully-connected latent layers to embed in parallel the CNN encodings of video frames, together with the LSTM encodings of their related text derived from audio. We incorporate the model within a larger system that includes frame-caption alignment, latent space vector clustering, and a novel LLM-based cluster interpreter. We measure, tune, and apply this system to the task of summarizing a video into three to five thematic clusters, with each theme described by ten LLM-produced phrases. We apply this system to two news topics, COVID-19 and the Winter Olympics, and five other topics are in progress.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
480,721
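The parallel embedding of the two modalities described in the abstract above (CNN frame encodings alongside LSTM text encodings, fused through fully-connected latent layers) can be sketched in plain Python. All sizes, names, and the random weights are illustrative assumptions; a real CRVAE would use trained convolutional and recurrent encoders.

```python
import math
import random

rng = random.Random(0)

def linear(x, w, b):
    """Dense layer: y = W x + b (w is a list of weight rows)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def fuse_to_latent(frame_enc, text_enc, w_mu, b_mu, w_lv, b_lv):
    """Embed two modality encodings in parallel into one latent vector.

    Concatenation + fully-connected layers produce the mean and
    log-variance of a diagonal Gaussian; sampling uses the usual VAE
    reparameterization trick z = mu + exp(logvar / 2) * eps.
    """
    joint = list(frame_enc) + list(text_enc)
    mu = linear(joint, w_mu, b_mu)
    logvar = linear(joint, w_lv, b_lv)
    z = [m + math.exp(lv / 2) * rng.gauss(0, 1)
         for m, lv in zip(mu, logvar)]
    return z, mu, logvar

# toy sizes: 4-dim frame encoding, 3-dim text encoding, 2-dim latent
frame_enc = [0.1, -0.2, 0.3, 0.0]
text_enc = [0.5, 0.5, -0.1]
dim_in, dim_z = len(frame_enc) + len(text_enc), 2
w_mu = [[rng.uniform(-0.1, 0.1) for _ in range(dim_in)] for _ in range(dim_z)]
w_lv = [[rng.uniform(-0.1, 0.1) for _ in range(dim_in)] for _ in range(dim_z)]
z, mu, logvar = fuse_to_latent(frame_enc, text_enc, w_mu, [0.0] * dim_z,
                               w_lv, [0.0] * dim_z)
print(len(z))  # latent vector has dim_z entries
```

Downstream, such latent vectors are what the paper's system clusters and then hands to an LLM-based interpreter for thematic labeling.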
1606.03556
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
We conduct large-scale studies on `human attention' in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Overall, our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
57,108
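The quantitative evaluation in the abstract above compares model and human attention maps via rank-order correlation. A minimal self-contained version (Spearman correlation computed as Pearson correlation of average ranks, with the toy 2x2 maps being my own example data) looks like this:

```python
def ranks(xs):
    """Average ranks (1-based), handling ties by averaging."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# toy 2x2 attention maps, flattened: model vs. human
model_map = [0.1, 0.4, 0.3, 0.2]
human_map = [0.0, 0.9, 0.05, 0.05]
print(round(spearman(model_map, human_map), 3))  # → 0.949
```

A correlation near 1 would mean the model attends where humans do; the paper's finding is that current VQA attention models fall well short of that.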
2312.12972
From Past to Future: Rethinking Eligibility Traces
In this paper, we introduce a fresh perspective on the challenges of credit assignment and policy evaluation. First, we delve into the nuances of eligibility traces and explore instances where their updates may result in unexpected credit assignment to preceding states. From this investigation emerges the concept of a novel value function, which we refer to as the \emph{bidirectional value function}. Unlike traditional state value functions, bidirectional value functions account for both future expected returns (rewards anticipated from the current state onward) and past expected returns (cumulative rewards from the episode's start to the present). We derive principled update equations to learn this value function and, through experimentation, demonstrate its efficacy in enhancing the process of policy evaluation. In particular, our results indicate that the proposed learning approach can, in certain challenging contexts, perform policy evaluation more rapidly than TD($\lambda$) -- a method that learns forward value functions, $v^\pi$, \emph{directly}. Overall, our findings present a new perspective on eligibility traces and potential advantages associated with the novel value function it inspires, especially for policy evaluation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
417,170
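The bidirectional value function described in the abstract above combines a future expected return with a past expected return. A Monte Carlo sketch on a single trajectory is below; the discounted form of the backward return is one plausible choice of mine, and the paper's exact definition and update equations may differ.

```python
def forward_returns(rewards, gamma):
    """Standard discounted return G_t = r_t + gamma * r_{t+1} + ..."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

def backward_returns(rewards, gamma):
    """Cumulative reward from the episode's start up to (not including)
    step t, discounted toward the present -- one plausible form of the
    'past expected return'."""
    b, out = 0.0, []
    for r in rewards:
        out.append(b)
        b = gamma * b + r
    return out

rewards, gamma = [0.0, 1.0, 0.0, 2.0], 0.9
fwd = forward_returns(rewards, gamma)
bwd = backward_returns(rewards, gamma)
bidir = [f + b for f, b in zip(fwd, bwd)]  # bidirectional target per step
print([round(v, 3) for v in fwd])  # → [2.358, 2.62, 1.8, 2.0]
print([round(v, 3) for v in bwd])  # → [0.0, 0.0, 1.0, 0.9]
```

Learning the bidirectional quantity lets credit flow both from future rewards (as in TD(lambda)) and from already-collected past rewards, which is the intuition behind the faster policy evaluation the paper reports.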
2211.11759
Learning Cooperative Oversubscription for Cloud by Chance-Constrained Multi-Agent Reinforcement Learning
Oversubscription is a common practice for improving cloud resource utilization. It allows the cloud service provider to sell more resources than the physical limit, assuming not all users would fully utilize the resources simultaneously. However, how to design an oversubscription policy that improves utilization while satisfying some safety constraints remains an open problem. Existing methods and industrial practices are over-conservative, ignoring the coordination of diverse resource usage patterns and probabilistic constraints. To address these two limitations, this paper formulates oversubscription for cloud as a chance-constrained optimization problem and proposes an effective Chance Constrained Multi-Agent Reinforcement Learning (C2MARL) method to solve this problem. Specifically, C2MARL reduces the number of constraints by considering their upper bounds and leverages a multi-agent reinforcement learning paradigm to learn a safe and optimal coordination policy. We evaluate our C2MARL on an internal cloud platform and public cloud datasets. Experiments show that our C2MARL outperforms existing methods in improving utilization ($20\%\sim 86\%$) under different levels of safety constraints.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
331,868
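The abstract above says C2MARL "reduces the number of constraints by considering their upper bounds." One standard way to do this, shown here purely as an illustration and not necessarily the paper's exact bound, is Boole's inequality: the probability of any violation is at most the sum of the per-step violation probabilities, so a whole family of chance constraints collapses into one bound on the sum.

```python
def joint_constraint_satisfied(per_step_violation_probs, epsilon):
    """Conservative check of a joint chance constraint via Boole's
    inequality: P(any violation) <= sum of per-step probabilities.
    Enforcing the single summed upper bound therefore implies the
    original family of per-step chance constraints jointly.
    """
    return sum(per_step_violation_probs) <= epsilon

# fifty per-step constraints collapse into one bound on the sum
probs = [0.001] * 50
print(joint_constraint_satisfied(probs, epsilon=0.1))   # 0.05 <= 0.1  → True
print(joint_constraint_satisfied(probs, epsilon=0.01))  # 0.05 > 0.01 → False
```

The price of this reduction is conservatism: the union bound over-counts overlapping violation events, which is exactly the trade-off a learned policy must navigate under tight safety levels.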
2402.02622
DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging
The transformer architecture by Vaswani et al. (2017) is now ubiquitous across application domains, from natural language processing to speech processing and image understanding. We propose DenseFormer, a simple modification to the standard architecture that improves the perplexity of the model without increasing its size -- adding a few thousand parameters for large-scale models in the 100B-parameter range. Our approach relies on an additional averaging step after each transformer block, which computes a weighted average of current and past representations -- we refer to this operation as Depth-Weighted-Average (DWA). The learned DWA weights exhibit coherent patterns of information flow, revealing the strong and structured reuse of activations from distant layers. Experiments demonstrate that DenseFormer is more data efficient, reaching the same perplexity as much deeper transformer models, and that for the same perplexity, these new models outperform transformer baselines in terms of memory efficiency and inference time.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
426,650
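The Depth-Weighted-Average step in the abstract above is easy to sketch: after each block, replace its output with a weighted average of the embedded input and all block outputs so far. In DenseFormer the weights are learned per depth; the uniform weights, placeholder block, and toy vectors below are my own simplifications.

```python
def depth_weighted_average(representations, weights):
    """DWA after block i: weighted average of the embedded input and
    the outputs of blocks 1..i. Each representation is a vector."""
    assert len(representations) == len(weights)
    dim = len(representations[0])
    return [sum(w * rep[d] for w, rep in zip(weights, representations))
            for d in range(dim)]

def block(x):
    """Placeholder standing in for a full transformer block."""
    return [v + 1.0 for v in x]

x0 = [0.0, 0.0]                        # embedded input
history = [x0]
for i in range(3):                     # three "transformer blocks"
    out = block(history[-1])
    history.append(out)
    # learned weights would differ per depth; uniform here for the sketch
    w = [1.0 / len(history)] * len(history)
    history[-1] = depth_weighted_average(history, w)
print(history[-1])
```

Because every DWA reads from the full history, activations from distant layers stay directly reachable, which is the information-flow reuse the learned weights reveal in the paper.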
2305.06936
An Option-Dependent Analysis of Regret Minimization Algorithms in Finite-Horizon Semi-Markov Decision Processes
A large variety of real-world Reinforcement Learning (RL) tasks is characterized by a complex and heterogeneous structure that makes end-to-end (or flat) approaches hardly applicable or even infeasible. Hierarchical Reinforcement Learning (HRL) provides general solutions to address these problems thanks to a convenient multi-level decomposition of the tasks, making their solution accessible. Although often used in practice, few works provide theoretical guarantees to justify this outcome effectively. Thus, it is not yet clear when to prefer such approaches compared to standard flat ones. In this work, we provide an option-dependent upper bound to the regret suffered by regret minimization algorithms in finite-horizon problems. We illustrate that the performance improvement derives from the planning horizon reduction induced by the temporal abstraction enforced by the hierarchical structure. Then, focusing on a sub-setting of HRL approaches, the options framework, we highlight how the average duration of the available options affects the planning horizon and, consequently, the regret itself. Finally, we relax the assumption of having pre-trained options to show how, in particular situations, learning hierarchically from scratch could be preferable to using a standard approach.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
363,714