Dataset schema (one record per paper; fields in order):

- id: string, 9 to 16 characters (arXiv identifier)
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- category flags, each bool (2 classes): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k

Each record below lists the id, title, and abstract, then a "labels:" line naming the category flags set to true, and finally the record's __index_level_0__ value.

2312.15241
Measuring Value Alignment
As artificial intelligence (AI) systems become increasingly integrated into various domains, ensuring that they align with human values becomes critical. This paper introduces a novel formalism to quantify the alignment between AI systems and human values, using Markov Decision Processes (MDPs) as the foundational model. We delve into the concept of values as desirable goals tied to actions and norms as behavioral guidelines, aiming to shed light on how they can be used to guide AI decisions. This framework offers a mechanism to evaluate the degree of alignment between norms and values by assessing preference changes across state transitions in a normative world. By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values. The proposed methodology holds potential for a wide range of applications, from recommendation systems emphasizing well-being to autonomous vehicles prioritizing safety.
labels: cs.AI, cs.IR
__index_level_0__: 417,935

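A minimal sketch of the alignment idea in the abstract above, assuming a toy 3-state, 2-action MDP, a hypothetical per-state preference score standing in for a "value", and a hypothetical norm forbidding one action in one state; the paper's formalism is richer than this:

    import numpy as np

    # Toy world: 3 states, 2 actions; P[a][s, s'] are transition probabilities.
    P = np.array([
        [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3], [0.0, 0.2, 0.8]],  # action 0
        [[0.5, 0.5, 0.0], [0.0, 0.3, 0.7], [0.0, 0.1, 0.9]],  # action 1
    ])
    value_pref = np.array([0.0, 0.5, 1.0])  # hypothetical "value": preference per state

    def norm_ok(state, action):
        # Hypothetical norm: action 1 is forbidden in state 0.
        return not (state == 0 and action == 1)

    def alignment(P, pref, norm_ok):
        # Mean expected preference change over norm-compliant (state, action) pairs.
        gains = [P[a, s] @ pref - pref[s]
                 for s in range(P.shape[1]) for a in range(P.shape[0])
                 if norm_ok(s, a)]
        return float(np.mean(gains))

    print(f"norm-value alignment score: {alignment(P, value_pref, norm_ok):+.3f}")
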
2406.19623
FRA-DiagSys: A Transformer Winding Fault Diagnosis System for Identifying Fault Types and Degrees Using Frequency Response Analysis
The electric power transformer is a critical component in electrical distribution networks, and the diagnosis of faults in transformers is an important research area. Frequency Response Analysis (FRA) methods are widely used for analyzing winding faults in transformers, particularly in Chinese power stations. However, the current approach relies on manual expertise to interpret FRA curves, which is both skill-intensive and imprecise. This study presents a novel approach using a Multilayer perceptron model to directly model and analyze FRA data, simulating various winding fault types and degrees in 12-disc winding and 10-disc winding transformers with different connection configurations, resulting in three distinct datasets. Six different Multilayer perceptron architectures were developed, with optimal models achieving recognition accuracies of over 99.7% for diagnosing fault degrees and more than 90% for fault types. Hence, this paper yields a model architecture that performs well in diagnosing various fault types and their severities across different transformer models when utilizing different FRA connection methods. Additionally, a specialized diagnostic system called FRA-DiagSys with two-stage model utilization was developed, achieving 100% accuracy in diagnosing fault types and degrees for a specific winding-10 power transformer, surpassing other diagnostic methods and strategies.
labels: cs.CE
__index_level_0__: 468,492

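A hedged sketch of the kind of multilayer-perceptron classifier the abstract describes, using scikit-learn on synthetic stand-in curves (the real FRA datasets, label sets, and architectures are not specified here):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n, n_freq = 600, 200                      # hypothetical: 600 curves, 200 frequency bins
    X = rng.normal(size=(n, n_freq))          # stand-in for FRA magnitude curves
    y = rng.integers(0, 4, size=n)            # stand-in fault-type labels (4 classes)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))  # near chance on noise, by design

A two-stage FRA-DiagSys-style system would chain one such classifier for fault type with a second for fault degree.
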
0710.4051
On the capacity achieving covariance matrix for Rician MIMO channels: an asymptotic approach
The capacity-achieving input covariance matrices for coherent block-fading correlated MIMO Rician channels are determined. In this case, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. An approximation of the average mutual information is evaluated in this paper in the asymptotic regime where the number of transmit and receive antennas converge to $+\infty$. New results related to the accuracy of the corresponding large system approximation are provided. An attractive optimization algorithm of this approximation is proposed and we establish that it yields an effective way to compute the capacity achieving covariance matrix for the average mutual information. Finally, numerical simulation results show that, even for a moderate number of transmit and receive antennas, the new approach provides the same results as direct maximization approaches of the average mutual information, while being much more computationally attractive.
labels: cs.IT
__index_level_0__: 814

2403.03273
DINOv2 based Self Supervised Learning For Few Shot Medical Image Segmentation
Deep learning models have emerged as the cornerstone of medical image segmentation, but their efficacy hinges on the availability of extensive manually labeled datasets, and their adaptability to unforeseen categories remains a challenge. Few-shot segmentation (FSS) offers a promising solution by endowing models with the capacity to learn novel classes from limited labeled examples. A leading method for FSS is ALPNet, which compares features between the query image and the few available support segmented images. A key question about using ALPNet is how to design its features. In this work, we delve into the potential of using features from DINOv2, which is a foundational self-supervised learning model in computer vision. Leveraging the strengths of ALPNet and harnessing the feature extraction capabilities of DINOv2, we present a novel approach to few-shot segmentation that not only enhances performance but also paves the way for more robust and adaptable medical image analysis.
labels: cs.LG, cs.CV
__index_level_0__: 435,128

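A simplified sketch of prototype-based few-shot segmentation in the spirit of ALPNet, with random tensors standing in for features; in practice the features would come from a frozen DINOv2 backbone, and ALPNet uses local prototypes rather than the single foreground prototype shown here:

    import torch
    import torch.nn.functional as F

    def prototype_segment(feat_s, mask_s, feat_q, tau=20.0):
        # One foreground prototype from masked support features; cosine match on query.
        fg = (feat_s * mask_s).sum(dim=(1, 2)) / mask_s.sum().clamp(min=1.0)
        sim = F.cosine_similarity(feat_q, fg[:, None, None], dim=0)  # (H, W)
        return torch.sigmoid(tau * (sim - 0.5))                      # soft foreground map

    # Random stand-ins; real features would come from a frozen backbone, e.g.
    # backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    C, H, W = 384, 16, 16
    feat_s, feat_q = torch.randn(C, H, W), torch.randn(C, H, W)
    mask_s = (torch.rand(H, W) > 0.7).float()
    print(prototype_segment(feat_s, mask_s, feat_q).shape)  # torch.Size([16, 16])
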
2402.17866
Towards spatiotemporal integration of bus transit with data-driven approaches
This study proposes an approach for the spatiotemporal integration of bus transit, which enables users to change bus lines while paying a single fare. This could increase bus transit efficiency and, consequently, help make this mode of transportation more attractive. Usually, this strategy is allowed for a few hours in a non-restricted area; thus, certain walking-distance areas behave like "virtual terminals." For that, two data-driven algorithms are proposed in this work. First, a new algorithm detects itineraries based on bus GPS data and bus stop locations. Its results show that valid itineraries were detected for 90% of the database by excluding invalid markings and adding times at missing bus stops through temporal interpolation. Second, this study proposes a bus stop clustering algorithm to define suitable areas for these virtual terminals, where it would be possible to make bus transfers outside the physical terminals. Using real-world origin-destination trips, the bus network with clusters can reduce traveled distances by up to 50%, making twice as many connections on average.
labels: cs.CE, cs.SI
__index_level_0__: 433,174

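A small sketch of the core of the second algorithm described above: clustering bus stops within walking distance into candidate "virtual terminals". DBSCAN with a haversine metric is one plausible choice; the coordinates and the 300 m threshold are made up:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hypothetical bus-stop coordinates (lat, lon in degrees).
    stops = np.array([[-25.4300, -49.2700], [-25.4310, -49.2710],
                      [-25.4400, -49.3000], [-25.4405, -49.3002],
                      [-25.5000, -49.2000]])
    walk_m = 300.0                           # assumed walking-distance threshold
    eps = walk_m / 6_371_000.0               # metres -> radians for the haversine metric
    labels = DBSCAN(eps=eps, min_samples=2,
                    metric="haversine").fit_predict(np.radians(stops))
    print(labels)  # stops sharing a label form one "virtual terminal"; -1 = unclustered
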
2304.05080
Investigating Imbalances Between SAR and Optical Utilization for Multi-Modal Urban Mapping
Accurate urban maps provide essential information to support sustainable urban development. Recent urban mapping methods use multi-modal deep neural networks to fuse Synthetic Aperture Radar (SAR) and optical data. However, multi-modal networks may rely on just one modality due to the greedy nature of learning. In turn, the imbalanced utilization of modalities can negatively affect the generalization ability of a network. In this paper, we investigate the utilization of SAR and optical data for urban mapping. To that end, a dual-branch network architecture using intermediate fusion modules to share information between the uni-modal branches is utilized. A cut-off mechanism in the fusion modules enables the stopping of information flow between the branches, which is used to estimate the network's dependence on SAR and optical data. While our experiments on the SEN12 Global Urban Mapping dataset show that good performance can be achieved with conventional SAR-optical data fusion (F1 score = 0.682 $\pm$ 0.014), we also observed a clear under-utilization of optical data. Therefore, future work is required to investigate whether a more balanced utilization of SAR and optical data can lead to performance improvements.
labels: cs.CV
__index_level_0__: 357,479

2310.14985
LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay
This paper explores the open research problem of understanding the social behaviors of LLM-based agents. Using Avalon as a testbed, we employ system prompts to guide LLM agents in gameplay. While previous studies have touched on gameplay with LLM agents, research on their social behaviors is lacking. We propose a novel framework, tailored for Avalon, that features a multi-agent system facilitating efficient communication and interaction. We evaluate its performance based on game success and analyze LLM agents' social behaviors. Results affirm the framework's effectiveness in creating adaptive agents and suggest LLM-based agents' potential in navigating dynamic social interactions. By examining collaboration and confrontation behaviors, we offer insights into this field's research and applications. Our code is publicly available at https://github.com/3DAgentWorld/LLM-Game-Agent.
labels: cs.CL
__index_level_0__: 402,112

2005.13818
Travel Time Prediction using Tree-Based Ensembles
In this paper, we consider the task of predicting travel times between two arbitrary points in an urban scenario. We view this problem from two temporal perspectives: long-term forecasting with a horizon of several days and short-term forecasting with a horizon of one hour. Both of these perspectives are relevant for planning tasks in the context of urban mobility and transportation services. We utilize tree-based ensemble methods that we train and evaluate on a dataset of taxi trip records from New York City. Through extensive data analysis, we identify relevant temporal and spatial features. We also engineer additional features based on weather and routing data. The latter is obtained via a routing solver operating on the road network. The computational results show that the addition of this routing data can be beneficial to the model performance. Moreover, employing different models for short and long-term prediction is useful as short-term models are better suited to mirror current traffic conditions. In fact, we show that accurate short-term predictions may be obtained with only little training data.
labels: cs.LG
__index_level_0__: 179,111

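A hedged sketch of the modeling setup the abstract outlines, with synthetic stand-ins for the trip, weather, and routing-solver features; the paper's exact feature engineering and models are not reproduced:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000                                              # synthetic stand-in trips
    df = pd.DataFrame({
        "hour": rng.integers(0, 24, n),                   # temporal features
        "weekday": rng.integers(0, 7, n),
        "distance_km": rng.uniform(0.5, 20.0, n),         # spatial feature
        "route_time_s": rng.uniform(60, 3600, n),         # from a routing solver
        "rain_mm": rng.exponential(1.0, n),               # weather feature
    })
    y = df["route_time_s"] * (1.0 + 0.3 * rng.random(n))  # toy travel-time target

    X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("R^2:", round(model.score(X_te, y_te), 3))
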
2410.12809
Cerebral microbleeds: Association with cognitive decline and pathology build-up
Cerebral microbleeds, markers of brain damage from vascular and amyloid pathologies, are linked to cognitive decline in aging, but their role in Alzheimer's disease (AD) onset and progression remains unclear. This study aimed to explore whether the presence and location of lobar microbleeds are associated with amyloid-$\beta$ (A$\beta$)-PET, tau tangle formation (tau-PET), and longitudinal cognitive decline. We analyzed 1,573 ADNI participants with MR imaging data and information on the number and location of microbleeds. Associations between lobar microbleeds and pathology, cerebrospinal fluid (CSF), genetics, and cognition were examined, focusing on regional microbleeds and domain-specific cognitive decline using ordinary least-squares regression while adjusting for covariates. Cognitive decline was assessed with ADAS-Cog11 and its domain-specific sub-scores. Participants underwent neuropsychological testing at least twice, with a minimum two-year interval between assessments. Among the 1,573 participants (692 women, mean age 71.23 years), 373 participants had microbleeds. The presence of microbleeds was linked to cognitive decline, particularly in the semantic, language, and praxis domains for those with temporal lobe microbleeds. Microbleeds in the overall cortex were associated with language decline. Pathologically, temporal lobe microbleeds were associated with increased tau in the overall cortex, while cortical microbleeds were linked to elevated A$\beta$ in the temporal, parietal, and frontal regions. In this mixed population, microbleeds were connected to longitudinal cognitive decline, especially in semantic and language domains, and were associated with higher baseline A$\beta$ and tau pathology. These findings suggest that lobar microbleeds should be included in AD diagnostic and prognostic evaluations.
labels: cs.LG
__index_level_0__: 499,203

2407.16691
Automatic Equalization for Individual Instrument Tracks Using Convolutional Neural Networks
We propose a novel approach for the automatic equalization of individual musical instrument tracks. Our method begins by identifying the instrument present within a source recording in order to choose its corresponding ideal spectrum as a target. Next, the spectral difference between the recording and the target is calculated, and accordingly, an equalizer matching model is used to predict settings for a parametric equalizer. To this end, we build upon a differentiable parametric equalizer matching neural network, demonstrating improvements relative to previously established state-of-the-art. Unlike past approaches, we show how our system naturally allows real-world audio data to be leveraged during the training of our matching model, effectively generating suitably produced training targets in an automated manner mirroring conditions at inference time. Consequently, we illustrate how fine-tuning our matching model on such examples considerably improves parametric equalizer matching performance in real-world scenarios, decreasing mean absolute error by 24% relative to methods relying solely on random parameter sampling techniques as a self-supervised learning strategy. We perform listening tests, and demonstrate that our proposed automatic equalization solution subjectively enhances the tonal characteristics for recordings of common instrument types.
labels: cs.SD, cs.LG
__index_level_0__: 475,689

2311.09768
Utilizing dataset affinity prediction in object detection to assess training data
Data pooling offers various advantages, such as increasing the sample size, improving generalization, reducing sampling bias, and addressing data sparsity and quality, but it is not straightforward and may even be counterproductive. Assessing the effectiveness of pooling datasets in a principled manner is challenging due to the difficulty in estimating the overall information content of individual datasets. Towards this end, we propose incorporating a data source prediction module into standard object detection pipelines. The module runs with minimal overhead during inference time, providing additional information about the data source assigned to individual detections. We show the benefits of the so-called dataset affinity score by automatically selecting samples from a heterogeneous pool of vehicle datasets. The results show that object detectors can be trained on a significantly sparser set of training samples without losing detection accuracy.
labels: cs.CV
__index_level_0__: 408,286

2212.06803
Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting
In consequential decision-making applications, mitigating unwanted biases in machine learning models that yield systematic disadvantage to members of groups delineated by sensitive attributes such as race and gender is one key intervention to strive for equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal jackknife-based approach. The dropping of training points is done in principle, but in practice does not require the model to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
labels: cs.LG, cs.CY
__index_level_0__: 336,216

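A rough sketch of the influence idea for logistic regression: estimate, to first order, how dropping each training point would move a demographic-parity gap, then flag points whose removal shrinks it. Signs and scaling are simplified relative to the paper's infinitesimal-jackknife treatment, and all data here is synthetic:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 400, 3
    X = rng.normal(size=(n, d))
    grp = rng.integers(0, 2, size=n)                        # synthetic sensitive attribute
    y = ((X[:, 0] + 0.8 * grp + rng.normal(scale=0.5, size=n)) > 0).astype(float)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def fit_logreg(X, y, lam=1e-2, iters=2000, lr=0.5):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            w -= lr * (X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w)
        return w

    w = fit_logreg(X, y)
    p = sigmoid(X @ w)
    H = (X.T * (p * (1 - p))) @ X / n + 1e-2 * np.eye(d)    # Hessian at the fit
    gap = p[grp == 1].mean() - p[grp == 0].mean()           # demographic-parity gap
    dgap = (X[grp == 1].T @ (p * (1 - p))[grp == 1] / (grp == 1).sum()
            - X[grp == 0].T @ (p * (1 - p))[grp == 0] / (grp == 0).sum())
    point_grads = X * (p - y)[:, None]                      # per-point loss gradients
    delta = point_grads @ np.linalg.solve(H, dgap) / n      # first-order gap change if dropped
    drop = np.argsort(np.sign(gap) * delta)[:20]            # drops that most shrink |gap|
    print("gap:", round(gap, 3), "first drop candidates:", drop[:5])

As in the paper's setting, nothing is refit: the Hessian solve reuses the original fit to predict the effect of removals.
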
2112.06295
Towards More Efficient Insertion Transformer with Fractional Positional Encoding
Auto-regressive neural sequence models have been shown to be effective across text generation tasks. However, their left-to-right decoding order prevents generation from being parallelized. Insertion Transformer (Stern et al., 2019) is an attractive alternative that allows outputting multiple tokens in a single generation step. Nevertheless, due to the incompatibility between absolute positional encoding and insertion-based generation schemes, it needs to refresh the encoding of every token in the generated partial hypothesis at each step, which could be costly. We design a novel reusable positional encoding scheme for Insertion Transformers called Fractional Positional Encoding (FPE), which allows reusing representations calculated in previous steps. Empirical studies on various text generation tasks demonstrate the effectiveness of FPE, which leads to floating-point operation reduction and latency improvements on batched decoding.
labels: cs.CL
__index_level_0__: 271,121

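A toy illustration of the reuse idea behind fractional positions: a token inserted between two others receives the midpoint of its neighbors' positions, so existing positions (and their encodings) never need refreshing. The paper's actual FPE construction differs in detail:

    import math

    tokens, positions = ["<s>", "</s>"], [0.0, 1.0]

    def insert_token(idx, tok):
        # New token gets the midpoint position; existing positions are reused,
        # so previously computed encodings need no refresh.
        positions.insert(idx + 1, (positions[idx] + positions[idx + 1]) / 2.0)
        tokens.insert(idx + 1, tok)

    def encode(pos, dim=8):
        # Sinusoids evaluated at a fractional position.
        return [math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
                else math.cos(pos / 10000 ** ((i - 1) / dim)) for i in range(dim)]

    insert_token(0, "hello")   # positions: [0.0, 0.5, 1.0]
    insert_token(1, "world")   # positions: [0.0, 0.5, 0.75, 1.0]
    print(list(zip(tokens, positions)), len(encode(positions[1])))
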
1702.06362
Negative-Unlabeled Tensor Factorization for Location Category Inference from Highly Inaccurate Mobility Data
Identifying significant location categories visited by mobile users is the key to a variety of applications. This is an extremely challenging task due to the possible deviation between the estimated location coordinate and the actual location, which could be on the order of kilometers. To estimate the actual location category more precisely, we propose a novel tensor factorization framework, through several key observations including the intrinsic correlations between users, to infer the most likely location categories within the location uncertainty circle. In addition, the proposed algorithm can also predict where users are even in the absence of location information. In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm by effectively exploring the sparse and low-rank structure of the tensor. Our empirical studies show that the proposed algorithm is both efficient and effective: it can solve problems with millions of users and billions of location updates, and also provides superior prediction accuracies on real-world location updates and check-in data sets.
labels: cs.LG
__index_level_0__: 68,590

2011.12853
Error analysis of a demodulation procedure for multicarrier signals with slowly-varying carriers
We propose a procedure to demodulate analog signals encoded by a multicarrier modulator, with slowly-varying carrier shapes. We prove that the asymptotic demodulation error can be made arbitrarily small. The intended application is the "sensorless" control of AC electric motors at or near standstill, through the decoding of the PWM-induced current ripple.
labels: cs.SY
__index_level_0__: 208,294

2206.04928
GAMR: A Guided Attention Model for (visual) Reasoning
Humans continue to outperform modern AI systems in their ability to flexibly parse and understand complex visual scenes. Here, we present a novel module for visual reasoning, the Guided Attention Model for (visual) Reasoning (GAMR), which instantiates an active vision theory -- positing that the brain solves complex visual reasoning problems dynamically -- via sequences of attention shifts to select and route task-relevant visual information into memory. Experiments on an array of visual reasoning tasks and datasets demonstrate GAMR's ability to learn visual routines in a robust and sample-efficient manner. In addition, GAMR is shown to be capable of zero-shot generalization on completely novel reasoning tasks. Overall, our work provides computational support for cognitive theories that postulate the need for a critical interplay between attention and memory to dynamically maintain and manipulate task-relevant visual information to solve complex visual reasoning tasks.
labels: cs.AI, cs.LG, cs.NE, Other
__index_level_0__: 301,829

2408.09494
Source-Free Test-Time Adaptation For Online Surface-Defect Detection
Surface defect detection is significant in industrial production. However, detecting defects with varying textures and anomaly classes during the test time is challenging. This arises due to the differences in data distributions between source and target domains. Collecting and annotating new data from the target domain and retraining the model is time-consuming and costly. In this paper, we propose a novel test-time adaptation surface-defect detection approach that adapts pre-trained models to new domains and classes during inference. Our approach involves two core ideas. Firstly, we introduce a supervisor to filter samples and select only those with high confidence to update the model. This ensures that the model is not excessively biased by incorrect data. Secondly, we propose the augmented mean prediction to generate robust pseudo labels and a dynamically-balancing loss to facilitate the model in effectively integrating classification and segmentation results to improve surface-defect detection accuracy. Our approach is real-time and does not require additional offline retraining. Experiments demonstrate it outperforms state-of-the-art techniques.
labels: cs.CV
__index_level_0__: 481,465

2411.02477
Building a Synthetic Vascular Model: Evaluation in an Intracranial Aneurysms Detection Scenario
We hereby present a fully synthetic model able to mimic the various constituents of the cerebral vascular tree, including the cerebral arteries, bifurcations, and intracranial aneurysms. This model is intended to provide a substantial dataset of brain arteries that could be used by a 3D convolutional neural network to efficiently detect intracranial aneurysms. Cerebral aneurysms most often occur on a particular structure of the vascular tree named the Circle of Willis. Various studies have been conducted to detect and monitor aneurysms, and those based on Deep Learning achieve the best performance. Specifically, in this work, we propose a fully synthetic 3D model able to mimic the brain vasculature as acquired by Magnetic Resonance Angiography using the Time-Of-Flight principle. Among the various MRI modalities, the latter allows for a good rendering of the blood vessels and is non-invasive. Our model has been designed to simultaneously mimic the arteries' geometry, the aneurysm shape, and the background noise. The vascular tree geometry is modeled through interpolation with 3D spline functions, and the statistical properties of the background noise are collected from angiography acquisitions and reproduced within the model. In this work, we thoroughly describe the synthetic vasculature model, build a neural network designed for aneurysm segmentation and detection, and finally carry out an in-depth evaluation of the performance gain obtained through data augmentation with the synthetic model.
labels: cs.AI, cs.CV
__index_level_0__: 505,527

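A small sketch of the centerline-modeling step mentioned in the abstract, fitting a 3D spline through hypothetical artery control points with SciPy:

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Hypothetical artery centerline control points (x, y, z).
    pts = np.array([[0, 0, 0], [1, 2, 1], [3, 3, 2], [5, 2, 4], [6, 0, 5]], float).T
    tck, _ = splprep(pts, s=0)                   # interpolating 3D spline
    centerline = np.array(splev(np.linspace(0, 1, 100), tck))
    print(centerline.shape)                      # (3, 100): a smooth vessel centerline
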
2006.12038
Bandit algorithms: Letting go of logarithmic regret for statistical robustness
We study regret minimization in a stochastic multi-armed bandit setting and establish a fundamental trade-off between the regret suffered under an algorithm, and its statistical robustness. Considering broad classes of underlying arms' distributions, we show that bandit learning algorithms with logarithmic regret are always inconsistent and that consistent learning algorithms always suffer a super-logarithmic regret. This result highlights the inevitable statistical fragility of all `logarithmic regret' bandit algorithms available in the literature---for instance, if a UCB algorithm designed for $\sigma$-subGaussian distributions is used in a subGaussian setting with a mismatched variance parameter, the learning performance could be inconsistent. Next, we show a positive result: statistically robust and consistent learning performance is attainable if we allow the regret to be slightly worse than logarithmic. Specifically, we propose three classes of distribution oblivious algorithms that achieve an asymptotic regret that is arbitrarily close to logarithmic.
labels: cs.LG
__index_level_0__: 183,455

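An illustrative sketch, not the paper's algorithms: a UCB-style index whose exploration bonus grows slightly faster than log t, trading a little regret for robustness to distributional assumptions:

    import math, random

    def superlog_ucb(means, horizon=10_000, seed=0):
        # UCB-style index with a slightly super-logarithmic exploration bonus.
        rng, k = random.Random(seed), len(means)
        counts, sums, regret, best = [0] * k, [0.0] * k, 0.0, max(means)
        for t in range(1, horizon + 1):
            if t <= k:
                a = t - 1
            else:
                a = max(range(k), key=lambda i: sums[i] / counts[i]
                        + math.sqrt(math.log(t) * math.log(math.log(t) + 1.0) / counts[i]))
            r = rng.gauss(means[a], 1.0)
            counts[a] += 1; sums[a] += r
            regret += best - means[a]
        return regret

    print(superlog_ucb([0.2, 0.5, 0.45]))
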
1807.08604
A Frequency-Domain Characterization of Optimal Error Covariance for the Kalman-Bucy Filter
In this paper, we discover that the trace of the division of the optimal output estimation error covariance over the noise covariance attained by the Kalman-Bucy filter can be explicitly expressed in terms of the plant dynamics and noise statistics in a frequency-domain integral characterization. Towards this end, we examine the algebraic Riccati equation associated with Kalman-Bucy filtering using analytic function theory and relate it to the Bode integral. Our approach features an alternative, frequency-domain framework for analyzing algebraic Riccati equations and reduces to various existing related results.
labels: cs.RO, cs.SY
__index_level_0__: 103,578

1811.03407
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based algorithms are becoming increasingly recognized by the machine learning community. As a result, interest in probabilistic programming frameworks has much increased over the past few years. This paper explores a specific probabilistic programming paradigm, namely message passing in Forney-style factor graphs (FFGs), in the context of automated design of efficient Bayesian signal processing algorithms. To this end, we developed "ForneyLab" (https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message passing-based inference in FFGs. We show by example how ForneyLab enables automatic derivation of Bayesian signal processing algorithms, including algorithms for parameter estimation and model comparison. Crucially, due to the modular makeup of the FFG framework, both the model specification and inference methods are readily extensible in ForneyLab. In order to test this framework, we compared variational message passing as implemented by ForneyLab with automatic differentiation variational inference (ADVI) and Monte Carlo methods as implemented by state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility and stability issues, ForneyLab appears to enjoy an edge relative to its competitors for automated inference in state-space models.
labels: cs.LG
__index_level_0__: 112,833

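ForneyLab itself is a Julia toolbox; for consistency with the other sketches here, a toy sum-product message-passing step on a two-variable factor graph, in Python:

    import numpy as np

    # Chain x1 -- f -- x2: prior p(x1) and a conditional factor f(x1, x2).
    p_x1 = np.array([0.6, 0.4])
    f = np.array([[0.9, 0.1],
                  [0.2, 0.8]])              # f[i, j] ~ p(x2 = j | x1 = i)

    msg_x1_to_f = p_x1                      # variable-to-factor message
    msg_f_to_x2 = msg_x1_to_f @ f           # factor-to-variable: marginalize x1
    print(msg_f_to_x2 / msg_f_to_x2.sum())  # marginal of x2: [0.62 0.38]
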
2411.01073
AttackQA: Development and Adoption of a Dataset for Assisting Cybersecurity Operations using Fine-tuned and Open-Source LLMs
Retrieval-augmented generation (RAG) on specialized domain datasets has shown improved performance when large language models (LLMs) are fine-tuned for generating responses to user queries. In this study, we develop a cybersecurity question-answering (Q\&A) dataset, called AttackQA, and employ it to build a RAG-based Q\&A system designed for analysts in security operations centers. The dataset comprises 25,335 Q\&A pairs, accompanied by rationales to facilitate fine-tuning and evaluation. 80\% of the dataset was generated with the help of a lightweight open-source LLM (LLama 3 8B), which produced over 1100 tokens per second with full 16-bit precision on SambaNova Systems' SN40L specialized hardware. To ensure dataset quality, we fine-tuned LLama 3 70B to detect and reject low-quality Q\&A pairs. In using the dataset for RAG, we demonstrate that fine-tuning open-source embeddings and LLMs can yield superior accuracy compared to OpenAI's state-of-the-art proprietary embedding and LLM (GPT-4o). Furthermore, we use Llama 3.1 405B as a judge to evaluate answer correctness, enabling the creation of a fully open-source, high-speed RAG and evaluation pipeline with a benchmark for model accuracy.
labels: cs.AI, cs.LG, cs.CR
__index_level_0__: 504,898

2402.15247
A Bargaining-based Approach for Feature Trading in Vertical Federated Learning
Vertical Federated Learning (VFL) has emerged as a popular machine learning paradigm, enabling model training across the data and the task parties with different features about the same user set while preserving data privacy. In production environments, VFL usually involves one task party and one data party. Fair and economically efficient feature trading is crucial to the commercialization of VFL, where the task party is considered the data consumer who buys the data party's features. However, current VFL feature trading practices often price the data party's data as a whole and assume transactions occur prior to performing VFL. Neglecting the performance gains resulting from traded features may lead to underpayment and overpayment issues. In this study, we propose a bargaining-based feature trading approach in VFL to encourage economically efficient transactions. Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties. We analyze the proposed bargaining model under perfect and imperfect performance information settings, proving the existence of an equilibrium that optimizes the parties' objectives. Moreover, we develop performance gain estimation-based bargaining strategies for imperfect performance information scenarios and discuss potential security issues and solutions. Experiments on three real-world datasets demonstrate the effectiveness of the proposed bargaining model.
labels: cs.AI, cs.LG, cs.MA
__index_level_0__: 432,058

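A toy numeric illustration of bargaining over a feature price, maximizing a Nash product of the two parties' surpluses; the gain and cost numbers are invented, and the paper's model is considerably richer:

    import numpy as np

    prices = np.linspace(0.0, 1.0, 1001)       # candidate feature prices
    gain, cost = 0.8, 0.1                      # assumed performance gain and feature cost
    task_surplus = gain - prices               # data consumer (task party)
    data_surplus = prices - cost               # data party
    nash = np.where((task_surplus > 0) & (data_surplus > 0),
                    task_surplus * data_surplus, -np.inf)
    print("bargained price:", prices[nash.argmax()])  # midpoint of [cost, gain]: 0.45
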
2005.01047
Fusion of visible and infrared images via complex function
We propose an algorithm for the fusion of partial images collected from visual and infrared cameras such that the visual and infrared images are the real and imaginary parts of a complex function. The proposed image fusion algorithm based on the complex function is a generalization of the algorithm of conventional image addition, in the same way as the addition of complex numbers is the generalization of the addition of real numbers. The proposed algorithm is simple to use and undemanding in computing power. The complex form of the fused image opens the possibility of forming the fused image either as an amplitude image or as a phase image, which in turn can take several forms. We show theoretically that the local contrast of the fused phase images is higher than that of the partial images, as well as in comparison with images obtained by the algorithm of simple or weighted addition. Experimental image quality assessment of the fused phase images, performed using histograms and entropy, shows the higher quality of the phase images in comparison with the input partial images as well as those obtained with different fusion methods reported in the literature. Keywords: digital image processing, image fusion, infrared imaging, image quality assessment
labels: cs.CV
__index_level_0__: 175,477

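The core construction in the abstract is directly codeable; a minimal NumPy sketch with random arrays standing in for registered visible/infrared images:

    import numpy as np

    rng = np.random.default_rng(0)
    visible = rng.random((64, 64))     # stand-ins for registered partial images
    infrared = rng.random((64, 64))

    fused = visible + 1j * infrared    # real part = visible, imaginary part = infrared
    amplitude_image = np.abs(fused)
    phase_image = np.angle(fused)      # the abstract argues phase images have higher local contrast
    print(amplitude_image.shape, phase_image.min(), phase_image.max())
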
2210.16976
Representation Learning for General-sum Low-rank Markov Games
We study multi-agent general-sum Markov games with nonlinear function approximation. We focus on low-rank Markov games whose transition matrix admits a hidden low-rank structure on top of an unknown non-linear representation. The goal is to design an algorithm that (1) finds an $\varepsilon$-equilibrium policy sample efficiently without prior knowledge of the environment or the representation, and (2) permits a deep-learning friendly implementation. We leverage representation learning and present a model-based and a model-free approach to construct an effective representation from the collected data. For both approaches, the algorithm achieves a sample complexity of poly$(H,d,A,1/\varepsilon)$, where $H$ is the game horizon, $d$ is the dimension of the feature vector, $A$ is the size of the joint action space and $\varepsilon$ is the optimality gap. When the number of players is large, the above sample complexity can scale exponentially with the number of players in the worst case. To address this challenge, we consider Markov games with a factorized transition structure and present an algorithm that escapes such exponential scaling. To our best knowledge, this is the first sample-efficient algorithm for multi-agent general-sum Markov games that incorporates (non-linear) function approximation. We accompany our theoretical result with a neural network-based implementation of our algorithm and evaluate it against the widely used deep RL baseline, DQN with fictitious play.
labels: cs.LG
__index_level_0__: 327,521

2206.01918
Automated Audio Captioning with Epochal Difficult Captions for Curriculum Learning
In this paper, we propose an algorithm, Epochal Difficult Captions, to supplement the training of any model for the Automated Audio Captioning task. Epochal Difficult Captions is an elegant evolution of the keyword estimation task that previous works have used to train the encoder of the AAC model. Epochal Difficult Captions modifies the target captions based on a curriculum and a difficulty level determined as a function of the current epoch. Epochal Difficult Captions can be used with any model architecture and is a lightweight function that does not increase training time. We test our results on three systems and show that using Epochal Difficult Captions consistently improves performance.
labels: cs.CL
__index_level_0__: 300,670

1810.09912
Efficient Bayesian Experimental Design for Implicit Models
Bayesian experimental design involves the optimal allocation of resources in an experiment, with the aim of optimising cost and performance. For implicit models, where the likelihood is intractable but sampling from the model is possible, this task is particularly difficult and therefore largely unexplored. This is mainly due to technical difficulties associated with approximating posterior distributions and utility functions. We devise a novel experimental design framework for implicit models that improves upon previous work in two ways. First, we use the mutual information between parameters and data as the utility function, which has previously not been feasible. We achieve this by utilising Likelihood-Free Inference by Ratio Estimation (LFIRE) to approximate posterior distributions, instead of the traditional approximate Bayesian computation or synthetic likelihood methods. Secondly, we use Bayesian optimisation in order to solve the optimal design problem, as opposed to the typically used grid search or sampling-based methods. We find that this increases efficiency and allows us to consider higher design dimensions.
labels: cs.LG
__index_level_0__: 111,151

2403.11958
Language Evolution with Deep Learning
Computational modeling plays an essential role in the study of language emergence. It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language within a simulated controlled environment. Several methods have been used to investigate the origin of our language, including agent-based systems, Bayesian agents, genetic algorithms, and rule-based systems. This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models. The chapter introduces the basic concepts of deep and reinforcement learning methods and summarizes their helpfulness for simulating language emergence. It also discusses the key findings, limitations, and recent attempts to build realistic simulations. This chapter targets linguists and cognitive scientists seeking an introduction to deep learning as a tool to investigate language evolution.
labels: cs.CL, cs.MA
__index_level_0__: 438,949

2408.01323
FANNO: Augmenting High-Quality Instruction Data with Open-Sourced LLMs Only
Instruction fine-tuning stands as a crucial advancement in leveraging large language models (LLMs) for enhanced task performance. However, the annotation of instruction datasets has traditionally been expensive and laborious, often relying on manual annotations or costly API calls to proprietary LLMs. To address these challenges, we introduce FANNO, a fully autonomous, open-sourced framework that revolutionizes the annotation process without the need for pre-existing annotated data. Utilizing a Mistral-7b-instruct model, FANNO efficiently produces diverse and high-quality datasets through a structured process involving document pre-screening, instruction generation, and response generation. Experiments on the Open LLM Leaderboard and AlpacaEval benchmark show that FANNO can generate high-quality data with diversity and complexity for free, comparable to human-annotated or cleaned datasets like Alpaca-GPT4-Cleaned.
labels: cs.CL
__index_level_0__: 478,187

2405.20748
OpenTensor: Reproducing Faster Matrix Multiplication Discovering Algorithms
OpenTensor is a reproduction of AlphaTensor, which used Deep Reinforcement Learning (DRL) to discover a new algorithm that outperforms the state-of-the-art methods for matrix multiplication. While AlphaTensor provides a promising framework for solving scientific problems, it is hard to reproduce due to its many undocumented tricks and the lack of source code. In this paper, we clean up the algorithm pipeline, clarify the technical details, and make some improvements to the training process. Computational results show that OpenTensor can successfully find efficient matrix multiplication algorithms.
labels: cs.AI, cs.LG, Other
__index_level_0__: 459,501

1805.00367
A Multi-State Diagnosis and Prognosis Framework with Feature Learning for Tool Condition Monitoring
In this paper, a multi-state diagnosis and prognosis (MDP) framework is proposed for tool condition monitoring via a deep belief network based multi-state approach (DBNMS). For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation. An appropriate prognostic degradation model is then applied for tool wear estimation based on the different tool states. The proposed framework has the advantage of automatic feature representation learning and shows better performance in accuracy and robustness. The effectiveness of the proposed DBNMS is validated using a real-world dataset obtained from the gun drilling process. This dataset contains a large amount of measured signals involving different tool geometries under various operating conditions. The DBNMS is examined for both the tool state estimation and tool wear estimation tasks. In the experimental studies, the prediction results are evaluated and compared with popular machine learning approaches, which show the superior performance of the proposed DBNMS approach.
labels: cs.LG
__index_level_0__: 96,429

0809.2754
Algorithmic information theory
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
labels: cs.LG, cs.IT
__index_level_0__: 2,355

2410.03772
Precision Knowledge Editing: Enhancing Safety in Large Language Models
Large language models (LLMs) have demonstrated remarkable capabilities, but they also pose risks related to the generation of toxic or harmful content. This work introduces Precision Knowledge Editing (PKE), an advanced technique that builds upon existing knowledge editing methods to more effectively identify and modify toxic parameter regions within LLMs. By leveraging neuron weight tracking and activation pathway tracing, PKE achieves finer granularity in toxic content management compared to previous methods like Detoxifying Instance Neuron Modification (DINM). Our experiments demonstrate that PKE significantly reduces the attack success rate (ASR) across various models, including Llama2-7b and Llama-3-8b-instruct, while maintaining overall model performance. Additionally, we also compared the performance of some closed-source models (gpt-4-0613 and Claude 3 Sonnet) in our experiments, and found that models adjusted using our method far outperformed the closed-source models in terms of safety. This research contributes to the ongoing efforts to make LLMs safer and more reliable for real-world applications.
labels: cs.AI, cs.CL
__index_level_0__: 494,957

2109.00124
Learning Coated Adversarial Camouflages for Object Detectors
An adversary can fool deep neural network object detectors by generating adversarial noises. Most of the existing works focus on learning local visible noises in an adversarial "patch" fashion. However, the 2D patch attached to a 3D object tends to suffer from an inevitable reduction in attack performance as the viewpoint changes. To remedy this issue, this work proposes the Coated Adversarial Camouflage (CAC) to attack the detectors in arbitrary viewpoints. Unlike the patch trained in the 2D space, our camouflage generated by a conceptually different training framework consists of 3D rendering and dense proposals attack. Specifically, we make the camouflage perform 3D spatial transformations according to the pose changes of the object. Based on the multi-view rendering results, the top-n proposals of the region proposal network are fixed, and all the classifications in the fixed dense proposals are attacked simultaneously to output errors. In addition, we build a virtual 3D scene to fairly and reproducibly evaluate different attacks. Extensive experiments demonstrate the superiority of CAC over the existing attacks, and it shows impressive performance both in the virtual scene and the real world. This poses a potential threat to the security-critical computer vision systems.
labels: cs.CV
__index_level_0__: 253,007

2104.06403
Lite-HRNet: A Lightweight High-Resolution Network
We present an efficient high-resolution network, Lite-HRNet, for human pose estimation. We start by simply applying the efficient shuffle block in ShuffleNet to HRNet (high-resolution network), yielding stronger performance than popular lightweight networks, such as MobileNet, ShuffleNet, and Small HRNet. We find that the heavily-used pointwise (1x1) convolutions in shuffle blocks become the computational bottleneck. We introduce a lightweight unit, conditional channel weighting, to replace costly pointwise (1x1) convolutions in shuffle blocks. The complexity of channel weighting is linear w.r.t. the number of channels and lower than the quadratic time complexity of pointwise convolutions. Our solution learns the weights from all the channels and over the multiple resolutions that are readily available in the parallel branches of HRNet. It uses the weights as a bridge to exchange information across channels and resolutions, compensating for the role played by the pointwise (1x1) convolution. Lite-HRNet demonstrates superior results on human pose estimation over popular lightweight networks. Moreover, Lite-HRNet can be easily applied to the semantic segmentation task in the same lightweight manner. The code and models are publicly available at https://github.com/HRNet/Lite-HRNet.
labels: cs.CV
__index_level_0__: 230,066

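A simplified stand-in for the conditional-channel-weighting unit: per-channel weights predicted from global context replace a costly pointwise convolution. This is close in spirit to squeeze-and-excitation; the real Lite-HRNet unit also aggregates across the parallel resolutions:

    import torch
    import torch.nn as nn

    class ChannelWeighting(nn.Module):
        """Per-channel weights from global context instead of a 1x1 convolution."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
        def forward(self, x):
            return x * self.fc(x)  # O(C) weighting rather than O(C^2) pointwise conv

    x = torch.randn(1, 32, 16, 16)
    print(ChannelWeighting(32)(x).shape)  # torch.Size([1, 32, 16, 16])
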
2306.06203
FLSL: Feature-level Self-supervised Learning
Current self-supervised learning (SSL) methods (e.g., SimCLR, DINO, VICReg, MOCOv3) target primarily instance-level representations and do not generalize well to dense prediction tasks, such as object detection and segmentation. Towards aligning SSL with dense predictions, this paper demonstrates for the first time the underlying mean-shift clustering process of Vision Transformers (ViT), which aligns well with natural image semantics (e.g., a world of objects and stuff). By employing a transformer for joint embedding and clustering, we propose a two-level feature clustering SSL method, coined Feature-Level Self-supervised Learning (FLSL). We present the formal definition of the FLSL problem and construct the objectives from the mean-shift and k-means perspectives. We show that FLSL promotes remarkable semantic cluster representations and learns an embedding scheme amenable to intra-view and inter-view feature clustering. Experiments show that FLSL yields significant improvements in dense prediction tasks, achieving 44.9 (+2.8)% AP and 46.5% AP in object detection, as well as 40.8 (+2.3)% AP and 42.1% AP in instance segmentation on MS-COCO, using Mask R-CNN with ViT-S/16 and ViT-S/8 as backbones, respectively. FLSL consistently outperforms existing SSL methods across additional benchmarks, including UAV17 object detection on UAVDT and video instance segmentation on DAVIS 2017. We conclude by presenting visualization and various ablation studies to better understand the success of FLSL. The source code is available at https://github.com/ISL-CV/FLSL.
labels: cs.LG, cs.CV
__index_level_0__: 372,507

2208.06031
Handling big tabular data of ICT supply chains: a multi-task, machine-interpretable approach
Due to the characteristics of Information and Communications Technology (ICT) products, the critical information of ICT devices is often summarized in big tabular data shared across supply chains. Therefore, it is critical to automatically interpret tabular structures with the surging amount of electronic assets. To transform the tabular data in electronic documents into a machine-interpretable format and provide layout and semantic information for information extraction and interpretation, we define a Table Structure Recognition (TSR) task and a Table Cell Type Classification (CTC) task. We use a graph to represent complex table structures for the TSR task. Meanwhile, table cells are categorized into three groups based on their functional roles for the CTC task, namely Header, Attribute, and Data. Subsequently, we propose a multi-task model to solve the defined two tasks simultaneously by using the text modal and image modal features. Our experimental results show that our proposed method can outperform state-of-the-art methods on ICDAR2013 and UNLV datasets.
labels: cs.CV
__index_level_0__: 312,570

2112.06108
3D LiDAR Aided GNSS NLOS Mitigation in Urban Canyons
In this paper, we propose a 3D LiDAR aided global navigation satellite system (GNSS) non-line-of-sight (NLOS) mitigation method caused by both static buildings and dynamic objects. A sliding window map describing the surrounding of the ego-vehicle is first generated, based on real-time 3D point clouds from a 3D LiDAR sensor. Then, NLOS receptions are detected based on the sliding window map using a proposed fast searching method which is free of the initial guess of the position of the GNSS receiver. Instead of directly excluding the detected NLOS satellites from further positioning estimation, this paper rectifies the pseudorange measurement model by (1) correcting the pseudorange measurements if the reflecting point of NLOS signals is detected inside the sliding window map, and (2) remodeling the uncertainty of the NLOS pseudorange measurement using a novel weighting scheme. We evaluated the performance of the proposed method in several typical urban canyons in Hong Kong using an automobile-level GNSS receiver. Moreover, we also evaluate the potential of the proposed NLOS mitigation method in GNSS and inertial navigation systems integration via factor graph optimization.
labels: cs.RO
__index_level_0__: 271,056

1402.0501
Large-deviation properties of resilience of transportation networks
Distributions of the resilience of transport networks are studied numerically, in particular the large-deviation tails. Thus, not only typical quantities like average or variance but the distributions over the (almost) full support can be studied. For a proof of principle, a simple transport model based on the edge-betweenness and three abstract yet widely studied random network ensembles are considered here: Erdős-Rényi random networks with finite connectivity, small-world networks and spatial networks embedded in a two-dimensional plane. Using specific numerical large-deviation techniques, probability densities as small as 10^(-80) are obtained here. This allows one to study typical but also the most and the least resilient networks. The resulting distributions fulfill the mathematical large-deviation principle, i.e., can be well described by rate functions in the thermodynamic limit. The analysis of the limiting rate function reveals that the resilience follows an exponential distribution almost everywhere. An analysis of the structure of the network shows that the most-resilient networks can be obtained, as a rule of thumb, by minimizing the diameter of a network. Also, trivially, by including more links a network can typically be made more resilient. On the other hand, the least-resilient networks are very rare and characterized by one (or few) small core(s) to which all other nodes are connected. In total, the spatial network ensemble turns out to be most suitable for obtaining and studying resilience of real mostly finite-dimensional networks. Studying this ensemble in combination with the presented large-deviation approach for more realistic, in particular dynamic transport networks appears to be very promising.
labels: cs.SI
__index_level_0__: 30,568

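A small sketch of the ingredients named in the abstract (edge betweenness as transport load, diameter as a resilience proxy) on a finite-connectivity Erdős-Rényi graph; the paper's resilience measure and large-deviation sampling are not reproduced here:

    import networkx as nx

    G = nx.erdos_renyi_graph(n=50, p=0.08, seed=1)     # finite-connectivity ER ensemble
    eb = nx.edge_betweenness_centrality(G)             # proxy for transport load per edge
    print("max edge betweenness:", round(max(eb.values()), 4))
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("diameter of largest component:", nx.diameter(giant))
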
2303.05490
On the Expressiveness and Generalization of Hypergraph Neural Networks
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depths and edge arities. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by the empirical results.
labels: cs.AI, cs.LG
__index_level_0__: 350,484

2402.10433
Fusing Neural and Physical: Augment Protein Conformation Sampling with Tractable Simulations
Protein dynamics are common and important for protein biological functions and properties, and their study usually involves time-consuming molecular dynamics (MD) simulations in silico. Recently, generative models have been leveraged as surrogate samplers to obtain conformation ensembles orders of magnitude faster and without requiring any simulation data (a "zero-shot" inference). However, being agnostic of the underlying energy landscape, the accuracy of such generative models may still be limited. In this work, we explore the few-shot setting of such a pre-trained generative sampler, which incorporates MD simulations in a tractable manner. Specifically, given a target protein of interest, we first acquire some seeding conformations from the pre-trained sampler, followed by a number of physical simulations run in parallel starting from these seeding samples. We then fine-tune the generative model on these simulation trajectories so that it becomes a target-specific sampler. Experimental results demonstrate the superior performance of such a few-shot conformation sampler at a tractable computational cost.
labels: cs.LG
__index_level_0__: 429,960

2301.05264
Security-Aware Approximate Spiking Neural Networks
Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known for their susceptibility to adversarial attacks. Therefore, researchers in the recent past have extensively studied the robustness and defense of DNNs and SNNs under adversarial attacks. Compared to accurate SNNs (AccSNN), approximate SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low power applications. Unfortunately, the robustness of AxSNNs under adversarial attacks is yet unexplored. In this paper, we first extensively analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks. Then, we propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs. We evaluated the effectiveness of these two defense methods using both static and neuromorphic datasets. Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and AQF significantly improve the robustness of AxSNNs. For instance, a PGD attack on AxSNN results in a 72\% accuracy loss compared to AccSNN without any attack, whereas the same attack on the precision-scaled AxSNN leads to only a 17\% accuracy loss in the static MNIST dataset (4X robustness improvement). Similarly, a Sparse Attack on AxSNN leads to a 77\% accuracy loss when compared to AccSNN without any attack, whereas the same attack on an AxSNN with AQF leads to only a 2\% accuracy loss in the neuromorphic DVS128 Gesture dataset (38X robustness improvement).
labels: cs.AI, cs.CR, Other
__index_level_0__: 340,300

2007.07486
Content-based Recommendations for Radio Stations with Deep Learned Audio Fingerprints
The world of linear radio broadcasting is characterized by a wide variety of stations and played content. That is why finding stations playing the preferred content is a tough task for a potential listener, especially due to the overwhelming number of offered choices. Here, recommender systems usually step in, but existing content-based approaches rely on metadata and are thus constrained by the available data quality. Other approaches leverage user behavior data and thus do not exploit any domain-specific knowledge; they are furthermore disadvantageous regarding privacy concerns. Therefore, we propose a new pipeline for the generation of audio-based radio station fingerprints relying on audio stream crawling and a Deep Autoencoder. We show that the proposed fingerprints are especially useful for characterizing radio stations by their audio content and thus are an excellent representation for meaningful and reliable radio station recommendations. Furthermore, the proposed modules are part of the HRADIO Communication Platform, which brings hybrid radio features to radio stations. It is released under a flexible open-source license and enables especially small- and medium-sized businesses to provide customized and high-quality radio services to potential listeners.
true
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
187,350
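A minimal sketch of the fingerprinting idea from the abstract above: encode crawled audio chunks with a deep autoencoder, average the latent codes into a station fingerprint, and recommend by cosine similarity. The layer sizes and feature representation are illustrative assumptions, not the HRADIO implementation.

```python
# Minimal sketch, assuming pre-extracted audio features (e.g., log-mel vectors)
# per crawled chunk; all layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FingerprintAE(nn.Module):
    def __init__(self, n_features=512, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def station_fingerprint(model, chunks):
    # Average the latent codes of many crawled audio chunks into one fingerprint.
    with torch.no_grad():
        _, z = model(chunks)
    return z.mean(dim=0)

def recommend(query_fp, station_fps, k=5):
    # Rank stations by cosine similarity of their audio fingerprints.
    sims = F.cosine_similarity(query_fp.unsqueeze(0), station_fps, dim=1)
    return sims.topk(k).indices
```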
2406.11608
Learning Hierarchical Semantic Classification by Grounding on Consistent Image Segmentations
Hierarchical semantic classification requires predicting a taxonomy tree instead of a single flat level of the tree, where both accuracy at individual levels and consistency across levels matter. We can train classifiers for individual levels, which gives accuracy but not consistency, or we can train only the finest-level classification and infer higher levels, which gives consistency but not accuracy. Our key insight is that hierarchical recognition should not be treated as multi-task classification, as each level is essentially a different task and the tasks would have to compromise with each other; instead, it should be grounded on image segmentations that are consistent across semantic granularities. Consistency can in fact improve accuracy. We build upon recent work on learning hierarchical segmentation for flat-level recognition and extend it to hierarchical recognition. It naturally captures the intuition that fine-grained recognition requires fine image segmentation whereas coarse-grained recognition requires coarse segmentation; they can all be integrated into one recognition model that drives fine-to-coarse internal visual parsing. Additionally, we introduce a Tree-path KL Divergence loss to enforce consistent, accurate predictions across levels. Our extensive experimentation and analysis demonstrate significant gains in predicting an accurate and consistent taxonomy tree.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
464,962
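The Tree-path KL Divergence loss above is not spelled out in the abstract; the sketch below is one plausible reading, assuming per-level logits and scoring each root-to-leaf path by the sum of its level-wise log-probabilities (KL against a one-hot target path then reduces to a cross-entropy over paths).

```python
# A plausible sketch of a tree-path consistency loss; the exact formulation in
# the paper may differ. `paths` maps each leaf to the class id of every ancestor.
import torch
import torch.nn.functional as F

def tree_path_loss(level_logits, paths, target_leaf):
    # level_logits: list of [B, C_l] logit tensors, one per taxonomy level.
    # paths: LongTensor [n_leaves, n_levels], paths[j, l] = class id at level l.
    log_probs = [F.log_softmax(l, dim=-1) for l in level_logits]
    # Path log-probability = sum of the chosen class log-prob at every level.
    path_scores = sum(lp[:, paths[:, l]] for l, lp in enumerate(log_probs))
    # KL to a one-hot target path reduces to cross-entropy over path scores.
    return F.cross_entropy(path_scores, target_leaf)
```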
2110.09171
Projected Model Counting: Beyond Independent Support
The past decade has witnessed a surge of interest in practical techniques for projected model counting. Despite significant advancements, however, performance scaling remains the Achilles' heel of this field. A key idea used in modern counters is to count models projected on an \emph{independent support}, which is often a small subset of the projection set, i.e., the original set of variables on which we want to project. While this idea has been effective in scaling performance, the question of whether it can be beneficial to count models projected on variables beyond the projection set has not been explored. In this paper, we study this question and show that, contrary to intuition, it can be beneficial to project on variables beyond the projection set. In applications such as verification of binarized neural networks, quantification of information flow, reliability of power grids, etc., a good upper bound on the projected model count often suffices. We show that in several such cases, we can identify a set of variables, called the upper bound support (UBS), that is not necessarily a subset of the projection set and yet counting models projected on the UBS guarantees an upper bound on the true projected model count. Theoretically, a UBS can be exponentially smaller than the smallest independent support. Our experiments show that, even otherwise, UBS-based projected counting can be more efficient than independent-support-based projected counting while yielding bounds of very high quality. Based on extensive experiments, we find that UBS-based projected counting can solve many problem instances that are beyond the reach of a state-of-the-art independent-support-based projected model counter.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
261,712
2010.14772
Around the variational principle for metric mean dimension
We study variational principles for metric mean dimension. First we prove that in the variational principle of Lindenstrauss and Tsukamoto it suffices to take supremum over ergodic measures. Second we derive a variational principle for metric mean dimension involving growth rates of measure-theoretic entropy of partitions decreasing in diameter which holds in full generality and in particular does not necessitate the assumption of tame growth of covering numbers. The expressions involved are a dynamical version of Renyi information dimension. Third we derive a new expression for Geiger-Koch information dimension rate for ergodic shift-invariant measures. Finally we develop a lower bound for metric mean dimension in terms of Brin-Katok local entropy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
203,567
2305.10869
Free Lunch for Privacy Preserving Distributed Graph Learning
Learning on graphs is becoming prevalent in a wide range of applications including social networks, robotics, communication, medicine, etc. These datasets, belonging to individual entities, often contain critical private information. The utilization of data for graph learning applications is hampered by growing privacy concerns from users over data sharing. Existing privacy-preserving methods pre-process the data to extract user-side features, and only these features are used for subsequent learning. Unfortunately, these methods are vulnerable to adversarial attacks that infer private attributes. We present a novel privacy-respecting framework for distributed graph learning and graph-based machine learning. In order to perform graph learning and other downstream tasks on the server side, this framework aims to learn features as well as distances without requiring the actual features, while preserving the original structural properties of the raw data. The proposed framework is quite generic and highly adaptable. We demonstrate its utility in Euclidean space, but it can be applied with any existing method of distance approximation and graph learning for the relevant spaces. Through extensive experimentation on both synthetic and real datasets, we demonstrate the efficacy of the framework by comparing the results obtained without data sharing to those obtained with data sharing as a benchmark. This is, to our knowledge, the first privacy-preserving distributed graph learning framework.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
365,271
2405.13022
LLMs can learn self-restraint through iterative self-reflection
In order to be deployed safely, Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and the uncertainty associated with specific topics. This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach since it depends on the internal knowledge of an LLM. By default, LLMs are trained to maximize the next-token likelihood, which does not teach the model to modulate its answer based on its level of uncertainty. In order to learn self-restraint, we devise a utility function that encourages the model to produce responses only when it is confident in them. This utility function can be used to score generations of different lengths as well as abstention. To optimize this function, we introduce ReSearch, a process of "self-reflection" consisting of iterative self-prompting and self-evaluation. We use the ReSearch algorithm to generate synthetic data on which we finetune our models. Compared to their original versions, our resulting models generate fewer \emph{hallucinations} overall, at no additional inference cost, for both known and unknown topics, as the model learns to selectively restrain itself. In addition, our method elegantly incorporates the ability to abstain by augmenting the samples generated by the model during the search procedure with an answer expressing abstention.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
455,755
1601.07381
Investigating echo state networks dynamics by means of recurrence analysis
In this paper, we elaborate on the well-known interpretability issue in echo state networks (ESNs). The idea is to investigate the dynamics of reservoir neurons with time-series analysis techniques drawn from research on complex systems. Notably, we analyze time series of neuron activations with Recurrence Plots (RPs) and Recurrence Quantification Analysis (RQA), which make it possible to visualize and characterize high-dimensional dynamical systems. We show that this approach is useful in a number of ways. First, the two-dimensional representation offered by RPs provides a way of visualizing the high-dimensional dynamics of a reservoir. Our results suggest that, if the network is stable, reservoir and input exhibit similar line patterns in their respective RPs. Conversely, the more unstable the ESN, the more the RP of the reservoir presents instability patterns. As a second result, we show that the $\mathrm{L_{max}}$ measure is highly correlated with the well-established maximal local Lyapunov exponent. This suggests that complexity measures based on the distribution of RP diagonal lines provide a valuable tool for quantifying the degree of network stability. Finally, our analysis shows that all RQA measures fluctuate in the proximity of the so-called edge of stability, where an ESN typically achieves maximum computational capability. We verify that the determination of the edge of stability provided by such RQA measures is more accurate than two well-known criteria based on the Jacobian matrix of the reservoir. Therefore, we claim that RPs and RQA-based analyses are valuable tools for designing an effective network for a given problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
51,425
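The RP and $\mathrm{L_{max}}$ quantities used above are standard and straightforward to reproduce; a minimal NumPy sketch for reservoir activations follows (the recurrence threshold `eps` is an assumed free parameter).

```python
# Minimal sketch: recurrence plot of reservoir activations and the L_max
# measure (longest diagonal line), used as a stability indicator.
import numpy as np

def recurrence_plot(states, eps):
    # states: [T, N] reservoir activations; R[i, j] = 1 if the two states are eps-close.
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d < eps).astype(int)

def l_max(R):
    # Longest diagonal line, excluding the main diagonal (the line of identity).
    T, longest = R.shape[0], 0
    for k in range(1, T):
        run = best = 0
        for v in np.diagonal(R, offset=k):
            run = run + 1 if v else 0
            best = max(best, run)
        longest = max(longest, best)
    return longest
```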
1412.7288
Observations Concerning the probability of the existence of annihilators for balanced boolean functions
LFSR-based stream ciphers with nonlinear filters or combiners are susceptible to algebraic attacks that use linearization methods to solve an overdefined system of nonlinear equations, and this process is greatly enhanced if the filtering or combining function has a low-degree annihilator. To prevent such an attack, one would choose the parameters of that function so that the degree of its annihilator becomes large enough. As computing power continuously increases, a choice that seems secure today becomes insecure tomorrow. Therefore, a tool is needed to estimate the probability of the existence of annihilators for balanced Boolean functions with parameters that are beyond current computing power. Based on experimental and computational observations, we give in this paper an almost exact estimate of that probability, which represents a great improvement over the previously known upper bound.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
38,795
1912.02889
Learning Super-resolved Depth from Active Gated Imaging
Environment perception for autonomous driving is hampered by the trade-off between range accuracy and resolution: current sensors that deliver very precise depth information are usually restricted to low resolution because of technology or cost limitations. In this work, we exploit depth information from an active gated imaging system based on cost-sensitive diode and CMOS technology. Learning a mapping between the pixel intensities of three gated slices and depth produces a super-resolved depth map image with a respectable relative accuracy of 5% between 25 and 80 m. By design, depth information is perfectly aligned with pixel intensity values.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
156,454
2502.00792
RTBAgent: An LLM-based Agent System for Real-Time Bidding
Real-Time Bidding (RTB) enables advertisers to place competitive bids on impression opportunities instantaneously, striving for cost-effectiveness in a highly competitive landscape. Although RTB has widely benefited from technologies such as deep learning and reinforcement learning, the reliability of related methods is often challenged by discrepancies between online and offline environments and by the rapid fluctuations of online bidding. To handle these challenges, we propose RTBAgent, the first RTB agent system based on large language models (LLMs), which synchronizes with real competitive advertising bidding environments and obtains bidding prices through an integrated decision-making process. Specifically, RTBAgent obtains reasoning ability through LLMs and is further tailored to RTB via auxiliary modules, i.e., a click-through rate estimation model, expert strategy knowledge, and daily reflection. In addition, we propose a two-step decision-making process and a multi-memory retrieval mechanism, which enable RTBAgent to review historical decisions and transaction records and subsequently make decisions that are more adaptive to market changes in real-time bidding. Empirical testing with real advertising datasets demonstrates that RTBAgent significantly enhances profitability. The RTBAgent code will be publicly accessible at: https://github.com/CaiLeng/RTBAgent.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
529,549
2009.14108
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with accomplishing sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution that is obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at https://github.com/ml-jku/align-rudder. YouTube: https://youtu.be/HO-_8ZUl-UY
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
197,937
2309.06105
Towards Visual Taxonomy Expansion
The taxonomy expansion task is essential for organizing the ever-increasing volume of new concepts into existing taxonomies. Most existing methods focus exclusively on textual semantics, leading to an inability to generalize to unseen terms and to the "Prototypical Hypernym Problem." In this paper, we propose Visual Taxonomy Expansion (VTE), which introduces visual features into the taxonomy expansion task. We propose a textual hypernymy learning task and a visual prototype learning task to cluster textual and visual semantics. In addition to these single-modality tasks, we introduce a hyper-proto constraint that integrates textual and visual semantics to produce fine-grained visual semantics. Our method is evaluated on two datasets, where we obtain compelling results. Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%. Our approach also performs better than ChatGPT on the Chinese taxonomy dataset.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
391,313
1711.02545
Online Learning for Changing Environments using Coin Betting
A key challenge in online learning is that classical algorithms can be slow to adapt to changing environments. Recent studies have proposed "meta" algorithms that convert any online learning algorithm to one that is adaptive to changing environments, where the adaptivity is analyzed in a quantity called the strongly-adaptive regret. This paper describes a new meta algorithm that has a strongly-adaptive regret bound that is a factor of $\sqrt{\log(T)}$ better than other algorithms with the same time complexity, where $T$ is the time horizon. We also extend our algorithm to achieve a first-order (i.e., dependent on the observed losses) strongly-adaptive regret bound for the first time, to our knowledge. At its heart is a new parameter-free algorithm for the learning with expert advice (LEA) problem in which experts sometimes do not output advice for consecutive time steps (i.e., \emph{sleeping} experts). This algorithm is derived by a reduction from optimal algorithms for the so-called coin betting problem. Empirical results show that our algorithm outperforms state-of-the-art methods in both learning with expert advice and metric learning scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
84,079
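The coin-betting reduction mentioned above has a well-known building block, the Krichevsky-Trofimov (KT) bettor; the sketch below shows that primitive only, not the paper's meta algorithm or its sleeping-experts machinery.

```python
# A minimal sketch of the coin-betting reduction behind such parameter-free
# methods (Krichevsky-Trofimov bettor); illustrative, not the paper's algorithm.
def kt_coin_betting(grads):
    # grads: sequence of "coin outcomes" g_t in [-1, 1] (e.g., negative loss gradients).
    wealth, grad_sum, bets = 1.0, 0.0, []
    for t, g in enumerate(grads, start=1):
        beta = grad_sum / t          # KT betting fraction
        x = beta * wealth            # signed bet = the learner's prediction
        bets.append(x)
        wealth += g * x              # wealth update after the coin is revealed
        grad_sum += g
    return bets
```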
2105.04419
VDB-EDT: An Efficient Euclidean Distance Transform Algorithm Based on VDB Data Structure
This paper presents a fundamental algorithm, called VDB-EDT, for the Euclidean distance transform (EDT) based on the VDB data structure. The algorithm executes on grid maps and generates the corresponding distance field recording distance information against obstacles, which forms the basis of numerous motion planning algorithms. The contributions of this work are threefold. First, we propose a novel algorithm that facilitates distance transform procedures by optimizing the scheduling priorities of transform functions, which significantly improves the running speed of conventional EDT algorithms. Second, we introduce for the first time the memory-efficient VDB data structure, a customized B+ tree, to represent the distance field hierarchically. Benefiting from its special indexing and caching mechanisms, VDB shows fast (average \textit{O}(1)) random access speed and is thus very suitable for the frequent neighbor-searching operations in EDT. Moreover, given the small scale of existing datasets, we release a large-scale dataset captured from subterranean environments to benchmark EDT algorithms. Extensive experiments on the released dataset and publicly available datasets show that VDB-EDT can reduce memory consumption by about 30%-85%, depending on the sparsity of the environment, while maintaining a running speed competitive with the fastest array-based implementation. The experiments also show that VDB-EDT significantly outperforms the state-of-the-art EDT algorithm in both runtime and memory efficiency, which strongly demonstrates the advantages of our proposed method. The released dataset and source code are available at https://github.com/zhudelong/VDB-EDT.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
234,501
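As a point of reference for the transform-scheduling discussion above, here is a simplified dense-array wavefront EDT that propagates nearest-obstacle coordinates with a priority queue; VDB-EDT's contribution is to run such updates over a VDB tree with optimized scheduling priorities, which this sketch does not attempt.

```python
# Simplified array-based sketch of a wavefront Euclidean distance transform on
# a 2D occupancy grid (Dijkstra-like propagation of nearest-obstacle coords).
import heapq
import numpy as np

def edt(grid):
    # grid: 2D bool array, True = obstacle. Returns per-cell Euclidean distance.
    h, w = grid.shape
    dist = np.full((h, w), np.inf)
    nearest = {}  # cell -> coordinates of its closest obstacle
    pq = []
    for i, j in zip(*np.nonzero(grid)):
        dist[i, j] = 0.0
        nearest[(i, j)] = (i, j)
        heapq.heappush(pq, (0.0, (i, j)))
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue  # stale queue entry
        oi, oj = nearest[(i, j)]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    nd = np.hypot(ni - oi, nj - oj)  # distance to the same obstacle
                    if nd < dist[ni, nj]:
                        dist[ni, nj] = nd
                        nearest[(ni, nj)] = (oi, oj)
                        heapq.heappush(pq, (nd, (ni, nj)))
    return dist
```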
2112.06635
Bounds in the Lee Metric and Optimal Codes
In this paper we investigate known Singleton-like bounds in the Lee metric and characterize the optimal codes, which turn out to be very few. We then focus on Plotkin-like bounds in the Lee metric and present a new bound that extends and refines a previously known one, outperforming it in the case of non-free codes. We then compute the density of optimal codes with regard to the new bound. Finally, we fill a gap in the characterization of Lee-equidistant codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
271,246
2410.01124
Synthetic imagery for fuzzy object detection: A comparative study
Fuzzy object detection is a challenging field of research in computer vision (CV), and distinguishing between fuzzy and non-fuzzy object detection is important. Fuzzy objects such as fire, smoke, mist, and steam present significantly greater complexity in terms of visual features, blurred edges, varying shapes, opacity, and volume compared to non-fuzzy objects such as trees and cars. Collecting a balanced, diverse dataset with accurate annotations is crucial to achieving better ML models for fuzzy objects; however, collection and annotation remain highly manual tasks. In this research, we propose and leverage an alternative method of generating and automatically annotating fully synthetic fire images based on 3D models for training an object detection model. Moreover, the performance and efficiency of ML models trained on synthetic images are compared with ML models trained on real imagery and on mixed imagery. Our findings prove the effectiveness of synthetic data for fire detection, and performance improves as the test dataset covers a broader spectrum of real fires. The findings also illustrate that when synthetic and real imagery are utilized in a mixed training set, the resulting ML model outperforms models trained on real imagery alone as well as models trained on synthetic imagery alone in detecting a broad spectrum of fires. The proposed method for automating the annotation of synthetic fuzzy-object imagery carries substantial implications for reducing both the time and the cost of creating computer vision models specifically tailored to detecting fuzzy objects.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
493,622
1709.07166
Semi-Automated Nasal PAP Mask Sizing using Facial Photographs
We present a semi-automated system for sizing nasal Positive Airway Pressure (PAP) masks based upon a neural network model that was trained with facial photographs of both PAP mask users and non-users. It demonstrated an accuracy of 72% in correctly sizing a mask and 96% accuracy in sizing to within one mask size group. The semi-automated system performed comparably to sizing from manual measurements taken from the same images, which produced 89% and 100% accuracy, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
81,234
2401.05676
Exploring Self- and Cross-Triplet Correlations for Human-Object Interaction Detection
Human-Object Interaction (HOI) detection plays a vital role in scene understanding; it aims to predict the HOI triplet in the form <human, object, action>. Existing methods mainly extract multi-modal features (e.g., appearance, object semantics, human pose) and then fuse them together to directly predict HOI triplets. However, most of these methods focus on seeking self-triplet aggregation but ignore potential cross-triplet dependencies, resulting in ambiguity in action prediction. In this work, we propose to explore Self- and Cross-Triplet Correlations (SCTC) for HOI detection. Specifically, we regard each triplet proposal as a graph in which the human and object are nodes and the action indicates the edge, in order to aggregate self-triplet correlation. We also explore cross-triplet dependencies by jointly considering instance-level, semantic-level, and layout-level relations. Besides, we leverage the CLIP model to help our SCTC obtain interaction-aware features via knowledge distillation, which provides useful action cues for HOI detection. Extensive experiments on the HICO-DET and V-COCO datasets verify the effectiveness of our proposed SCTC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,871
2402.01738
C4Q: A Chatbot for Quantum
Quantum computing is a growing field that promises many real-world applications such as quantum cryptography and quantum finance. The number of people able to use quantum computing is, however, still very small. This limitation stems from the difficulty of understanding the concepts and of knowing how to start coding. Therefore, there is a need for tools that can assist non-experts in overcoming this complexity. One possibility would be to use existing conversational agents; unfortunately, ChatGPT and other Large Language Models produce inaccurate results. This article presents C4Q, a chatbot that accurately answers basic questions and guides users when they try to code quantum programs. Contrary to other approaches, C4Q uses a pre-trained large language model only to discover and classify user requests. It then generates an accurate answer using its own engine. Thanks to this architectural design, C4Q's answers are always correct, and C4Q can thus become a support tool that makes quantum computing more accessible to non-experts.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
426,185
2308.04635
Where's the Liability in Harmful AI Speech?
Generative AI, in particular text-based "foundation models" (large models trained on a huge variety of information including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly "red team" models to identify and mitigate such problematic speech: from "hallucinations" falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to truly finding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI loom above with thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
384,486
1905.03809
Wearable Sensor Data Based Human Activity Recognition using Machine Learning: A new approach
Recent years have witnessed the rapid development of human activity recognition (HAR) based on wearable sensor data. One can find many practical applications in this area, especially in the field of health care. Many machine learning algorithms, such as Decision Trees, Support Vector Machines, Naive Bayes, K-Nearest Neighbors, and Multilayer Perceptrons, are successfully used in HAR. Although these methods are fast and easy to implement, they still have some limitations due to poor performance in a number of situations. In this paper, we propose a novel method based on ensemble learning to boost the performance of these machine learning methods for HAR.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
130,293
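The ensemble idea in the abstract above maps directly onto a soft-voting combination of the base learners it names; a minimal scikit-learn sketch follows (the paper's exact ensembling scheme may differ).

```python
# Minimal sketch: soft-voting ensemble of the base learners named above.
# X/y are placeholders for windowed wearable-sensor features and activity labels.
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

har_ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier()),
        ("svm", SVC(probability=True)),   # probability=True enables soft voting
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier()),
        ("mlp", MLPClassifier(max_iter=500)),
    ],
    voting="soft",  # average predicted class probabilities
)
# Usage: har_ensemble.fit(X_train, y_train); har_ensemble.score(X_test, y_test)
```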
2401.07451
Zone-Specific CSI Feedback for Massive MIMO: A Situation-Aware Deep Learning Approach
Massive MIMO basestations, operating with frequency-division duplexing (FDD), require the users to feedback their channel state information (CSI) in order to design the precoding matrices. Given the powerful capabilities of deep neural networks in learning quantization codebooks, utilizing these networks in compressing the channels and reducing the massive MIMO CSI feedback overhead has recently gained increased interest. Learning one model, however, for the full cell or sector may not be optimal as the channel distribution could change significantly from one \textit{zone} (an area or region) to another. In this letter, we introduce the concept of \textit{zone-specific} CSI feedback. By partitioning the site space into multiple channel zones, the underlying channel distribution can be efficiently leveraged to reduce the CSI feedback. This concept leverages the implicit or explicit user position information to select the right zone-specific model and its parameters. To facilitate the evaluation of associated overhead, we introduce two novel metrics named \textit{model parameters transmission rate} (MPTR) and \textit{model parameters update rate} (MPUR). They jointly provide important insights and guidance for the system design and deployment. Simulation results show that significant gains could be achieved by the proposed framework. For example, using the large-scale Boston downtown scenario of DeepMIMO, the proposed zone-specific CSI feedback approach can on average achieve around 6dB NMSE gain compared to the other solutions, while keeping the same model complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
421,541
1305.4801
Mining top-k granular association rules for recommendation
Recommender systems are important for e-commerce companies as well as researchers. Recently, granular association rules have been proposed for cold-start recommendation. However, existing approaches keep only globally strong rules, so some users may receive no recommendation at all. In this paper, we propose to mine the top-k granular association rules for each user. First, we define three measures of granular association rules: the source coverage, which measures the user granule size; the target coverage, which measures the item granule size; and the confidence, which measures the strength of the association. With the confidence measure, rules can be ranked according to their strength. We then propose algorithms for training the recommender and suggesting items to each user. Experiments are undertaken on the publicly available MovieLens data set. Results indicate that an appropriate granule setting can avoid over-fitting while at the same time helping to obtain high recommendation accuracy.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
24,718
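A minimal sketch of the three measures defined above, under one plausible formalization in which a rule pairs a user granule U with an item granule I and R is the set of observed (user, item) ratings; the exact definitions in the paper may differ in detail.

```python
# Illustrative formalization of the three rule measures; U and I are sets,
# R is a set of observed (user, item) pairs. Not necessarily the paper's exact definitions.
def source_coverage(U, all_users):
    return len(U) / len(all_users)      # relative size of the user granule

def target_coverage(I, all_items):
    return len(I) / len(all_items)      # relative size of the item granule

def confidence(U, I, R):
    # Fraction of (user, item) pairs in the granule pair that are actually rated.
    hits = sum(1 for u in U for i in I if (u, i) in R)
    return hits / (len(U) * len(I))

def top_k_rules_for_user(user, candidate_rules, R, k):
    # Rank the candidate (U, I) rules containing this user by confidence.
    rules = [(U, I) for (U, I) in candidate_rules if user in U]
    return sorted(rules, key=lambda r: confidence(r[0], r[1], R), reverse=True)[:k]
```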
2304.02704
Real-Time Dense 3D Mapping of Underwater Environments
This paper addresses real-time dense 3D reconstruction for a resource-constrained Autonomous Underwater Vehicle (AUV). Underwater vision-guided operations are among the most challenging as they combine 3D motion in the presence of external forces, limited visibility, and absence of global positioning. Obstacle avoidance and effective path planning require online dense reconstructions of the environment. Autonomous operation is central to environmental monitoring, marine archaeology, resource utilization, and underwater cave exploration. To address this problem, we propose to use SVIn2, a robust VIO method, together with a real-time 3D reconstruction pipeline. We provide extensive evaluation on four challenging underwater datasets. Our pipeline produces comparable reconstruction with that of COLMAP, the state-of-the-art offline 3D reconstruction method, at high frame rates on a single CPU.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
356,512
2201.00548
Hybrid intelligence for dynamic job-shop scheduling with deep reinforcement learning and attention mechanism
The dynamic job-shop scheduling problem (DJSP) is a class of scheduling tasks that specifically considers inherent uncertainties such as changing order requirements and possible machine breakdowns in realistic smart manufacturing settings. Since traditional methods cannot dynamically generate effective scheduling strategies in the face of environmental disturbances, we formulate the DJSP as a Markov decision process (MDP) to be tackled by reinforcement learning (RL). For this purpose, we propose a flexible hybrid framework that takes disjunctive graphs as states and a set of general dispatching rules as the action space, requiring minimal prior domain knowledge. An attention mechanism is used as the graph representation learning (GRL) module for feature extraction from states, and a double dueling deep Q-network with prioritized replay and noisy networks (D3QPN) is employed to map each state to the most appropriate dispatching rule. Furthermore, we present Gymjsp, a public benchmark based on the well-known OR-Library, to provide a standardized off-the-shelf facility for the RL and DJSP research communities. Comprehensive experiments on various DJSP instances confirm that our proposed framework is superior to baseline algorithms, with smaller makespan across all instances, and provide empirical justification for the validity of the various components of the hybrid framework.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
273,994
2106.16198
In-distribution adversarial attacks on object recognition models using gradient-free search
Neural networks are susceptible to small perturbations in the form of 2D rotations and shifts, image crops, and even changes in object colors. Past works attribute these errors to dataset bias, claiming that models fail on these perturbed samples because they do not belong to the training data distribution. Here, we challenge this claim and present evidence of the widespread existence of perturbed images within the training data distribution that networks fail to classify. We train models on data sampled from parametric distributions, then search inside this data distribution to find such in-distribution adversarial examples. This is done using our gradient-free evolution strategies (ES) based approach, which we call CMA-Search. Despite training with a large-scale (0.5 million images), unbiased dataset of camera and light variations, CMA-Search can find a failure inside the data distribution in over 71% of cases by perturbing the camera position. With lighting changes, CMA-Search finds misclassifications in 42% of cases. These findings also extend to natural images from the ImageNet and Co3D datasets. This phenomenon of in-distribution adversarial images presents a highly worrisome problem for artificial intelligence -- such images bypass the need for a malicious agent to add engineered noise to induce an adversarial attack. All code, datasets, and demos are available at https://github.com/Spandan-Madan/in_distribution_adversarial_examples.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
244,000
1811.07698
Towards Global Explanations for Credit Risk Scoring
In this paper we propose a method to obtain global explanations for trained black-box classifiers by sampling their decision function to learn alternative interpretable models. The envisaged approach provides a unified solution to approximate non-linear decision boundaries with simpler classifiers while retaining the original classification accuracy. We use a private residential mortgage default dataset as a use case to illustrate the feasibility of this approach, ensuring the decomposability of attributes during pre-processing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
113,846
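A minimal sketch of the sampling-based surrogate idea described above: sample the input space, label the samples with the black-box classifier, and fit a small decision tree as the global explanation. The uniform sampling scheme and tree depth are illustrative assumptions.

```python
# Minimal sketch of a global surrogate; the black box and feature bounds are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def global_surrogate(black_box_predict, low, high, n_samples=10000, max_depth=4):
    # 1) Sample the input space uniformly within per-feature bounds.
    X = np.random.uniform(low, high, size=(n_samples, len(low)))
    # 2) Label the samples with the black-box classifier's decisions.
    y = black_box_predict(X)
    # 3) Learn a small decision tree that imitates the decision boundary.
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    # Fidelity: how often the surrogate agrees with the black box on the samples.
    fidelity = (surrogate.predict(X) == y).mean()
    return surrogate, fidelity
```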
2110.04632
DenseNet approach to segmentation and classification of dermatoscopic skin lesions images
At present, cancer is one of the most important health issues in the world. Because early detection and appropriate treatment are very effective for the recovery and survival of cancer patients, image processing as a diagnostic tool can help doctors diagnose cancer at its first appearance. One of the most important steps in diagnosing a skin lesion is automatically detecting its border, because the accuracy of the subsequent steps depends on it. If these subtleties are identified, they can have a great impact on the diagnosis of the disease. Therefore, there is a good opportunity to develop more accurate algorithms to analyze such images. This paper proposes an improved method for the segmentation and classification of skin lesions using two architectures: U-Net for image segmentation and DenseNet121 for image classification, both of which achieve excellent accuracy. We tested the segmentation architecture of our model on the ISIC-2018 dataset and the classification on the HAM10000 dataset. Our results show that the combination of the U-Net and DenseNet121 architectures provides acceptable results in dermatoscopic image analysis compared to previous research. Another classification examined in this study is that of cancerous versus non-cancerous samples; in this classification, cancerous and non-cancerous samples were detected by the DenseNet121 network with 79.49% and 93.11% accuracy, respectively.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
259,970
2309.05256
Examining the Effect of Pre-training on Time Series Classification
Although the pre-training followed by fine-tuning paradigm is used extensively in many fields, there is still some controversy surrounding the impact of pre-training on the fine-tuning process. Currently, experimental findings based on text and image data lack consensus. To delve deeper into the unsupervised pre-training followed by fine-tuning paradigm, we have extended previous research to a new modality: time series. In this study, we conducted a thorough examination of 150 classification datasets derived from the Univariate Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis reveals several key conclusions. (i) Pre-training can only help improve the optimization process for models that fit the data poorly, rather than those that fit the data well. (ii) Pre-training does not exhibit the effect of regularization when given sufficient training time. (iii) Pre-training can only speed up convergence if the model has sufficient ability to fit the data. (iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantage of pre-training on the original data volume, such as faster convergence. (v) While both the pre-training task and the model structure determine the effectiveness of the paradigm on a given dataset, the model structure plays a more significant role.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
391,016
1907.09356
Decentralized Deep Learning with Arbitrary Communication Compression
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches suffer from limited bandwidth of the network, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD $-$ recently introduced and analyzed for strongly-convex objectives only $-$ converges under arbitrary high compression ratio on general non-convex functions at the rate $O\bigl(1/\sqrt{nT}\bigr)$ where $T$ denotes the number of iterations and $n$ the number of workers. The algorithm achieves linear speedup in the number of workers and supports higher compression than previous state-of-the art methods. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over distributed user devices, connected by a social network and (ii) in a datacenter (outperforming all-reduce time-wise).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
139,337
2110.03478
Complex-valued Federated Learning with Differential Privacy and MRI Applications
Federated learning enhanced with Differential Privacy (DP) is a powerful privacy-preserving strategy to protect individuals sharing their sensitive data for processing in fields such as medicine and healthcare. Many medical applications, for example magnetic resonance imaging (MRI), rely on complex-valued signal processing techniques for data acquisition and analysis. However, the appropriate application of DP to complex-valued data is still underexplored. To address this issue, from the theoretical side, we introduce the complex-valued Gaussian mechanism, whose behaviour we characterise in terms of $f$-DP, $(\varepsilon, \delta)$-DP and R\'enyi-DP. Moreover, we generalise the fundamental algorithm DP stochastic gradient descent to complex-valued neural networks and present novel complex-valued neural network primitives compatible with DP. Experimentally, we showcase a proof-of-concept by training federated complex-valued neural networks with DP on a real-world task (MRI pulse sequence classification in $k$-space), yielding excellent utility and privacy. Our results highlight the relevance of combining federated learning with robust privacy-preserving techniques in the MRI context.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
259,517
2308.13641
ML-Powered Index Tuning: An Overview of Recent Progress and Open Challenges
The scale and complexity of workloads in modern cloud services have brought into sharper focus a critical challenge in automated index tuning -- the need to recommend high-quality indexes while maintaining index tuning scalability. This challenge is further compounded by the requirement for automated index implementations to introduce minimal query performance regressions in production deployments, representing a significant barrier to achieving scalability and full automation. This paper directs attention to these challenges within automated index tuning and explores ways in which machine learning (ML) techniques provide new opportunities in their mitigation. In particular, we reflect on recent efforts in developing ML techniques for workload selection, candidate index filtering, speeding up index configuration search, reducing the amount of query optimizer calls, and lowering the chances of performance regressions. We highlight the key takeaways from these efforts and underline the gaps that need to be closed for their effective functioning within the traditional index tuning framework. Additionally, we present a preliminary cross-platform design aimed at democratizing index tuning across multiple SQL-like systems -- an imperative in today's continuously expanding data system landscape. We believe our findings will help provide context and impetus to the research and development efforts in automated index tuning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
387,994
1204.5028
Regenerating Codes: A System Perspective
The explosion of the amount of data stored in cloud systems calls for more efficient paradigms for redundancy. While replication is widely used to ensure data availability, erasure correcting codes provide a much better trade-off between storage and availability. Regenerating codes are good candidates, for they also offer low repair costs in terms of network bandwidth. While they have been proven optimal, they are difficult to understand and parameterize. In this paper we provide an analysis of regenerating codes for practitioners to grasp the various trade-offs. More specifically, we make two contributions: (i) we study the impact of the parameters by conducting an analysis at the level of the system, rather than at the level of a single device; (ii) we compare the computational costs of various implementations of codes and highlight the most efficient ones. Our goal is to provide system designers with concrete information to help them choose the best parameters and designs for regenerating codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
15,628
1903.05350
The Fourier Spectral Characterization for the Correlation-Immune Functions over Fp
Correlation-immune functions serve as an important metric for measuring the resistance of a cryptosystem against correlation attacks. The existing literature emphasizes matrices, orthogonal arrays, and Walsh-Hadamard spectra to characterize correlation-immune functions over $\mathbb{F}_p$ ($p \geq 2$ is a prime). Recently, Wang and Gong investigated the Fourier spectral characterization over the complex field for correlation-immune Boolean functions. In this paper, the discrete Fourier transform (DFT) of non-binary functions is studied. It is shown that a function $f$ over $\mathbb{F}_p$ is $m$th-order correlation-immune if and only if its Fourier spectrum vanishes at a specific location under any permutation of variables. Moreover, if $f$ is a symmetric function, then $f$ is correlation-immune if and only if its Fourier spectrum vanishes at only one location.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
124,147
2405.02412
Deep Learning and Transfer Learning Architectures for English Premier League Player Performance Forecasting
This paper presents a groundbreaking model for forecasting English Premier League (EPL) player performance using convolutional neural networks (CNNs). We evaluate Ridge regression, LightGBM and CNNs on the task of predicting upcoming player FPL score based on historical FPL data over the previous weeks. Our baseline models, Ridge regression and LightGBM, achieve solid performance and emphasize the importance of recent FPL points, influence, creativity, threat, and playtime in predicting EPL player performances. Our optimal CNN architecture achieves better performance with fewer input features and even outperforms the best previous EPL player performance forecasting models in the literature. The optimal CNN architecture also achieves very strong Spearman correlation with player rankings, indicating its strong implications for supporting the development of FPL artificial intelligence (AI) Agents and providing analysis for FPL managers. We additionally perform transfer learning experiments on soccer news data collected from The Guardian, for the same task of predicting upcoming player score, but do not identify a strong predictive signal in natural language news texts, achieving worse performance compared to both the CNN and baseline models. Overall, our CNN-based approach marks a significant advancement in EPL player performance forecasting and lays the foundation for transfer learning to other EPL prediction tasks such as win-loss odds for sports betting and the development of cutting-edge FPL AI Agents.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
451,758
1707.07255
Detecting and Grouping Identical Objects for Region Proposal and Classification
Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
77,585
2006.16701
Hierarchical Qualitative Clustering: clustering mixed datasets with critical qualitative information
Clustering can be used to extract insights from data or to verify some of the assumptions held by domain experts, namely data segmentation. In the literature, few methods can cluster qualitative values using the context provided by other variables present in the data without losing interpretability. Moreover, the metrics for calculating dissimilarity between qualitative values often scale poorly to high-dimensional mixed datasets. In this study, we propose Hierarchical Qualitative Clustering (HQC), a novel method for clustering qualitative values based on hierarchical clustering and Maximum Mean Discrepancy. HQC maintains the original interpretability of the qualitative information present in the dataset. We apply HQC to two datasets. Using a mixed dataset provided by Spotify, we showcase how our method can cluster music artists based on the quantitative features of thousands of songs. In addition, using financial features of companies, we cluster company industries and discuss the implications for investment portfolio diversification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,884
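A minimal sketch of the HQC recipe described above, assuming each qualitative value is represented by the rows of its associated quantitative context (e.g., an artist by its songs' features); a linear-kernel MMD (the distance between mean embeddings) stands in for the paper's MMD estimator.

```python
# Minimal sketch: MMD-based dissimilarity between qualitative values, fed into
# standard hierarchical clustering. The linear-kernel MMD is a simplification.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def linear_mmd(X, Y):
    # MMD with a linear kernel = distance between the groups' mean embeddings.
    return np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))

def hqc(groups):
    # groups: list of 2D arrays, one per qualitative value.
    n = len(groups)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = linear_mmd(groups[i], groups[j])
    # Hierarchical clustering on the condensed dissimilarity matrix.
    return linkage(squareform(D), method="average")
```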
2009.11990
A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder
Traditional linear subspace reduced order models (LS-ROMs) are able to accelerate physical simulations, in which the intrinsic solution space falls into a subspace with a small dimension, i.e., the solution space has a small Kolmogorov n-width. However, for physical phenomena not of this type, e.g., any advection-dominated flow phenomena, such as in traffic flow, atmospheric flows, and air flow over vehicles, a low-dimensional linear subspace poorly approximates the solution. To address cases such as these, we have developed a fast and accurate physics-informed neural network ROM, namely nonlinear manifold ROM (NM-ROM), which can better approximate high-fidelity model solutions with a smaller latent space dimension than the LS-ROMs. Our method takes advantage of the existing numerical methods that are used to solve the corresponding full order models. The efficiency is achieved by developing a hyper-reduction technique in the context of the NM-ROM. Numerical results show that neural networks can learn a more efficient latent space representation on advection-dominated data from 1D and 2D Burgers' equations. A speedup of up to 2.6 for 1D Burgers' and a speedup of 11.7 for 2D Burgers' equations are achieved with an appropriate treatment of the nonlinear terms through a hyper-reduction technique. Finally, a posteriori error bounds for the NM-ROMs are derived that take account of the hyper-reduced operators.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
197,303
1909.04596
Prediction of Overall Survival of Brain Tumor Patients
Automated brain tumor segmentation plays an important role in the diagnosis and prognosis of the patient. In addition, features from the tumorous brain help in predicting patients' overall survival. The main focus of this paper is to segment tumors from the BRATS 2018 benchmark dataset and to use age, shape, and volumetric features to predict the overall survival of patients. The random forest classifier achieves an overall survival accuracy of 59% on the test dataset and 67% on the dataset with resection status of gross total resection. The proposed approach uses fewer features yet achieves better accuracy than state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
144,848
2104.08448
Data Distillation for Text Classification
Deep learning techniques have achieved great success in many fields, while at the same time deep learning models are getting more complex and expensive to compute, which severely hinders their wide application. In order to alleviate this problem, model distillation has emerged as an effective means of compressing a large model into a smaller one without a significant drop in accuracy. In this paper, we study a related but orthogonal issue, data distillation, which aims to distill the knowledge from a large training dataset down to a smaller, synthetic one. It has the potential to address the problem of training ever-larger neural networks from small datasets. We develop a novel data distillation method for text classification and evaluate it on eight benchmark datasets. The finding that distilled data amounting to only 0.1% of the original text data achieves approximately 90% of the original performance is rather impressive.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
230,804
2004.06002
Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training
Although two-stage object detectors have continuously advanced the state-of-the-art performance in recent years, the training process itself is far from crystal clear. In this work, we first point out the inconsistency between the fixed network settings and the dynamic training procedure, which greatly affects performance. For example, the fixed label assignment strategy and regression loss function cannot fit the distribution change of proposals and are thus harmful to training high-quality detectors. Consequently, we propose Dynamic R-CNN to adjust the label assignment criterion (IoU threshold) and the shape of the regression loss function (parameters of the SmoothL1 loss) automatically, based on the statistics of proposals during training. This dynamic design makes better use of the training samples and pushes the detector to fit more high-quality samples. Specifically, our method improves upon the ResNet-50-FPN baseline by 1.9% AP and 5.5% AP$_{90}$ on the MS COCO dataset with no extra overhead. Code and models are available at https://github.com/hkzhang95/DynamicRCNN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
172,391
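A minimal sketch of the dynamic label-assignment idea described above: track a statistic of proposal IoUs during training and periodically reset the positive/negative IoU threshold to it. The choice of statistic, `top_k`, and the update interval are illustrative assumptions.

```python
# Minimal sketch of dynamic label assignment: the IoU threshold tracks
# the statistics of proposal IoUs as training progresses.
import numpy as np

class DynamicIoUThreshold:
    def __init__(self, init_thr=0.5, top_k=75, interval=100):
        self.thr, self.top_k, self.interval = init_thr, top_k, interval
        self.buffer = []

    def update(self, proposal_ious, iteration):
        # Record the K-th largest IoU among this batch's proposals.
        k = min(self.top_k, len(proposal_ious))
        self.buffer.append(np.sort(proposal_ious)[-k])
        if iteration % self.interval == 0:
            # Periodically reset the threshold to the mean of the recent statistic.
            self.thr = float(np.mean(self.buffer))
            self.buffer = []
        return self.thr
```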
2406.06220
Label-Looping: Highly Efficient Decoding for Transducers
This paper introduces a highly efficient greedy decoding algorithm for Transducer-based speech recognition models. We redesign the standard nested-loop design for RNN-T decoding, swapping loops over frames and labels: the outer loop iterates over labels, while the inner loop iterates over frames searching for the next non-blank symbol. Additionally, we represent partial hypotheses in a special structure using CUDA tensors, supporting parallelized hypotheses manipulations. Experiments show that the label-looping algorithm is up to 2.0X faster than conventional batched decoding when using batch size 32. It can be further combined with other compiler or GPU call-related techniques to achieve even more speedup. Our algorithm is general-purpose and can work with both conventional Transducers and Token-and-Duration Transducers. We open-source our implementation to benefit the research community.
false
false
true
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
462,485
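A minimal single-utterance sketch of the label-looping control flow described above (outer loop over labels, inner loop over frames); `joint` and `predictor` are stand-ins for the transducer's networks, and the paper's batched CUDA-tensor hypothesis handling is out of scope here.

```python
# Sketch of label-looping greedy decoding for one utterance; the predictor is
# assumed to return (prediction_output, new_state). Illustrative, not the paper's code.
import torch

def label_looping_greedy(encoder_out, joint, predictor, blank_id, max_symbols=200):
    # encoder_out: [T, D] acoustic frames for one utterance.
    hyp, state, last_label = [], None, blank_id
    t, T = 0, encoder_out.shape[0]
    while t < T and len(hyp) < max_symbols:        # outer loop: over labels
        pred_out, state = predictor(last_label, state)
        emitted = False
        while t < T:                                # inner loop: over frames
            logits = joint(encoder_out[t], pred_out)
            label = int(torch.argmax(logits))
            if label != blank_id:
                hyp.append(label)
                last_label = label
                emitted = True
                break                               # found the next non-blank symbol
            t += 1                                  # blank: advance to the next frame
        if not emitted:
            break                                   # ran out of frames
    return hyp
```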
1911.02562
Gextext: Disease Network Extraction from Biomedical Literature
PURPOSE: We propose a fully unsupervised method to learn latent disease networks directly from unstructured biomedical text corpora. This method addresses current challenges in unsupervised knowledge extraction, such as the detection of long-range dependencies and requirements for large training corpora. METHODS: Let C be a corpus of n text chunks. Let V be a set of p disease terms occurring in the corpus. Let X indicate the occurrence of V in C. Gextext identifies disease similarities through positively correlated occurrence patterns. This information is combined to generate a graph on which geodesic distance describes dissimilarity. Diseasomes were learned by Gextext and GloVe on corpora of 100-1000 PubMed abstracts. Similarity matrix estimates were validated against biomedical semantic similarity metrics and gene profile similarity. RESULTS: Geodesic distance on Gextext-inferred diseasomes correlated inversely with external measures of semantic similarity. Gene profile similarity also correlated significantly with proximity on the inferred graph. Gextext outperformed GloVe in our experiments. The information contained in the Gextext graph exceeded the explicit information content of the text. CONCLUSIONS: Gextext extracts latent relationships from unstructured text, enabling fully unsupervised modelling of diseasome graphs from PubMed abstracts.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
152,395
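A minimal sketch of the Gextext-style pipeline described above: keep positively correlated occurrence patterns as graph edges and read dissimilarity off geodesic distances. The correlation threshold and edge weighting are illustrative assumptions.

```python
# Minimal sketch: graph from positively correlated term occurrences, geodesic
# distance as dissimilarity. Threshold and weights are illustrative choices.
import numpy as np
import networkx as nx

def occurrence_graph(X, terms, corr_threshold=0.2):
    # X: binary occurrence matrix [n_chunks, n_terms].
    C = np.corrcoef(X, rowvar=False)
    G = nx.Graph()
    G.add_nodes_from(terms)
    n = len(terms)
    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] > corr_threshold:           # keep positive co-occurrence only
                G.add_edge(terms[i], terms[j], weight=1.0 - C[i, j])
    return G

def dissimilarity(G, a, b):
    # Geodesic (shortest-path) distance on the inferred graph.
    return nx.shortest_path_length(G, a, b, weight="weight")
```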
1905.05947
Joint haze image synthesis and dehazing with mmd-vae losses
Fog and haze are low-visibility weather conditions that are adversarial to the driving safety of intelligent vehicles equipped with optical sensors such as cameras and LiDARs. Therefore, image dehazing for perception enhancement and haze image synthesis for testing perception abilities are equally important in the development of such autonomous driving systems. From the viewpoint of image translation, these two problems are essentially dual to each other and thus have the potential to be solved jointly. In this paper, we propose an unsupervised image-to-image translation framework based on Variational Autoencoders (VAE) and Generative Adversarial Nets (GAN) to handle haze image synthesis and haze removal simultaneously. Since the KL divergence in the VAE objective cannot guarantee the optimal mapping under imbalanced and unpaired training samples of limited size, a Maximum Mean Discrepancy (MMD) based VAE is utilized to ensure translation consistency in both directions. A comprehensive analysis of both the synthesis and the dehazing performance of our method demonstrates its feasibility and practicability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
130,864
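The MMD term that the haze synthesis/dehazing abstract above substitutes for the KL divergence can be sketched as an RBF-kernel estimate between encoded latents and samples from the prior. Bandwidth selection and the full translation framework are simplified away here; this is only the loss term, under those assumptions.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(z, prior_z, sigma=1.0):
    """z: (N, d) latents from the encoder; prior_z: (N, d) samples from N(0, I)."""
    return (rbf_kernel(z, z, sigma).mean()
            + rbf_kernel(prior_z, prior_z, sigma).mean()
            - 2 * rbf_kernel(z, prior_z, sigma).mean())

# Typical usage: z = encoder(images); prior_z = torch.randn_like(z)
# total_loss = reconstruction_loss + lam * mmd_loss(z, prior_z)
```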
2004.13154
Voxgraph: Globally Consistent, Volumetric Mapping using Signed Distance Function Submaps
Globally consistent dense maps are a key requirement for long-term robot navigation in complex environments. While previous works have addressed the challenges of dense mapping and global consistency, most require more computational resources than may be available on-board small robots. We propose a framework that creates globally consistent volumetric maps on a CPU and is lightweight enough to run on computationally constrained platforms. Our approach represents the environment as a collection of overlapping Signed Distance Function (SDF) submaps, and maintains global consistency by computing an optimal alignment of the submap collection. By exploiting the underlying SDF representation, we generate correspondence-free constraints between submap pairs that are computationally efficient enough to optimize the global problem each time a new submap is added. We deploy the proposed system on a hexacopter Micro Aerial Vehicle (MAV) with an Intel i7-8650U CPU in two realistic scenarios: mapping a large-scale area using a 3D LiDAR, and mapping an industrial space using an RGB-D camera. In the large-scale outdoor experiments, the system optimizes a 120x80m map in less than 4s and produces absolute trajectory RMSEs of less than 1m over 400m trajectories. Our complete system, called voxgraph, is available as open source.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
174,458
1904.03436
Unsupervised Embedding Learning via Invariant and Spreading Instance Feature
This paper studies the unsupervised embedding learning problem, which requires an effective similarity measurement between samples in low-dimensional embedding space. Motivated by the positive concentrated and negative separated properties observed from category-wise supervised learning, we propose to utilize instance-wise supervision to approximate these properties, which aims at learning data augmentation invariant and instance spread-out features. To achieve this goal, we propose a novel instance-based softmax embedding method, which directly optimizes the 'real' instance features on top of the softmax function. It achieves significantly faster learning speed and higher accuracy than all existing methods. The proposed method performs well for both seen and unseen testing categories with cosine similarity. It also achieves competitive performance even without a pre-trained network over samples from fine-grained categories.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,718
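One plausible reading of the instance-based softmax embedding objective above: each augmented view is classified as its own instance among all instances in the batch, encouraging augmentation invariance while spreading other instances apart. The temperature value and the batch-level formulation are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def instance_softmax_loss(f_aug, f_orig, tau=0.1):
    """f_aug, f_orig: (B, D) L2-normalized features of two views of B instances."""
    logits = f_aug @ f_orig.t() / tau        # cosine similarities / temperature
    targets = torch.arange(f_aug.size(0), device=f_aug.device)
    return F.cross_entropy(logits, targets)  # view i should match instance i only
```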
2306.03042
SERT: A Transformer Based Model for Spatio-Temporal Sensor Data with Missing Values for Environmental Monitoring
Environmental monitoring is crucial to our understanding of climate change, biodiversity loss and pollution. The availability of large-scale spatio-temporal data from sources such as sensors and satellites allows us to develop sophisticated models for forecasting and understanding key drivers. However, the data collected from sensors often contain missing values due to faulty equipment or maintenance issues. The missing values rarely occur simultaneously, leading to data that are multivariate, misaligned, sparse time series. We propose two models that are capable of performing multivariate spatio-temporal forecasting while handling missing data naturally without the need for imputation. The first model is a transformer-based model, which we name SERT (Spatio-temporal Encoder Representations from Transformers). The second is a simpler model named SST-ANN (Sparse Spatio-Temporal Artificial Neural Network) which is capable of providing interpretable results. We conduct extensive experiments on two different datasets for multivariate spatio-temporal forecasting and show that our models have competitive or superior performance to those at the state-of-the-art.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
371,167
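A hedged sketch of the "no imputation" idea from the SERT abstract above: observed readings are flattened into (sensor, time, value) tokens, and a Transformer encoder attends only over what was actually measured, so missing entries simply never become tokens. The embedding sizes and encoder configuration are placeholder choices, not the published architecture.

```python
import torch
import torch.nn as nn

class SparseObsEncoder(nn.Module):
    def __init__(self, n_sensors, d=64):
        super().__init__()
        self.sensor_emb = nn.Embedding(n_sensors, d)
        self.val_proj = nn.Linear(1, d)
        self.time_proj = nn.Linear(1, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, sensor_ids, times, values, pad_mask):
        # (B, L) ids / times / values for *observed* readings only; pad_mask
        # marks padding introduced when batching variable-length sequences.
        x = (self.sensor_emb(sensor_ids)
             + self.time_proj(times.unsqueeze(-1))
             + self.val_proj(values.unsqueeze(-1)))
        return self.encoder(x, src_key_padding_mask=pad_mask)
```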
1709.05172
Optimal Base Station Design with Limited Fronthaul: Massive Bandwidth or Massive MIMO?
To reach a cost-efficient 5G architecture, the use of remote radio heads connected through a fronthaul to baseband controllers is a promising solution. However, the fronthaul links must support high bit rates as 5G networks are projected to use wide bandwidths and many antennas. Upgrading all of the existing fronthaul connections would be cumbersome, while replacing the remote radio head and upgrading the software in the baseband controllers is relatively simple. In this paper, we consider the uplink and seek the answer to the question: If we have a fixed fronthaul capacity and can deploy any technology in the remote radio head, what is the optimal technology? In particular, we optimize the number of antennas, quantization bits and bandwidth to maximize the sum rate under a fronthaul capacity constraint. The analytical results suggest that it is preferable to operate with many antennas equipped with low-resolution analog-to-digital converters, while the interplay between the number of antennas and the bandwidth depends on various parameters. The numerical analysis provides further insights into the design of communication systems with limited fronthaul capacity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
80,800
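The design question in the fronthaul abstract above can be made concrete with a toy search: subject to a fronthaul budget of roughly 2 x antennas x ADC bits x bandwidth bits per second (I and Q samples at the Nyquist rate), which configuration maximizes a simple rate proxy? The quantization model below (signal power scaled by 1 - 2^(-2b) for a b-bit ADC) is a textbook approximation, not the paper's exact expressions.

```python
from itertools import product
from math import log2

def rate_proxy(M, b, B, snr=1.0):
    rho = 1 - 2.0 ** (-2 * b)        # fraction of signal power kept by a b-bit ADC
    sinr = rho * M * snr / (1 + (1 - rho) * M * snr)
    return B * log2(1 + sinr)

C_FH = 10e9                          # 10 Gbit/s fronthaul budget (illustrative)
candidates = product([8, 16, 32, 64, 128, 256],       # antennas M
                     [1, 2, 3, 4, 6, 8, 12],          # ADC bits b
                     [20e6, 50e6, 100e6, 200e6])      # bandwidth B (Hz)
feasible = [c for c in candidates if 2 * c[0] * c[1] * c[2] <= C_FH]
best = max(feasible, key=lambda c: rate_proxy(*c))
print(best, round(rate_proxy(*best) / 1e6), "Mbit/s")
```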
1201.4116
Analysis of Cell Load Coupling for LTE Network Planning and Optimization
System-centric modeling and analysis are of key significance in planning and optimizing cellular networks. In this paper, we provide a mathematical analysis of performance modeling for LTE networks. The system model characterizes the coupling relation between the cell load factors, taking into account non-uniform traffic demand and interference between the cells with arbitrary network topology. Solving the model enables a network-wide performance evaluation in resource consumption. We develop and prove both sufficient and necessary conditions for the feasibility of the load-coupling system, and provide results related to computational aspects for numerically approaching the solution. The theoretical findings are accompanied by experimental results to instructively illustrate the application in optimizing LTE network configuration.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
13,898
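The load-coupling system in the abstract above has the form rho = f(rho), where each cell's load grows with its neighbours' loads through interference; when the system is feasible, fixed-point iteration converges to the solution. The sketch below collapses per-user demand into one term per cell and uses generic symbols (demand d, gains G, powers P, noise sigma2), so it is illustrative rather than the paper's model.

```python
import numpy as np

def load_iteration(d, G, P, sigma2, n_prb, iters=200):
    """Fixed-point iteration rho <- f(rho). d: per-cell demand, G: (n, n) gains
    G[i, j] from cell j to cell i's users, P: transmit powers, n_prb: resource
    blocks per cell. All quantities are illustrative."""
    n = len(d)
    rho = np.zeros(n)
    for _ in range(iters):
        new = np.empty(n)
        for i in range(n):
            interf = sum(rho[j] * P[j] * G[i, j] for j in range(n) if j != i)
            sinr = P[i] * G[i, i] / (interf + sigma2)
            new[i] = d[i] / (n_prb * np.log2(1 + sinr))
        rho = new
    return rho   # the system is feasible when every load stays <= 1
```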
cs/0510032
Polar Polytopes and Recovery of Sparse Representations
Suppose we have a signal y which we wish to represent using a linear combination of a number of basis atoms a_i, y = sum_i x_i a_i = Ax. The problem of finding the minimum L0-norm representation for y is a hard problem. The Basis Pursuit (BP) approach proposes to find the minimum L1-norm representation instead, which corresponds to a linear program (LP) that can be solved using modern LP techniques, and several recent authors have given conditions for the BP (minimum L1-norm) and sparse (minimum L0-norm) solutions to be identical. In this paper, we explore this sparse representation problem using the geometry of convex polytopes, as recently introduced into the field by Donoho. By considering the dual LP we find that the so-called polar polytope P* of the centrally-symmetric polytope P whose vertices are the atom pairs ±a_i is particularly helpful in providing us with geometrical insight into optimality conditions given by Fuchs and Tropp for non-unit-norm atom sets. In exploring this geometry we are able to tighten some of these earlier results, showing for example that the Fuchs condition is both necessary and sufficient for L1-unique-optimality, and that there are situations where Orthogonal Matching Pursuit (OMP) can eventually find all L1-unique-optimal solutions with m nonzeros even if ERC fails for m, if allowed to run for more than m steps.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,010
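The Basis Pursuit step discussed above is just the standard linear program obtained by splitting x = u - v with u, v >= 0 and minimizing sum(u) + sum(v) subject to A(u - v) = y; a small runnable sketch:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# A 3-sparse signal recovered from 20 random measurements (recovery is
# expected but not guaranteed at these sizes):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40); x0[[3, 17, 29]] = [1.0, -2.0, 0.5]
print(np.abs(basis_pursuit(A, A @ x0) - x0).max())
```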
1706.05011
Measuring Personalization of Web Search
Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing level of personalization is leading to concerns about Filter Bubble effects, where certain users are simply unable to access information that the search engines' algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions. First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users on Google Web Search and 100 users on Bing. We find that, on average, 11.7% of results show differences due to personalization on Google, while 15.8% of results are personalized on Bing, but that this varies widely by search query and by result ranking. Third, we investigate the user features used to personalize on Google Web Search and Bing. Surprisingly, we only find measurable personalization as a result of searching with a logged-in account and the IP address of the searching user. Our results are a first step towards understanding the extent and effects of personalization on Web search engines today.
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
75,427
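The core of the measurement methodology above, in spirit: issue the same query from a "treatment" account and a control, then quantify how much the result lists differ. A set-based Jaccard distance ignores ranking; rank-aware comparisons like the ones used in the study can be layered on top. Result lists here are just lists of URLs.

```python
def jaccard_distance(results_a, results_b):
    """1 - |A intersect B| / |A union B| over the returned URL sets."""
    a, b = set(results_a), set(results_b)
    return 1 - len(a & b) / len(a | b) if (a | b) else 0.0

control = ["u1", "u2", "u3", "u4", "u5"]
treated = ["u1", "u3", "u6", "u2", "u7"]
print(jaccard_distance(control, treated))   # 0.571..., i.e. the lists differ
```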
2407.03018
An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis
Generative modeling seeks to approximate the statistical properties of real data, enabling synthesis of new data that closely resembles the original distribution. Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic Models (DDPMs) represent significant advancements in generative modeling, drawing inspiration from game theory and thermodynamics, respectively. Nevertheless, the exploration of generative modeling through the lens of biological evolution remains largely untapped. In this paper, we introduce a novel family of models termed Generative Cellular Automata (GeCA), inspired by the evolution of an organism from a single cell. GeCAs are evaluated as an effective augmentation tool for retinal disease classification across two imaging modalities: Fundus and Optical Coherence Tomography (OCT). In the context of OCT imaging, where data is scarce and the distribution of classes is inherently skewed, GeCA significantly boosts the performance of 11 different ophthalmological conditions, achieving a 12% increase in the average F1 score compared to conventional baselines. GeCAs outperform both diffusion methods that incorporate UNet or state-of-the-art variants with transformer-based denoising models, under similar parameter constraints. Code is available at: https://github.com/xmed-lab/GeCA.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
469,988
2211.07104
MetaKRec: Collaborative Meta-Knowledge Enhanced Recommender System
Knowledge graph (KG) enhanced recommendation has demonstrated improved performance in recommendation systems (RecSys) and attracted considerable research interest. Recently the literature has adopted graph neural networks (GNNs) on the collaborative knowledge graph and built end-to-end KG-enhanced RecSys. However, the majority of these approaches have three limitations: (1) they treat the collaborative knowledge graph as a homogeneous graph and overlook the highly heterogeneous relationships among items, (2) they lack a design to explicitly leverage the rich side information, and (3) they overlook the rich knowledge in user preference. To fill this gap, in this paper, we explore the rich, heterogeneous relationships among items and propose a new KG-enhanced recommendation model called Collaborative Meta-Knowledge Enhanced Recommender System (MetaKRec). In particular, we focus on modeling the rich, heterogeneous semantic relationships among items and construct several collaborative Meta-KGs to explicitly depict the relatedness of the items under the guidance of meta-knowledge. In addition to the knowledge obtained from the KG, we leverage user knowledge extracted from user preferences to construct the Meta-KGs. The constructed Meta-KGs can capture the knowledge from both the knowledge graph and user preference. Furthermore, we utilize a light convolution encoder to recursively integrate the item relationships in each collaborative Meta-KG. This scheme allows us to explicitly gather the heterogeneous semantic relationships among items and encode them into the representations of items. In addition, we propose channel attention to fuse the item and user representations from different Meta-KGs. Extensive experiments are conducted on four real-world benchmark datasets, demonstrating significant gains over the state-of-the-art baselines in both regular and cold-start recommendation settings.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
330,130
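One plausible reading of the "light convolution encoder" above is a LightGCN-style propagation: no feature transforms or nonlinearities, just normalized neighbourhood averaging on each meta-KG, followed by a fusion step over graphs. This is an assumption about the architecture, not the paper's published code.

```python
import torch

def light_conv(A_hat, E, layers=2):
    """A_hat: (N, N) symmetrically normalized adjacency of one meta-KG;
    E: (N, d) item embeddings. Returns layer-averaged embeddings."""
    outs, x = [E], E
    for _ in range(layers):
        x = A_hat @ x                 # propagate without weights or activations
        outs.append(x)
    return torch.stack(outs).mean(0)

# Fusion over several meta-KGs could then be a weighted (attention) sum:
# fused = sum(w_g * light_conv(A_g, E) for A_g, w_g in zip(meta_kgs, weights))
```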
2110.14810
Telling Creative Stories Using Generative Visual Aids
Can visual artworks created using generative visual algorithms inspire human creativity in storytelling? We asked writers to write creative stories from a starting prompt, and provided them with visuals created by generative AI models from the same prompt. Compared to a control group, writers who used the visuals as story writing aid wrote significantly more creative, original, complete and visualizable stories, and found the task more fun. Of the generative algorithms used (BigGAN, VQGAN, DALL-E, CLIPDraw), VQGAN was the most preferred. The control group that did not view the visuals did significantly better in integrating the starting prompts. Findings indicate that cross modality inputs by AI can benefit divergent aspects of creativity in human-AI co-creation, but hinders convergent thinking.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
263,648
2406.06671
Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
Decision support systems based on prediction sets help humans solve multiclass classification tasks by narrowing down the set of potential label values to a subset of them, namely a prediction set, and asking them to always predict label values from the prediction sets. While this type of system has been proven to be effective at improving the average accuracy of the predictions made by humans, by restricting human agency, it may cause harm: a human who has succeeded at predicting the ground-truth label of an instance on their own may have failed had they used these systems. In this paper, our goal is to control how frequently a decision support system based on prediction sets may cause harm, by design. To this end, we start by characterizing the above notion of harm using the theoretical framework of structural causal models. Then, we show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own. Further, we also show that, under a weaker monotonicity assumption, which can be verified experimentally, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own. Building upon these assumptions, we introduce a computational framework to design decision support systems based on prediction sets that are guaranteed to cause harm less frequently than a user-specified value using conformal risk control. We validate our framework using real human predictions from two different human subject studies and show that, in decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
true
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
462,735
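A hedged sketch of how the harm frequency above can be estimated from humans' solo predictions under the monotonicity assumption: whenever a human was correct on their own but the ground-truth label is excluded from the prediction set, the system forces an error, so counting such cases estimates the harm rate. The conformal risk control machinery that turns this into a guarantee is simplified away.

```python
def harm_estimate(solo_preds, labels, prediction_sets):
    """Fraction of instances where the human alone was right but the
    prediction set excludes the true label (forcing an error)."""
    harmed = sum(1 for p, y, S in zip(solo_preds, labels, prediction_sets)
                 if p == y and y not in S)
    return harmed / len(labels)

# Calibration can then pick the largest set-size parameter whose estimated
# harm stays below a user-specified alpha, in the spirit of conformal risk control.
```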
2304.07125
Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering
Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is now no longer limited to Sci-Fi movies only and has, in fact, turned into a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the context of the given question. However, these sequential questions are sometimes left implicit and thus require the resolution of some natural language phenomena such as anaphora and ellipsis. The task of question rewriting has the potential to address the challenges of resolving dependencies amongst the contextual turns by transforming them into intent-explicit questions. Nonetheless, the solution of rewriting the implicit questions comes with some potential challenges, such as resulting in verbose questions and taking the conversational aspect out of the scenario by generating self-contained questions. In this paper, we propose a novel framework, CONVSR (CONVQA using Structured Representations), for capturing and generating intermediate representations as conversational cues to enhance the capability of the QA model to better interpret incomplete questions. We also deliberate on how the strengths of this task could be leveraged in a bid to design more engaging and eloquent conversational agents. We test our model on the QuAC and CANARD datasets and illustrate by experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
358,237
2212.14591
Mixture of von Mises-Fisher distribution with sparse prototypes
Mixtures of von Mises-Fisher distributions can be used to cluster data on the unit hypersphere. This is particularly well adapted to high-dimensional directional data such as texts. We propose in this article to estimate a von Mises-Fisher mixture using an l1-penalized likelihood. This leads to sparse prototypes that improve clustering interpretability. We introduce an expectation-maximisation (EM) algorithm for this estimation and explore the trade-off between the sparsity term and the likelihood term with a path-following algorithm. The model's behaviour is studied on simulated data, and we show the advantages of the approach on a real data benchmark. We also introduce a new data set of financial reports and exhibit the benefits of our method for exploratory analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,663
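A very rough EM sketch for the l1-penalized von Mises-Fisher mixture above, with a shared, fixed concentration kappa: the E-step is standard, and the M-step soft-thresholds the weighted resultant vector before renormalizing, which is what makes the prototypes sparse. Estimating kappa and the paper's exact penalized M-step and path-following scheme are simplified away.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_vmf_em(X, K, kappa=50.0, lam=0.01, iters=50, seed=0):
    """X: (n, d) rows on the unit sphere. Returns prototypes, weights, responsibilities."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), K, replace=False)]          # init prototypes from data
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        logp = kappa * X @ mu.T + np.log(pi)              # E-step (log scores)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp); R /= R.sum(axis=1, keepdims=True)
        pi = R.mean(axis=0)                               # M-step: mixture weights
        for k in range(K):
            m = soft_threshold(R[:, k] @ X, lam)          # sparsify the prototype
            norm = np.linalg.norm(m)
            if norm > 0:
                mu[k] = m / norm                          # project back to the sphere
    return mu, pi, R
```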
1908.09528
Thinking Globally, Acting Locally: Distantly Supervised Global-to-Local Knowledge Selection for Background Based Conversation
Background Based Conversations (BBCs) have been introduced to help conversational systems avoid generating overly generic responses. In a BBC, the conversation is grounded in a knowledge source. A key challenge in BBCs is Knowledge Selection (KS): given a conversational context, try to find the appropriate background knowledge (a text fragment containing related facts or comments, etc.) based on which to generate the next response. Previous work addresses KS by employing attention and/or pointer mechanisms. These mechanisms use a local perspective, i.e., they select a token at a time based solely on the current decoding state. We argue for the adoption of a global perspective, i.e., pre-selecting some text fragments from the background knowledge that could help determine the topic of the next response. We enhance KS in BBCs by introducing a Global-to-Local Knowledge Selection (GLKS) mechanism. Given a conversational context and background knowledge, we first learn a topic transition vector to encode the most likely text fragments to be used in the next response, which is then used to guide the local KS at each decoding time step. In order to effectively learn the topic transition vector, we propose a distantly supervised learning schema. Experimental results show that the GLKS model significantly outperforms state-of-the-art methods in terms of both automatic and human evaluation. More importantly, GLKS achieves this without requiring any extra annotations, which demonstrates its high degree of scalability.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
142,876