| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2501.01384 | OmniChat: Enhancing Spoken Dialogue Systems with Scalable Synthetic Data for Diverse Scenarios | With the rapid development of large language models, researchers have created increasingly advanced spoken dialogue systems that can naturally converse with humans. However, these systems still struggle to handle the full complexity of real-world conversations, including audio events, musical contexts, and emotional expressions, mainly because current dialogue datasets are constrained in both scale and scenario diversity. In this paper, we propose leveraging synthetic data to enhance the dialogue models across diverse scenarios. We introduce ShareChatX, the first comprehensive, large-scale dataset for spoken dialogue that spans diverse scenarios. Based on this dataset, we introduce OmniChat, a multi-turn dialogue system with a heterogeneous feature fusion module, designed to optimize feature selection in different dialogue contexts. In addition, we explored critical aspects of training dialogue systems using synthetic data. Through comprehensive experimentation, we determined the ideal balance between synthetic and real data, achieving state-of-the-art results on the real-world dialogue dataset DailyTalk. We also highlight the crucial importance of synthetic data in tackling diverse, complex dialogue scenarios, especially those involving audio and music. For more details, please visit our demo page at https://sharechatx.github.io/. | true | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 522,042 |
1810.05420 | Cryo-CARE: Content-Aware Image Restoration for Cryo-Transmission Electron Microscopy Data | Multiple approaches to use deep learning for image restoration have recently been proposed. Training such approaches requires well registered pairs of high and low quality images. While this is easily achievable for many imaging modalities, e.g. fluorescence light microscopy, for others it is not. Cryo-transmission electron microscopy (cryo-TEM) could profoundly benefit from improved denoising methods; unfortunately, it is one of the latter. Here we show how recent advances in network training for image restoration tasks, i.e. denoising, can be applied to cryo-TEM data. We describe our proposed method and show how it can be applied to single cryo-TEM projections and whole cryo-tomographic image volumes. Our proposed restoration method dramatically increases contrast in cryo-TEM images, which improves the interpretability of the acquired data. Furthermore, we show that automated downstream processing on restored image data, demonstrated on a dense segmentation task, leads to improved results. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 110,223 |
1211.5903 | MMSE Performance Analysis of Generalized Multibeam Satellite Channels | Aggressive frequency reuse in the return link (RL) of multibeam satellite communications (SatComs) is crucial towards the implementation of next generation, interactive satellite services. In this direction, multiuser detection has shown great potential in mitigating the increased intrasystem interferences, induced by a tight spectrum reuse. Herein we present an analytic framework to describe the linear Minimum Mean Square Error (MMSE) performance of multiuser channels that exhibit full receive correlation: an inherent attribute of the RL of multibeam SatComs. Analytic, tight approximations on the MMSE performance are proposed for cases where closed form solutions are not available in the existing literature. The proposed framework is generic, thus providing a generalized solution straightforwardly extendable to various fading models over channels that exhibit full receive correlation. Simulation results are provided to show the tightness of the proposed approximation with respect to the available transmit power. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 19,940 |
2303.17647 | Detecting and Grounding Important Characters in Visual Stories | Characters are essential to the plot of any story. Establishing the characters before writing a story can improve the clarity of the plot and the overall flow of the narrative. However, previous work on visual storytelling tends to focus on detecting objects in images and discovering relationships between them. In this approach, characters are not distinguished from other objects when they are fed into the generation pipeline. The result is a coherent sequence of events rather than a character-centric story. In order to address this limitation, we introduce the VIST-Character dataset, which provides rich character-centric annotations, including visual and textual co-reference chains and importance ratings for characters. Based on this dataset, we propose two new tasks: important character detection and character grounding in visual stories. For both tasks, we develop simple, unsupervised models based on distributional similarity and pre-trained vision-and-language models. Our new dataset, together with these models, can serve as the foundation for subsequent work on analysing and generating stories from a character-centric perspective. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 355,292 |
2007.08563 | FTRANS: Energy-Efficient Acceleration of Transformers using FPGA | In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms, without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements on sequence-to-sequence tasks. However, the intensive computation and storage requirements of these pre-trained language representations have impeded their adoption on computation- and memory-constrained devices. The field-programmable gate array (FPGA) is widely used to accelerate deep learning algorithms for its high parallelism and low latency. However, the trained models are still too large to fit on an FPGA fabric. In this paper, we propose an efficient acceleration framework, Ftrans, for transformer-based large-scale language representations. Our framework includes an enhanced block-circulant matrix (BCM)-based weight representation that enables model compression of large-scale language representations at the algorithm level with little accuracy degradation, and an acceleration design at the architecture level. Experimental results show that our proposed framework significantly reduces the model size of NLP models, by up to 16 times. Our FPGA design achieves 27.07x and 81x improvements in performance and energy efficiency compared to CPU, and up to 8.80x improvement in energy efficiency compared to GPU. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 187,666 |
2406.05170 | Research on Tumors Segmentation based on Image Enhancement Method | One of the most effective ways to treat liver cancer is to perform precise liver resection surgery, a key step of which is precise digital image segmentation of the liver and its tumor. However, traditional liver parenchymal segmentation techniques often face several challenges: lack of precision, slow processing speed, and computational burden. These shortcomings limit the efficiency of surgical planning and execution. In this work, we first describe in detail a new image enhancement algorithm that enhances the key features of an image by adaptively adjusting its contrast and brightness. Then, a deep learning-based segmentation network is introduced, which is specially trained on the enhanced images to optimize the detection accuracy of tumor regions. In addition, multi-scale analysis techniques have been incorporated into the study, allowing the model to analyze images at different resolutions to capture more nuanced tumor features. In the presentation of the experimental results, the study used the 3Dircadb dataset to test the effectiveness of the proposed method. The experimental results show that, compared with traditional image segmentation methods, the new method using image enhancement technology significantly improves the accuracy and recall of tumor identification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 462,017 |
1403.4017 | Multi-task Feature Selection based Anomaly Detection | Network anomaly detection is still a vibrant research area. With the fast growth of network bandwidth and the tremendous traffic it carries, a challenging question arises: how can anomalies be detected efficiently and accurately across multiple traffic streams? In multi-task learning, the traffic consisting of flows at one time period is considered as a task, and multiple tasks from different time periods are performed simultaneously to detect anomalies. In this paper, we apply multi-task feature selection to network anomaly detection, which provides a powerful method to gather information from multiple traffic streams and detect anomalies on them simultaneously. In particular, multi-task feature selection includes the well-known l1-norm based feature selection as a special case given only one task. Moreover, we show that multi-task feature selection is more accurate than the l1-norm based method because it utilizes more information simultaneously. At the evaluation stage, we preprocess a raw data trace from a trans-Pacific backbone link between Japan and the United States, label it with anomaly communities, and generate a 248-feature dataset. We show empirically that multi-task feature selection outperforms independent l1-norm based feature selection on a real traffic dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 31,618 |
2406.09612 | Automated Molecular Concept Generation and Labeling with Large Language Models | Artificial intelligence (AI) is transforming scientific research, with explainable AI methods like concept-based models (CMs) showing promise for new discoveries. However, in molecular science, CMs are less common than black-box models like Graph Neural Networks (GNNs), due to their need for predefined concepts and manual labeling. This paper introduces the Automated Molecular Concept (AutoMolCo) framework, which leverages Large Language Models (LLMs) to automatically generate and label predictive molecular concepts. Through iterative concept refinement, AutoMolCo enables simple linear models to outperform GNNs and LLM in-context learning on several benchmarks. The framework operates without human knowledge input, overcoming limitations of existing CMs while maintaining explainability and allowing easy intervention. Experiments on MoleculeNet and High-Throughput Experimentation (HTE) datasets demonstrate that AutoMolCo-induced explainable CMs are beneficial for molecular science research. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 464,005 |
2403.04781 | Selective Encryption using Segmentation Mask with Chaotic Henon Map for Multidimensional Medical Images | A user-centric design and resource optimization should be at the center of any technology or innovation. The user-centric perspective gives the developer the opportunity to develop with task-based optimization. The user in the medical image field is a medical professional who analyzes the medical images and gives their diagnosis results to the patient. This scheme, built around the medical professional's perspective, innovates in the area of medical image storage and security. The architecture is designed with three main segments, namely: Segmentation, Storage, and Retrieval. This architecture was designed because medical professionals retrieve a given medical image far more often than they perform the handful of storage operations on it. This allows our scheme to segment out the medically indispensable part of the medical image, encrypt it, and store it. By encrypting the vital parts of the image using a strong encryption algorithm like the chaotic Henon map, we are able to keep the security intact. Retrieving the medical image then demands only the computationally lighter decryption of the segmented region of interest, which results in the full recovery of the medical image and can be viewed on demand by medical professionals for various diagnosis purposes. In this scheme, we were able to achieve a retrieval speed improvement of around 47% when compared to full image encryption of brain medical CT images. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 435,726 |
2401.08715 | Selecting Subsets of Source Data for Transfer Learning with Applications in Metal Additive Manufacturing | Considering data insufficiency in metal additive manufacturing (AM), transfer learning (TL) has been adopted to extract knowledge from source domains (e.g., completed printings) to improve the modeling performance in target domains (e.g., new printings). Current applications use all accessible source data directly in TL with no regard to the similarity between source and target data. This paper proposes a systematic method to find appropriate subsets of source data based on similarities between the source and target datasets for a given set of limited target domain data. Such similarity is characterized by the spatial and model distance metrics. A Pareto frontier-based source data selection method is developed, where the source data located on the Pareto frontier defined by two similarity distance metrics are selected iteratively. The method is integrated into an instance-based TL method (decision tree regression model) and a model-based TL method (fine-tuned artificial neural network). Both models are then tested on several regression tasks in metal AM. Comparison results demonstrate that 1) the source data selection method is general and supports integration with various TL methods and distance metrics, 2) compared with using all source data, the proposed method can find a small subset of source data from the same domain with better TL performance in metal AM regression tasks involving different processes and machines, and 3) when multiple source domains exist, the source data selection method could find the subset from one source domain to obtain comparable or better TL performance than the model constructed using data from all source domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 422,004 |
1312.1309 | On the DoF Region of the K-user MISO Broadcast Channel with Hybrid CSIT | An outer bound for the degrees of freedom (DoF) region of the K-user multiple-input single-output (MISO) broadcast channel (BC) is developed under the hybrid channel state information at transmitter (CSIT) model, in which the transmitter has instantaneous CSIT of channels to a subset of the receivers and delayed CSIT of channels to the rest of the receivers. For the 3-user MISO BC, when the transmitter has instantaneous CSIT of the channel to one receiver and delayed CSIT of channels to the other two, two new communication schemes are designed, which are able to achieve the DoF tuple of $\left(1,\frac{1}{3},\frac{1}{3}\right)$, with a sum DoF of $\frac{5}{3}$, that is greater than the sum DoF achievable only with delayed CSIT. Another communication scheme showing the benefit of the alternating CSIT model is also developed, to obtain the DoF tuple of $\left(1,\frac{4}{9},\frac{4}{9}\right)$ for the 3-user MISO BC. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 28,848 |
2312.12585 | BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Backdoor attacks in reinforcement learning (RL) have previously employed intense attack strategies to ensure attack success. However, these methods suffer from high attack costs and increased detectability. In this work, we propose a novel approach, BadRL, which focuses on conducting highly sparse backdoor poisoning efforts during training and testing while maintaining successful attacks. Our algorithm, BadRL, strategically chooses state observations with high attack values to inject triggers during training and testing, thereby reducing the chances of detection. In contrast to the previous methods that utilize sample-agnostic trigger patterns, BadRL dynamically generates distinct trigger patterns based on targeted state observations, thereby enhancing its effectiveness. Theoretical analysis shows that the targeted backdoor attack is always viable and remains stealthy under specific assumptions. Empirical results on various classic RL tasks illustrate that BadRL can substantially degrade the performance of a victim agent with minimal poisoning efforts (0.003% of total training steps) during training and infrequent attacks during testing. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 417,012 |
1706.00384 | Deep Mutual Learning | Model distillation is an effective and widely used technique to transfer knowledge from a teacher to a student network. The typical application is to transfer from a powerful large network or ensemble to a small network that is better suited to low-memory or fast execution requirements. In this paper, we present a deep mutual learning (DML) strategy where, rather than one-way transfer between a static pre-defined teacher and a student, an ensemble of students learn collaboratively and teach each other throughout the training process. Our experiments show that a variety of network architectures benefit from mutual learning and achieve compelling results on CIFAR-100 recognition and Market-1501 person re-identification benchmarks. Surprisingly, it is revealed that no prior powerful teacher network is necessary -- mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 74,610 |
1807.05353 | Recurrent Stacking of Layers for Compact Neural Machine Translation Models | In neural machine translation (NMT), the most common practice is to stack a number of recurrent or feed-forward layers in the encoder and the decoder. As a result, the addition of each new layer improves the translation quality significantly. However, this also leads to a significant increase in the number of parameters. In this paper, we propose to share parameters across all the layers thereby leading to a recurrently stacked NMT model. We empirically show that the translation quality of a model that recurrently stacks a single layer 6 times is comparable to the translation quality of a model that stacks 6 separate layers. We also show that using pseudo-parallel corpora by back-translation leads to further significant improvements in translation quality. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 102,913 |
2403.10984 | IoTCO2: Assessing the End-To-End Carbon Footprint of Internet-of-Things-Enabled Deep Learning | To improve privacy and ensure quality-of-service (QoS), deep learning (DL) models are increasingly deployed on Internet of Things (IoT) devices for data processing, significantly increasing the carbon footprint associated with DL on IoT, covering both operational and embodied aspects. Existing operational energy predictors often overlook quantized DL models and emerging neural processing units (NPUs), while embodied carbon footprint modeling tools neglect non-computing hardware components common in IoT devices, creating a gap in accurate carbon footprint modeling tools for IoT-enabled DL. This paper introduces IoTCO2, an end-to-end tool for precise carbon footprint estimation in IoT-enabled DL, with deviations as low as 5% for operational and 3.23% for embodied carbon footprints compared to actual measurements across various DL models. Additionally, practical applications of IoTCO2 are showcased through multiple user case studies. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 438,463 |
2309.06789 | An Image Dataset for Benchmarking Recommender Systems with Raw Pixels | Recommender systems (RS) have achieved significant success by leveraging explicit identification (ID) features. However, the full potential of content features, especially pure image pixel features, remains relatively unexplored. The limited availability of large, diverse, and content-driven image recommendation datasets has hindered the use of raw images as item representations. In this regard, we present PixelRec, a massive image-centric recommendation dataset that includes approximately 200 million user-image interactions, 30 million users, and 400,000 high-quality cover images. By providing direct access to raw image pixels, PixelRec enables recommendation models to learn item representations directly from them. To demonstrate its utility, we begin by presenting the results of several classical pure ID-based baseline models, termed IDNet, trained on PixelRec. Then, to show the effectiveness of the dataset's image features, we substitute the itemID embeddings (from IDNet) with a powerful vision encoder that represents items using their raw image pixels. This new model is dubbed PixelNet. Our findings indicate that even in standard, non-cold start recommendation settings where IDNet is recognized as highly effective, PixelNet can already perform equally well or even better than IDNet. Moreover, PixelNet has several other notable advantages over IDNet, such as being more effective in cold-start and cross-domain recommendation scenarios. These results underscore the importance of visual features in PixelRec. We believe that PixelRec can serve as a critical resource and testing ground for research on recommendation models that emphasize image pixel content. The dataset, code, and leaderboard will be available at https://github.com/westlake-repl/PixelRec. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 391,544 |
1502.02940 | The Hilbert Space of Probability Mass Functions and Applications on Probabilistic Inference | The Hilbert space of probability mass functions (pmf) is introduced in this thesis. A factorization method for multivariate pmfs is proposed by using the tools provided by the Hilbert space of pmfs. The resulting factorization is special for two reasons. First, it reveals the algebraic relations between the involved random variables. Second, it determines the conditional independence relations between the random variables. Due to the first property of the resulting factorization, it can be shown that channel decoders can be employed in the solution of probabilistic inference problems other than decoding. This approach might lead to new probabilistic inference algorithms and new hardware options for the implementation of these algorithms. An example of new inference algorithms inspired by the idea of using channel decoders for other inference tasks is a multiple-input multiple-output (MIMO) detection algorithm which has a complexity of the square-root of the optimum MIMO detection algorithm. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,102 |
1705.05427 | Repeated Inverse Reinforcement Learning | We introduce a novel repeated Inverse Reinforcement Learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks in which it surprises the human by acting suboptimally with respect to how the human would have acted. Each time the human is surprised, the agent is provided a demonstration of the desired behavior by the human. We formalize this problem, including how the sequence of tasks is chosen, in a few different ways and provide some foundational results. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 73,486 |
2112.00798 | Fast Sparse Decision Tree Optimization via Reference Ensembles | Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960's, breakthroughs have only been made on the problem within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude, while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 269,245 |
1602.03468 | Articulated Clinician Detection Using 3D Pictorial Structures on RGB-D Data | Reliable human pose estimation (HPE) is essential to many clinical applications, such as surgical workflow analysis, radiation safety monitoring and human-robot cooperation. Proposed methods for the operating room (OR) rely either on foreground estimation using a multi-camera system, which is a challenge in real ORs due to color similarities and frequent illumination changes, or on wearable sensors or markers, which are invasive and therefore difficult to introduce in the room. Instead, we propose a novel approach based on Pictorial Structures (PS) and on RGB-D data, which can be easily deployed in real ORs. We extend the PS framework in two ways. First, we build robust and discriminative part detectors using both color and depth images. We also present a novel descriptor for depth images, called histogram of depth differences (HDD). Second, we extend PS to 3D by proposing 3D pairwise constraints and a new method that makes exact inference tractable. Our approach is evaluated for pose estimation and clinician detection on a challenging RGB-D dataset recorded in a busy operating room during live surgeries. We conduct series of experiments to study the different part detectors in conjunction with the various 2D or 3D pairwise constraints. Our comparisons demonstrate that 3D PS with RGB-D part detectors significantly improves the results in a visually challenging operating environment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,000 |
1811.12751 | Domain-Invariant Adversarial Learning for Unsupervised Domain Adaption | Unsupervised domain adaption aims to learn a powerful classifier for the target domain given a labeled source data set and an unlabeled target data set. To alleviate the effect of `domain shift', the major challenge in domain adaptation, studies have attempted to align the distributions of the two domains. Recent research has suggested that generative adversarial networks (GANs) have the capability of implicitly capturing data distributions. In this paper, we thus propose a simple but effective model for unsupervised domain adaption leveraging adversarial learning. The same encoder is shared between the source and target domains, and is expected to extract domain-invariant representations with the help of an adversarial discriminator. With the labeled source data, we introduce the center loss to increase the discriminative power of the learned features. We further align the conditional distributions of the two domains to enforce the discrimination of the features in the target domain. Unlike previous studies where the source features are extracted with a fixed pre-trained encoder, our method jointly learns feature representations of the two domains. Moreover, by sharing the encoder, the model does not need to know the source of images during testing and hence is more widely applicable. We evaluate the proposed method on several unsupervised domain adaption benchmarks and achieve superior or comparable performance to state-of-the-art results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 115,082 |
2410.02130 | MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers for Open-Domain Sound Generation | We introduce MDSGen, a novel framework for vision-guided open-domain sound generation optimized for model parameter size, memory consumption, and inference speed. This framework incorporates two key innovations: (1) a redundant video feature removal module that filters out unnecessary visual information, and (2) a temporal-aware masking strategy that leverages temporal context for enhanced audio generation accuracy. In contrast to existing resource-heavy Unet-based models, MDSGen employs denoising masked diffusion transformers, facilitating efficient generation without reliance on pre-trained diffusion models. Evaluated on the benchmark VGGSound dataset, our smallest model (5M parameters) achieves 97.9% alignment accuracy, using 172x fewer parameters, 371% less memory, and offering 36x faster inference than the current 860M-parameter state-of-the-art model (93.9% accuracy). The larger model (131M parameters) reaches nearly 99% accuracy while requiring 6.5x fewer parameters. These results highlight the scalability and effectiveness of our approach. The code is available at https://bit.ly/mdsgen. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 494,120 |
2407.00241 | Exploiting Structure in Quantum Relative Entropy Programs | Quantum relative entropy programs are convex optimization problems which minimize a linear functional over an affine section of the epigraph of the quantum relative entropy function. Recently, the self-concordance of a natural barrier function was proved for this set. This has opened up the opportunity to use interior-point methods for nonsymmetric cone programs to solve these optimization problems. In this paper, we show how common structures arising from applications in quantum information theory can be exploited to improve the efficiency of solving quantum relative entropy programs using interior-point methods. First, we show that the natural barrier function for the epigraph of the quantum relative entropy composed with positive linear operators is optimally self-concordant, even when these linear operators map to singular matrices. Compared to modelling problems using the full quantum relative entropy cone, this allows us to remove redundant log determinant expressions from the barrier function and reduce the overall barrier parameter. Second, we show how certain slices of the quantum relative entropy cone exhibit useful properties which should be exploited whenever possible to perform certain key steps of interior-point methods more efficiently. We demonstrate how these methods can be applied to applications in quantum information theory, including quantifying quantum key rates, quantum rate-distortion functions, quantum channel capacities, and the ground state energy of Hamiltonians. Our numerical results show that these techniques improve computation times by up to several orders of magnitude, and allow previously intractable problems to be solved. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 468,777 |
2102.02730 | Feedback Capacity of Parallel ACGN Channels and Kalman Filter: Power
Allocation with Feedback | In this paper, we relate the feedback capacity of parallel additive colored Gaussian noise (ACGN) channels to a variant of the Kalman filter. By doing so, we obtain lower bounds on the feedback capacity of such channels, as well as the corresponding feedback (recursive) coding schemes, which are essentially power allocation policies with feedback, to achieve the bounds. The results are seen to reduce to existing lower bounds in the case of a single ACGN feedback channel, whereas when it comes to parallel additive white Gaussian noise (AWGN) channels with feedback, the recursive coding scheme reduces to a feedback "water-filling" power allocation policy. | false | false | false | false | false | false | true | false | false | true | true | false | false | false | false | false | false | false | 218,503 |
2211.15053 | Distinguishing representational geometries with controversial stimuli:
Bayesian experimental design and its application to face dissimilarity
judgments | Comparing representations of complex stimuli in neural network layers to human brain representations or behavioral judgments can guide model development. However, even qualitatively distinct neural network models often predict similar representational geometries of typical stimulus sets. We propose a Bayesian experimental design approach to synthesizing stimulus sets for adjudicating among representational models efficiently. We apply our method to discriminate among candidate neural network models of behavioral face dissimilarity judgments. Our results indicate that a neural network trained to invert a 3D-face-model graphics renderer is more human-aligned than the same architecture trained on identification, classification, or autoencoding. Our proposed stimulus synthesis objective is generally applicable to designing experiments to be analyzed by representational similarity analysis for model comparison. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 333,097 |
2106.10198 | Systematic comparison of graph embedding methods in practical tasks | Network embedding techniques aim at representing structural properties of graphs in geometric space. Those representations are considered useful in downstream tasks such as link prediction and clustering. However, the number of graph embedding methods available on the market is large, and practitioners face the non-trivial choice of selecting the proper approach for a given application. The present work attempts to close this gap of knowledge through a systematic comparison of eleven different methods for graph embedding. We consider methods for embedding networks in the hyperbolic and Euclidean metric spaces, as well as non-metric community-based embedding methods. We apply these methods to embed more than one hundred real-world and synthetic networks. Three common downstream tasks -- mapping accuracy, greedy routing, and link prediction -- are considered to evaluate the quality of the various embedding methods. Our results show that some Euclidean embedding methods excel in greedy routing. As for link prediction, community-based and hyperbolic embedding methods yield overall performance superior to that of Euclidean-space-based approaches. We compare the running time for different methods and further analyze the impact of different network characteristics such as degree distribution, modularity, and clustering coefficients on the quality of the different embedding methods. We release our evaluation framework to provide a standardized benchmark for arbitrary embedding methods. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 241,935 |
2107.07843 | Deep Learning Based Hybrid Precoding in Dual-Band Communication Systems | We propose a deep learning-based method that uses spatial and temporal information extracted from the sub-6GHz band to predict/track beams in the millimeter-wave (mmWave) band. In more detail, we consider a dual-band communication system operating in both the sub-6GHz and mmWave bands. The objective is to maximize the achievable mutual information in the mmWave band with a hybrid analog/digital architecture where analog precoders (RF precoders) are taken from a finite codebook. Finding a RF precoder using conventional search methods incurs large signalling overhead, and the signalling scales with the number of RF chains and the resolution of the phase shifters. To overcome the issue of large signalling overhead in the mmWave band, the proposed method exploits the spatiotemporal correlation between sub-6GHz and mmWave bands, and it predicts/tracks the RF precoders in the mmWave band from sub-6GHz channel measurements. The proposed method provides a smaller candidate set so that performing a search over that set significantly reduces the signalling overhead compared with conventional search heuristics. Simulations show that the proposed method can provide reasonable achievable rates while significantly reducing the signalling overhead. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 246,546 |
1608.02132 | Password Cracking: The Effect of Hash Function Bias on the Average
Guesswork | Modern authentication systems store hashed values of passwords of users using cryptographic hash functions. Therefore, to crack a password an attacker needs to guess a hash function input that is mapped to the hashed value, as opposed to the password itself. We call a hash function that maps the same number of inputs to each bin, as \textbf{unbiased}. However, cryptographic hash functions in use have not been proven to be unbiased (i.e., they may have an unequal number of inputs mapped to different bins). A cryptographic hash function has the property that it is computationally difficult to find an input mapped to a bin. In this work we introduce a structured notion of biased hash functions for which we analyze the average guesswork under certain types of brute force attacks. This work shows that the level of security depends on the set of hashed values of valid users as well as the statistical profile of a hash function, resulting from bias. We examine the average guesswork conditioned on the set of hashed values, and model the statistical profile through the empirical distribution of the number of inputs that are mapped to a bin. In particular, we focus on a class of statistical profiles (capturing the bias) , which we call type-class statistical profiles, that has an empirical distribution related to the probability of the type classes defined in the method of types. For such profiles, we show that the average guesswork is related to basic measures in information theory such as entropy and divergence. We use this to show that the effect of bias on the conditional average guesswork is limited compared to other system parameters such as the number of valid users who store their hashed passwords in the system. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 59,515 |
2402.09470 | Rolling Diffusion Models | Diffusion models have recently been increasingly applied to temporal data such as video, fluid mechanics simulations, or climate data. These methods generally treat subsequent frames equally regarding the amount of noise in the diffusion process. This paper explores Rolling Diffusion: a new approach that uses a sliding window denoising process. It ensures that the diffusion process progressively corrupts through time by assigning more noise to frames that appear later in a sequence, reflecting greater uncertainty about the future as the generation process unfolds. Empirically, we show that when the temporal dynamics are complex, Rolling Diffusion is superior to standard diffusion. In particular, this result is demonstrated in a video prediction task using the Kinetics-600 video dataset and in a chaotic fluid dynamics forecasting experiment. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,540 |
2302.07260 | Scalable Bayesian optimization with high-dimensional outputs using
randomized prior networks | Several fundamental problems in science and engineering consist of global optimization tasks involving unknown high-dimensional (black-box) functions that map a set of controllable variables to the outcomes of an expensive experiment. Bayesian Optimization (BO) techniques are known to be effective in tackling global optimization problems using a relatively small number of objective function evaluations, but their performance suffers when dealing with high-dimensional outputs. To overcome the major challenge of dimensionality, here we propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors. Using appropriate architecture choices, we show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces. In the context of BO, we augmented the proposed probabilistic surrogates with re-parameterized Monte Carlo approximations of multiple-point (parallel) acquisition functions, as well as methodological extensions for accommodating black-box constraints and multi-fidelity information sources. We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs, including a constrained multi-fidelity optimization task involving shape optimization of rotor blades in turbo-machinery. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,674 |
2410.03553 | Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose
Protein Understanding | Proteins, as essential biomolecules, play a central role in biological processes, including metabolic reactions and DNA replication. Accurate prediction of their properties and functions is crucial in biological applications. Recent development of protein language models (pLMs) with supervised fine tuning provides a promising solution to this problem. However, the fine-tuned model is tailored for a particular downstream prediction task, and achieving general-purpose protein understanding remains a challenge. In this paper, we introduce the Structure-Enhanced Protein Instruction Tuning (SEPIT) framework to bridge this gap. Our approach integrates a novel structure-aware module into pLMs to inform them with structural knowledge, and then connects these enhanced pLMs to large language models (LLMs) to generate understanding of proteins. In this framework, we propose a novel two-stage instruction tuning pipeline that first establishes a basic understanding of proteins through caption-based instructions and then refines this understanding using a mixture of experts (MoEs) to learn more complex properties and functional information with the same amount of activated parameters. Moreover, we construct the largest and most comprehensive protein instruction dataset to date, which allows us to train and evaluate the general-purpose protein understanding model. Extensive experimental results on open-ended generation and closed-set answer tasks demonstrate the superior performance of SEPIT over both closed-source general LLMs and open-source LLMs trained with protein knowledge. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 494,845 |
2202.12423 | Applying Polynomial Decoupling Methods to the Polynomial NARX Model | System identification uses measurements of a dynamic system's input and output to reconstruct a mathematical model for that system. These can be mechanical, electrical, physiological, among others. Since most of the systems around us exhibit some form of nonlinear behavior, nonlinear system identification techniques are the tools that will help us gain a better understanding of our surroundings and potentially let us improve their performance. One model that is often used to represent nonlinear systems is the polynomial NARX model, an equation error model where the output is a polynomial function of the past inputs and outputs. That said, a major disadvantage with the polynomial NARX model is that the number of parameters increases rapidly with increasing polynomial order. Furthermore, the polynomial NARX model is a black-box model, and is therefore difficult to interpret. This paper discusses a decoupling algorithm for the polynomial NARX model that substitutes the multivariate polynomial with a transformation matrix followed by a bank of univariate polynomials. This decreases the number of model parameters significantly and also imposes structure on the black-box NARX model. Since a non-convex optimization is required for this identification technique, initialization is an important factor to consider. In this paper the decoupling algorithm is developed in conjunction with several different initialization techniques. The resulting algorithms are applied to two nonlinear benchmark problems: measurement data from the Silver-Box and simulation data from the Bouc-Wen friction model, and the performance is evaluated for different validation signals in both simulation and prediction. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 282,232 |
2210.14970 | Identifying Crisis Response Communities in Online Social Networks for
Compound Disasters: The Case of Hurricane Laura and Covid-19 | Online social networks allow different agencies and the public to interact and share the underlying risks and protective actions during major disasters. This study revealed such crisis communication patterns during hurricane Laura compounded by the COVID-19 pandemic. Laura was one of the strongest (Category 4) hurricanes on record to make landfall in Cameron, Louisiana. Using the Application Programming Interface (API), this study utilizes large-scale social media data obtained from Twitter through the recently released academic track that provides complete and unbiased observations. The data captured publicly available tweets shared by active Twitter users from the vulnerable areas threatened by Laura. Online social networks were based on user influence feature ( mentions or tags) that allows notifying other users while posting a tweet. Using network science theories and advanced community detection algorithms, the study split these networks into twenty-one components of various sizes, the largest of which contained eight well-defined communities. Several natural language processing techniques (i.e., word clouds, bigrams, topic modeling) were applied to the tweets shared by the users in these communities to observe their risk-taking or risk-averse behavior during a major compounding crisis. Social media accounts of local news media, radio, universities, and popular sports pages were among those who involved heavily and interacted closely with local residents. In contrast, emergency management and planning units in the area engaged less with the public. The findings of this study provide novel insights into the design of efficient social media communication guidelines to respond better in future disasters. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 326,746 |
2303.16445 | Larger Probes Tell a Different Story: Extending Psycholinguistic
Datasets Via In-Context Learning | Language model probing is often used to test specific capabilities of models. However, conclusions from such studies may be limited when the probing benchmarks are small and lack statistical power. In this work, we introduce new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500) inspired by psycholinguistic studies. We dramatically extend existing NEG-136 and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44 sentence pairs to 750 each. We also create another version of the extended negation dataset (NEG-1500-SIMP-TEMP), created using template-based generation. It consists of 770 sentence pairs. We evaluate 22 models on the extended datasets, seeing model performance dip 20-57% compared to the original smaller benchmarks. We observe high levels of negation sensitivity in models like BERT and ALBERT demonstrating that previous findings might have been skewed due to smaller test sets. Finally, we observe that while GPT3 generated all the examples in ROLE-1500, it is only able to solve 24.6% of them during probing. The datasets and code are available on $\href{https://github.com/text-machine-lab/extending_psycholinguistic_dataset}{Github}$. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 354,857 |
2210.06683 | Augmenting Flight Training with AI to Efficiently Train Pilots | We propose an AI-based pilot trainer to help students learn how to fly aircraft. First, an AI agent uses behavioral cloning to learn flying maneuvers from qualified flight instructors. Later, the system uses the agent's decisions to detect errors made by students and provide feedback to help students correct their errors. This paper presents an instantiation of the pilot trainer. We focus on teaching straight and level flying maneuvers by automatically providing formative feedback to the human student. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,393 |
1506.08231 | A zero-sum monetary system, interest rates, and implications | To the knowledge of the author, this is the first time it has been shown that interest rates that are extremely high by modern standards (100% and higher) are necessary within a zero-sum monetary system, and not just driven by greed. Extreme interest rates that appeared in various places and times reinforce the idea that hard money may have contributed to high rates of interest. Here a model is presented that examines the interest rate required to succeed as an investor in a zero-sum fixed quantity hard-money system. Even when the playing field is significantly tilted toward the investor, interest rates need to be much higher than expected. In a completely fair zero-sum system, an investor cannot break even without charging 100% interest. Even with a 5% advantage, an investor won't break even at 15% interest. From this it is concluded that what we consider usurious rates today are, within a hard-money system, driven by necessity. Cryptocurrency is a novel form of hard-currency. The inability to virtualize the money creates a system close to zero-sum because of the limited supply design. Therefore, within the bounds of a cryptocurrency system that limits money creation, interest rates must rise to levels that the modern world considers usury. It is impossible, therefore, that a cryptocurrency that is not expandable could take over a modern economy and replace modern fiat currency. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 44,585 |
2308.05017 | When and How Does Known Class Help Discover Unknown Ones? Provable
Understanding Through Spectral Analysis | Novel Class Discovery (NCD) aims at inferring novel classes in an unlabeled set by leveraging prior knowledge from a labeled set with known classes. Despite its importance, there is a lack of theoretical foundations for NCD. This paper bridges the gap by providing an analytical framework to formalize and investigate when and how known classes can help discover novel classes. Tailored to the NCD problem, we introduce a graph-theoretic representation that can be learned by a novel NCD Spectral Contrastive Loss (NSCL). Minimizing this objective is equivalent to factorizing the graph's adjacency matrix, which allows us to derive a provable error bound and provide the sufficient and necessary condition for NCD. Empirically, NSCL can match or outperform several strong baselines on common benchmark datasets, which is appealing for practical usage while enjoying theoretical guarantees. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,651 |
1805.07912 | Bayesian posterior approximation via greedy particle optimization | In Bayesian inference, the posterior distributions are difficult to obtain analytically for complex models such as neural networks. Variational inference usually uses a parametric distribution for approximation, from which we can easily draw samples. Recently discrete approximation by particles has attracted attention because of its high expression ability. An example is Stein variational gradient descent (SVGD), which iteratively optimizes particles. Although SVGD has been shown to be computationally efficient empirically, its theoretical properties have not been clarified yet and no finite sample bound of the convergence rate is known. Another example is the Stein points (SP) method, which minimizes kernelized Stein discrepancy directly. Although a finite sample bound is assured theoretically, SP is computationally inefficient empirically, especially in high-dimensional problems. In this paper, we propose a novel method named maximum mean discrepancy minimization by the Frank-Wolfe algorithm (MMD-FW), which minimizes MMD in a greedy way by the FW algorithm. Our method is computationally efficient empirically and we show that its finite sample convergence bound is in a linear order in finite dimensions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 97,989 |
1801.04751 | SAR Image Despeckling Using Quadratic-Linear Approximated L1-Norm | Speckle noise, inherent in synthetic aperture radar (SAR) images, degrades the performance of the various SAR image analysis tasks. Thus, speckle noise reduction is a critical preprocessing step for smoothing homogeneous regions while preserving details. This letter proposes a variational despeckling approach where L1-norm total variation regularization term is approximated in a quadratic and linear manner to increase accuracy while decreasing the computation time. Despeckling performance and computational efficiency of the proposed method are shown using synthetic and real-world SAR images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 88,337 |
2108.02634 | Time-aware Path Reasoning on Knowledge Graph for Recommendation | Reasoning on knowledge graph (KG) has been studied for explainable recommendation due to its ability to provide explicit explanations. However, current KG-based explainable recommendation methods unfortunately ignore the temporal information (such as purchase time, recommend time, etc.), which may result in unsuitable explanations. In this work, we propose a novel Time-aware Path reasoning for Recommendation (TPRec for short) method, which leverages the potential of temporal information to offer better recommendation with plausible explanations. First, we present an efficient time-aware interaction relation extraction component to construct a collaborative knowledge graph with time-aware interactions (TCKG for short), and then introduce a novel time-aware path reasoning method for recommendation. We conduct extensive experiments on three real-world datasets. The results demonstrate that the proposed TPRec could successfully employ TCKG to achieve substantial gains and improve the quality of explainable recommendation. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 249,396 |
2310.09727 | Provably Fast Convergence of Independent Natural Policy Gradient for
Markov Potential Games | This work studies an independent natural policy gradient (NPG) algorithm for the multi-agent reinforcement learning problem in Markov potential games. It is shown that, under mild technical assumptions and the introduction of the \textit{suboptimality gap}, the independent NPG method with an oracle providing exact policy evaluation asymptotically reaches an $\epsilon$-Nash Equilibrium (NE) within $\mathcal{O}(1/\epsilon)$ iterations. This improves upon the previous best result of $\mathcal{O}(1/\epsilon^2)$ iterations and is of the same order, $\mathcal{O}(1/\epsilon)$, that is achievable for the single-agent case. Empirical results for a synthetic potential game and a congestion game are presented to verify the theoretical bounds. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 399,913 |
1812.01699 | Assigning a Grade: Accurate Measurement of Road Quality Using Satellite
Imagery | Roads are critically important infrastructure to societal and economic development, with huge investments made by governments every year. However, methods for monitoring those investments tend to be time-consuming, laborious, and expensive, placing them out of reach for many developing regions. In this work, we develop a model for monitoring the quality of road infrastructure using satellite imagery. For this task, we harness two trends: the increasing availability of high-resolution, often-updated satellite imagery, and the enormous improvement in speed and accuracy of convolutional neural network-based methods for performing computer vision tasks. We employ a unique dataset of road quality information on 7000km of roads in Kenya combined with 50cm resolution satellite imagery. We create models for a binary classification task as well as a comprehensive 5-category classification task, with accuracy scores of 88 and 73 percent respectively. We also provide evidence of the robustness of our methods with challenging held-out scenarios, though we note some improvement is still required for confident analysis of a never before seen road. We believe these results are well-positioned to have substantial impact on a broad set of transport applications. | false | false | false | false | false | false | true | false | false | false | false | true | false | true | false | false | false | false | 115,582 |
1509.02491 | Edge-enhancing Filters with Negative Weights | In [DOI:10.1109/ICMEW.2014.6890711], a graph-based denoising is performed by projecting the noisy image to a lower dimensional Krylov subspace of the graph Laplacian, constructed using nonnegative weights determined by distances between image data corresponding to image pixels. We extend the construction of the graph Laplacian to the case where some graph weights can be negative. Removing the positivity constraint provides a more accurate inference of a graph model behind the data, and thus can improve quality of filters for graph-based signal processing, e.g., denoising, compared to the standard construction, without affecting the costs. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 46,738 |
2410.00134 | Semantic-Driven Topic Modeling Using Transformer-Based Embeddings and
Clustering Algorithms | Topic modeling is a powerful technique to discover hidden topics and patterns within a collection of documents without prior knowledge. Traditional topic modeling and clustering-based techniques encounter challenges in capturing contextual semantic information. This study introduces an innovative end-to-end semantic-driven topic modeling technique for the topic extraction process, utilizing advanced word and document embeddings combined with a powerful clustering algorithm. This semantic-driven approach represents a significant advancement in topic modeling methodologies. It leverages contextual semantic information to extract coherent and meaningful topics. Specifically, our model generates document embeddings using pre-trained transformer-based language models, reduces the dimensions of the embeddings, clusters the embeddings based on semantic similarity, and generates coherent topics for each cluster. Compared to ChatGPT and traditional topic modeling algorithms, our model provides more coherent and meaningful topics. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 493,242 |
2401.13260 | MF-AED-AEC: Speech Emotion Recognition by Leveraging Multimodal Fusion,
Asr Error Detection, and Asr Error Correction | The prevalent approach in speech emotion recognition (SER) involves integrating both audio and textual information to comprehensively identify the speaker's emotion, with the text generally obtained through automatic speech recognition (ASR). An essential issue of this approach is that ASR errors from the text modality can worsen the performance of SER. Previous studies have proposed using an auxiliary ASR error detection task to adaptively assign weights of each word in ASR hypotheses. However, this approach has limited improvement potential because it does not address the coherence of semantic information in the text. Additionally, the inherent heterogeneity of different modalities leads to distribution gaps between their representations, making their fusion challenging. Therefore, in this paper, we incorporate two auxiliary tasks, ASR error detection (AED) and ASR error correction (AEC), to enhance the semantic coherence of ASR text, and further introduce a novel multi-modal fusion (MF) method to learn shared representations across modalities. We refer to our method as MF-AED-AEC. Experimental results indicate that MF-AED-AEC significantly outperforms the baseline model by a margin of 4.1\%. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 423,670 |
1005.3124 | An improved HeatS+ProbS hybrid recommendation algorithm based on
heterogeneous initial resource configurations | Network-based recommendation algorithms for user-object link predictions have achieved significant developments in recent years. For bipartite graphs, the reallocation of resource in such algorithms is analogous to heat spreading (HeatS) or probability spreading (ProbS) processes. The best algorithm to date is a hybrid of the HeatS and ProbS techniques with homogeneous initial resource configurations, which simultaneously fulfills high accuracy and large diversity. We investigate the effect of heterogeneity in initial configurations on the HeatS+ProbS hybrid algorithm and find that both recommendation accuracy and diversity can be further improved in this new setting. Numerical experiments show that the improvement is robust. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 6,509
2405.06561 | Reservoir Computing Benchmarks: a review, a taxonomy, some best
practices | Reservoir Computing is an Unconventional Computation model to perform computation on various substrates, such as RNNs or physical materials. The method takes a "black-box" approach, training only the outputs of the system it is built on. As such, evaluating the computational capacity of these systems can be challenging. We review and critique the evaluation methods used in the field of Reservoir Computing. We introduce a categorisation of benchmark tasks. We review multiple examples of benchmarks from the literature as applied to reservoir computing, and note their strengths and shortcomings. We suggest ways in which benchmarks and their uses may be improved to the benefit of the reservoir computing community. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 453,334
2109.14958 | A Social Cognitive Heuristic for Adaptive Data Dissemination in Mobile
Opportunistic Networks | It is commonly agreed that data will be one of the cornerstones of Future Internet systems. In this context, mobile Opportunistic Networks (ONs) are one of the key paradigms to support, in a self-organising and decentralised manner, the growth of data generated by localized interactions between users' mobile devices, and between them and nearby devices such as IoT nodes. In ONs, the spontaneous collaboration among mobile devices is exploited to disseminate data toward interested users. However, the limited resources and knowledge available at each node, and the vast amount of data available, make it difficult to devise efficient schemes to accomplish this task. Recent solutions propose to equip each device with data filtering methods derived from human data processing schemes, known as Cognitive Heuristics, i.e. very effective methods used by the brain to quickly drop useless information, while keeping the most relevant one. These solutions can become less effective when facing dynamic scenarios or situations where nodes cannot fully collaborate. One of the reasons is that the solutions proposed so far do not take into account the social structure of the environment in which the nodes move. To be more effective, the selection of information performed by each node should take into consideration this dimension of the environment. In this paper we propose a social-based data dissemination scheme, based on the cognitive Social Circle Heuristic. This evaluation method exploits the structure of the social environment to make inferences about the relevance of discovered information. We show how the Social Circle Heuristic, coupled with a cognitive-based community detection scheme, can be exploited to design an effective data dissemination algorithm for ONs. We provide a detailed analysis of the performance of the proposed solution via simulation. | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 258,127
2405.17399 | Transformers Can Do Arithmetic with the Right Embeddings | The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit inside a large span of digits. We mend this problem by adding an embedding to each digit that encodes its position relative to the start of the number. In addition to the boost these embeddings provide on their own, we show that this fix enables architectural modifications such as input injection and recurrent layers to improve performance even further. With positions resolved, we can study the logical extrapolation ability of transformers. Can they solve arithmetic problems that are larger and more complex than those in their training data? We find that by training on only 20-digit numbers with a single GPU for one day, we can reach state-of-the-art performance, achieving up to 99% accuracy on 100-digit addition problems. Finally, we show that these gains in numeracy also unlock improvements on other multi-step reasoning tasks including sorting and multiplication. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,890
1605.02827 | When Do Luxury Cars Hit the Road? Findings by A Big Data Approach | In this paper, we focus on studying the appearing time of different kinds of cars on the road. This information will enable us to infer the lifestyle of the car owners. The results can further be used to guide marketing towards car owners. Conventionally, this kind of study is carried out by sending out questionnaires, which is limited in scale and diversity. To solve this problem, we propose a fully automatic method to carry out this study. Our study is based on publicly available surveillance camera data. To make the results reliable, we only use the high resolution cameras (i.e. resolution greater than $1280 \times 720$). Images from the public cameras are downloaded every minute. After obtaining 50,000 images, we apply faster R-CNN (region-based convolutional neural network) to detect the cars in the downloaded images and a fine-tuned VGG16 model is used to recognize the car makes. Based on the recognition results, we present a data-driven analysis on the relationship between car makes and their appearing times, with implications on lifestyles. | false | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | 55,677
2107.09783 | Unsupervised Domain Adaptation in LiDAR Semantic Segmentation with
Self-Supervision and Gated Adapters | In this paper, we focus on a less explored, but more realistic and complex problem of domain adaptation in LiDAR semantic segmentation. There is a significant drop in performance of an existing segmentation model when training (source domain) and testing (target domain) data originate from different LiDAR sensors. To overcome this shortcoming, we propose an unsupervised domain adaptation framework that leverages unlabeled target domain data for self-supervision, coupled with an unpaired mask transfer strategy to mitigate the impact of domain shifts. Furthermore, we introduce the gated adapter module with a small number of parameters into the network to account for target domain-specific information. Experiments adapting from both real-to-real and synthetic-to-real LiDAR semantic segmentation benchmarks demonstrate the significant improvement over prior arts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 247,123 |
1503.07921 | Breaking the News: First Impressions Matter on Online News | A growing number of people are changing the way they consume news, replacing the traditional physical newspapers and magazines by their virtual online versions and/or weblogs. The interactivity and immediacy present in online news are changing the way news are being produced and exposed by media corporations. News websites have to create effective strategies to catch people's attention and attract their clicks. In this paper we investigate possible strategies used by online news corporations in the design of their news headlines. We analyze the content of 69,907 headlines produced by four major global media corporations during a minimum of eight consecutive months in 2014. In order to discover strategies that could be used to attract clicks, we extracted features from the text of the news headlines related to the sentiment polarity of the headline. We discovered that the sentiment of the headline is strongly related to the popularity of the news and also to the dynamics of the posted comments on that particular news. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 41,528
1807.03326 | Adaptive Adversarial Attack on Scene Text Recognition | Recent studies have shown that state-of-the-art deep learning models are vulnerable to inputs with small perturbations (adversarial examples). We observe two critical obstacles in adversarial examples: (i) Strong adversarial attacks (e.g., C&W attack) require manually tuning hyper-parameters and take a long time to construct an adversarial example, making it impractical to attack real-time systems; (ii) Most of the studies focus on non-sequential tasks, such as image classification, yet only a few consider sequential tasks. In this work, we speed up adversarial attacks, especially on sequential learning tasks. By leveraging the uncertainty of each task, we directly learn the adaptive multi-task weightings, without manually searching hyper-parameters. A unified architecture is developed and evaluated for both non-sequential tasks and sequential ones. To validate the effectiveness, we take the scene text recognition task as a case study. To the best of our knowledge, our proposed method is the first attempt at adversarial attacks on scene text recognition. Adaptive Attack achieves over 99.9\% success rate with 3-6X speedup compared to state-of-the-art adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 102,490
2305.04094 | Causal, Stochastic MPC for Wave Energy Converters | We implement a causal model predictive control (MPC) strategy to maximize power generation from a wave energy converter (WEC) system, for which the power take-off (PTO) systems have both hard stroke (i.e., displacement) limits and force ratings. The approach models the WEC dynamics in discrete-time, in a manner that exactly preserves energy-flow quantities, and assumes a stationary stochastic disturbance model for the incident wave force. The control objective is to maximize the expected power generation in stationarity, while accounting for parasitic losses in the power train. PTO stroke measurements are assumed to be available for real-time feedback, as well as the free-surface elevation of the waves at a designated location relative to the WEC, and the open-loop dynamics of the WEC are assumed to be linear and time-invariant. Mean-square stability of the MPC algorithm is proven. The methodology is illustrated in a simulation example pertaining to a heaving cylindrical buoy. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 362,631 |
2307.01169 | Analyzing and Improving Greedy 2-Coordinate Updates for
Equality-Constrained Optimization via Steepest Descent in the 1-Norm | We consider minimizing a smooth function subject to a summation constraint over its variables. By exploiting a connection between the greedy 2-coordinate update for this problem and equality-constrained steepest descent in the 1-norm, we give a convergence rate for greedy selection under a proximal Polyak-Lojasiewicz assumption that is faster than random selection and independent of the problem dimension $n$. We then consider minimizing with both a summation constraint and bound constraints, as arises in the support vector machine dual problem. Existing greedy rules for this setting either guarantee trivial progress only or require $O(n^2)$ time to compute. We show that bound- and summation-constrained steepest descent in the L1-norm guarantees more progress per iteration than previous rules and can be computed in only $O(n \log n)$ time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 377,255 |
1410.7455 | Parallel training of DNNs with Natural Gradient and Parameter Averaging | We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multi-core machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 37,076 |
2407.07664 | A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning
Geometry | Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations to class separation in a scale invariant and known geometry. Previous approaches to HPL have either of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but are constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 471,849 |
2407.04326 | LMSeg: A deep graph message-passing network for efficient and accurate
semantic segmentation of large-scale 3D landscape meshes | Semantic segmentation of large-scale 3D landscape meshes is pivotal for various geospatial applications, including spatial analysis, automatic mapping and localization of target objects, and urban planning and development. This requires an efficient and accurate 3D perception system to understand and analyze real-world environments. However, traditional mesh segmentation methods face challenges in accurately segmenting small objects and maintaining computational efficiency due to the complexity and large size of 3D landscape mesh datasets. This paper presents an end-to-end deep graph message-passing network, LMSeg, designed to efficiently and accurately perform semantic segmentation on large-scale 3D landscape meshes. The proposed approach takes the barycentric dual graph of meshes as inputs and applies deep message-passing neural networks to hierarchically capture the geometric and spatial features from the barycentric graph structures and learn intricate semantic information from textured meshes. The hierarchical and local pooling of the barycentric graph, along with the effective geometry aggregation modules of LMSeg, enable fast inference and accurate segmentation of small-sized and irregular mesh objects in various complex landscapes. Extensive experiments on two benchmark datasets (natural and urban landscapes) demonstrate that LMSeg significantly outperforms existing learning-based segmentation methods in terms of object segmentation accuracy and computational efficiency. Furthermore, our method exhibits strong generalization capabilities across diverse landscapes and demonstrates robust resilience against varying mesh densities and landscape topologies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,519 |
2402.02798 | A Comprehensive Numerical Approach to Coil Placement in Cerebral
Aneurysms: Mathematical Modeling and In Silico Occlusion Classification | Endovascular coil embolization is one of the primary treatment techniques for cerebral aneurysms. Although it is a well established and minimally invasive method, it bears the risk of sub-optimal coil placement, which can lead to incomplete occlusion of the aneurysm, possibly causing recurrence. One of the key features of coils is that they have an imprinted natural shape supporting the fixation within the aneurysm. For the spatial discretization, our mathematical coil model is based on the Discrete Elastic Rod model, which results in a dimension-reduced 1D system of differential equations. We include bending and twisting responses to account for the coil's natural curvature. Collisions between coil segments and the aneurysm wall are handled by an efficient contact algorithm that relies on an octree-based collision detection. The numerical solution of the model is obtained by a symplectic semi-implicit Euler time stepping method. Our model can be easily incorporated into blood flow simulations of embolized aneurysms. In order to differentiate optimal from sub-optimal placements, we employ a suitable in silico Raymond-Roy type occlusion classification and measure the local packing density in the aneurysm at its neck, wall-region and core. We investigate the impact of uncertainties in the coil parameters and embolization procedure. To this end, we vary the position and the angle of insertion of the microcatheter, and approximate the local packing density distributions by evaluating sample statistics. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 426,731
2305.05097 | Self-Repellent Random Walks on General Graphs -- Achieving Minimal
Sampling Variance via Nonlinear Markov Chains | We consider random walks on discrete state spaces, such as general undirected graphs, where the random walkers are designed to approximate a target quantity over the network topology via sampling and neighborhood exploration in the form of Markov chain Monte Carlo (MCMC) procedures. Given any Markov chain corresponding to a target probability distribution, we design a self-repellent random walk (SRRW) which is less likely to transition to nodes that were highly visited in the past, and more likely to transition to seldom visited nodes. For a class of SRRWs parameterized by a positive real {\alpha}, we prove that the empirical distribution of the process converges almost surely to the target (stationary) distribution of the underlying Markov chain kernel. We then provide a central limit theorem and derive the exact form of the arising asymptotic covariance matrix, which allows us to show that the SRRW with a stronger repellence (larger {\alpha}) always achieves a smaller asymptotic covariance, in the sense of Loewner ordering of covariance matrices. Especially for SRRW-driven MCMC algorithms, we show that the decrease in the asymptotic sampling variance is of the order O(1/{\alpha}), eventually going down to zero. Finally, we provide numerical simulations complementary to our theoretical results, also empirically testing a version of SRRW with {\alpha} increasing in time to combine the benefits of smaller asymptotic variance due to large {\alpha}, with empirically observed faster mixing properties of SRRW with smaller {\alpha}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 363,001
2404.17105 | Synthesizing Iris Images using Generative Adversarial Networks: Survey
and Comparative Analysis | Biometric systems based on iris recognition are currently being used in border control applications and mobile devices. However, research in iris recognition is stymied by various factors such as limited datasets of bonafide irides and presentation attack instruments; restricted intra-class variations; and privacy concerns. Some of these issues can be mitigated by the use of synthetic iris data. In this paper, we present a comprehensive review of state-of-the-art GAN-based synthetic iris image generation techniques, evaluating their strengths and limitations in producing realistic and useful iris images that can be used for both training and testing iris recognition systems and presentation attack detectors. In this regard, we first survey the various methods that have been used for synthetic iris generation and specifically consider generators based on StyleGAN, RaSGAN, CIT-GAN, iWarpGAN, StarGAN, etc. We then analyze the images generated by these models for realism, uniqueness, and biometric utility. This comprehensive analysis highlights the pros and cons of various GANs in the context of developing robust iris matchers and presentation attack detectors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 449,735 |
2303.02867 | Boundary-semantic collaborative guidance network with dual-stream
feedback mechanism for salient object detection in optical remote sensing
imagery | With the increasing application of deep learning in various domains, salient object detection in optical remote sensing images (ORSI-SOD) has attracted significant attention. However, most existing ORSI-SOD methods predominantly rely on local information from low-level features to infer salient boundary cues and supervise them using boundary ground truth, but fail to sufficiently optimize and protect the local information, and almost all approaches ignore the potential advantages offered by the last layer of the decoder to maintain the integrity of saliency maps. To address these issues, we propose a novel method named boundary-semantic collaborative guidance network (BSCGNet) with dual-stream feedback mechanism. First, we propose a boundary protection calibration (BPC) module, which effectively reduces the loss of edge position information during forward propagation and suppresses noise in low-level features without relying on boundary ground truth. Second, based on the BPC module, a dual feature feedback complementary (DFFC) module is proposed, which aggregates boundary-semantic dual features and provides effective feedback to coordinate features across different layers, thereby enhancing cross-scale knowledge communication. Finally, to obtain more complete saliency maps, we consider the uniqueness of the last layer of the decoder for the first time and propose the adaptive feedback refinement (AFR) module, which further refines feature representation and eliminates differences between features through a unique feedback mechanism. Extensive experiments on three benchmark datasets demonstrate that BSCGNet exhibits distinct advantages in challenging scenarios and outperforms the 17 state-of-the-art (SOTA) approaches proposed in recent years. Codes and results have been released on GitHub: https://github.com/YUHsss/BSCGNet. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 349,512
1302.4268 | Re-Encoding Techniques for Interpolation-Based Decoding of Reed-Solomon
Codes | We consider interpolation-based decoding of Reed-Solomon codes using the Guruswami-Sudan algorithm (GSA) and investigate the effects of two modification techniques for received vectors, i.e., the re-encoding map and the newly introduced periodicity projection. After an analysis of the latter, we track the benefits (that is low Hamming weight and regular structure) of modified received vectors through the interpolation step of the GSA and show how the involved homogeneous linear system of equations can be compressed. We show that this compression as well as the recovery of the interpolated bivariate polynomial is particularly simple when the periodicity projection was applied. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 22,135 |
1605.07147 | Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds | We study optimization of finite sums of geodesically smooth functions on Riemannian manifolds. Although variance reduction techniques for optimizing finite-sums have witnessed tremendous attention in the recent years, existing work is limited to vector space problems. We introduce Riemannian SVRG (RSVRG), a new variance reduced Riemannian optimization method. We analyze RSVRG for both geodesically convex and nonconvex (smooth) functions. Our analysis reveals that RSVRG inherits advantages of the usual SVRG method, but with factors depending on curvature of the manifold that influence its convergence. To our knowledge, RSVRG is the first provably fast stochastic Riemannian method. Moreover, our paper presents the first non-asymptotic complexity analysis (novel even for the batch setting) for nonconvex Riemannian optimization. Our results have several implications; for instance, they offer a Riemannian perspective on variance reduced PCA, which promises a short, transparent convergence analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 56,253 |
1310.7205 | Algorithms for Timed Consistency Models | One of the major challenges in distributed systems is establishing consistency among replicated data in a timely fashion. While the consistent ordering of events has been extensively researched, the time span to reach a consistent state is mostly considered an effect of the chosen consistency model, rather than being considered a parameter itself. This paper argues that it is possible to give guarantees on the timely consistency of an operation. Subsequent to an update the cloud and all connected clients will either be consistent with the update within the defined upper bound of time or the update will be returned. This paper suggests the respective algorithms and protocols capable of producing such comprehensive Timed Consistency, as conceptually proposed by Torres-Rojas et al. The solution offers business customers an increasing level of predictability and adjustability. The temporal certainty concerning the execution makes the cloud a more attractive tool for time-critical or mission-critical applications fearing the poor availability of Strong Consistency in cloud environments. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 28,016 |
1703.07334 | Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments | Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize plane or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose real-time monocular plane SLAM to demonstrate that scene understanding could improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with state estimation error of 0.67%. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 70,372
2403.01582 | Selection, Ensemble, and Adaptation: Advancing Multi-Source-Free Domain
Adaptation via Architecture Zoo | Conventional Multi-Source Free Domain Adaptation (MSFDA) assumes that each source domain provides a single source model, and all source models adopt a uniform architecture. This paper introduces Zoo-MSFDA, a more general setting that allows each source domain to offer a zoo of multiple source models with different architectures. While it enriches the source knowledge, Zoo-MSFDA risks being dominated by suboptimal/harmful models. To address this issue, we theoretically analyze the model selection problem in Zoo-MSFDA, and introduce two principles: transferability principle and diversity principle. Recognizing the challenge of measuring transferability, we subsequently propose a novel Source-Free Unsupervised Transferability Estimation (SUTE). It enables assessing and comparing transferability across multiple source models with different architectures under domain shift, without requiring target labels and source data. Based on above, we introduce a Selection, Ensemble, and Adaptation (SEA) framework to address Zoo-MSFDA, which consists of: 1) source models selection based on the proposed principles and SUTE; 2) ensemble construction based on SUTE-estimated transferability; 3) target-domain adaptation of the ensemble model. Evaluations demonstrate that our SEA framework, with the introduced Zoo-MSFDA setting, significantly improves adaptation performance (e.g., 13.5% on DomainNet). Additionally, our SUTE achieves state-of-the-art performance in transferability estimation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,488 |
2403.04957 | Automatic and Universal Prompt Injection Attacks against Large Language
Models | Large Language Models (LLMs) excel in processing and generating human language, powered by their ability to interpret and follow instructions. However, their capabilities can be exploited through prompt injection attacks. These attacks manipulate LLM-integrated applications into producing responses aligned with the attacker's injected content, deviating from the user's actual requests. The substantial risks posed by these attacks underscore the need for a thorough understanding of the threats. Yet, research in this area faces challenges due to the lack of a unified goal for such attacks and their reliance on manually crafted prompts, complicating comprehensive assessments of prompt injection robustness. We introduce a unified framework for understanding the objectives of prompt injection attacks and present an automated gradient-based method for generating highly effective and universal prompt injection data, even in the face of defensive measures. With only five training samples (0.3% relative to the test data), our attack can achieve superior performance compared with baselines. Our findings emphasize the importance of gradient-based testing, which can avoid overestimation of robustness, especially for defense mechanisms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 435,801 |
1009.0744 | New and improved Johnson-Lindenstrauss embeddings via the Restricted
Isometry Property | Consider an m by N matrix Phi with the Restricted Isometry Property of order k and level delta, that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1 +- delta under application of Phi. We show that by randomizing the column signs of such a matrix Phi, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N into R^m without distorting the norm of any point in the set by more than a factor of 1 +- delta. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) appearing in Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also have a direct application in the area of compressed sensing for redundant dictionaries. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,475 |
1912.12049 | Projection pursuit based on Gaussian mixtures and evolutionary
algorithms | We propose a projection pursuit (PP) algorithm based on Gaussian mixture models (GMMs). The negentropy obtained from a multivariate density estimated by GMMs is adopted as the PP index to be maximised. For a fixed dimension of the projection subspace, the GMM-based density estimation is projected onto that subspace, where an approximation of the negentropy for Gaussian mixtures is computed. Then, Genetic Algorithms (GAs) are used to find the optimal, orthogonal projection basis by maximising the former approximation. We show that this semi-parametric approach to PP is flexible and allows highly informative structures to be detected, by projecting multivariate datasets onto a subspace, where the data can be feasibly visualised. The performance of the proposed approach is shown on both artificial and real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 158,741 |
2212.08149 | Agent-Based Model of Crowd Dynamics in Emergency Situations: A Focus on
People With Disabilities | Collective behavior of people in large groups and emergent crowd dynamics can have dangerous and disastrous results when panic is introduced. These events can be caused by emergency situations such as fires in a large building or a stampeding effect when people are rushing in a densely packed area. In this paper, we will use an agent-based modeling approach to simulate different evacuation events in an attempt to understand what is the most efficient scenario. Specifically, we will focus on how people with disabilities are impacted by chosen parameters during an emergency evacuation. We chose an ABM to simulate this because we want to specify specific roles for different "agents" in our model. Specifically, we will focus on the influence of people with disabilities on crowd dynamics and the optimal exits. Does the placement of seating for people with disabilities affect the time it takes for the last person to exit the building? What effect does poor signage have on the time it takes for able-bodied and people with disabilities to exit safely? What happens if some people do not know about alternative exits in their panicked state? Using our agent-based model, we will investigate these questions while also adjusting other outside effects such as the density of the crowd, the speed at which people exit, and the location of people at the start of the simulation. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 336,642 |
2110.11950 | Adversarial robustness for latent models: Revisiting the robust-standard
accuracies tradeoff | Over the past few years, several adversarial training methods have been proposed to improve the robustness of machine learning models against adversarial perturbations in the input. Despite remarkable progress in this regard, adversarial training is often observed to drop the standard test accuracy. This phenomenon has intrigued the research community to investigate the potential tradeoff between standard accuracy (a.k.a generalization) and robust accuracy (a.k.a robust generalization) as two performance measures. In this paper, we revisit this tradeoff for latent models and argue that this tradeoff is mitigated when the data enjoys a low-dimensional structure. In particular, we consider binary classification under two data generative models, namely Gaussian mixture model and generalized linear model, where the features data lie on a low-dimensional manifold. We develop a theory to show that the low-dimensional manifold structure allows one to obtain models that are nearly optimal with respect to both, the standard accuracy and the robust accuracy measures. We further corroborate our theory with several numerical experiments, including Mixture of Factor Analyzers (MFA) model trained on the MNIST dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,662 |
2112.10716 | BAPose: Bottom-Up Pose Estimation with Disentangled Waterfall
Representations | We propose BAPose, a novel bottom-up approach that achieves state-of-the-art results for multi-person pose estimation. Our end-to-end trainable framework leverages a disentangled multi-scale waterfall architecture and incorporates adaptive convolutions to infer keypoints more precisely in crowded scenes with occlusions. The multi-scale representations, obtained by the disentangled waterfall module in BAPose, leverage the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Our results on the challenging COCO and CrowdPose datasets demonstrate that BAPose is an efficient and robust framework for multi-person pose estimation, achieving significant improvements on state-of-the-art accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 272,510 |
1902.07438 | Dynamic Matrix Decomposition for Action Recognition | Designing a technique for the automatic analysis of different actions in videos in order to detect the presence of activities of interest is of high significance nowadays. In this paper, we explore a robust and dynamic appearance technique for the purpose of identifying different action activities. We also exploit a low-rank and structured sparse matrix decomposition (LSMD) method to better model these activities. Our method is effective in encoding localized spatio-temporal features, which enables the analysis of local motion taking place in the video. Our proposed model uses adjacent frame differences as the input to the method, thereby forcing it to capture the changes occurring in the video. The performance of our model is tested on a benchmark dataset in terms of detection accuracy. Results achieved with our model showed the promising capability of our model in detecting action activities. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 121,987
2007.10000 | On the Comparison of Classic and Deep Keypoint Detector and Descriptor
Methods | The purpose of this study is to give a performance comparison between several classic hand-crafted and deep key-point detector and descriptor methods. In particular, we consider the following classical algorithms: SIFT, SURF, ORB, FAST, BRISK, MSER, HARRIS, KAZE, AKAZE, AGAST, GFTT, FREAK, BRIEF and RootSIFT, where a subset of all combinations is paired into detector-descriptor pipelines. Additionally, we analyze the performance of two recent and perspective deep detector-descriptor models, LF-Net and SuperPoint. Our benchmark relies on the HPSequences dataset that provides real and diverse images under various geometric and illumination changes. We analyze the performance on three evaluation tasks: keypoint verification, image matching and keypoint retrieval. The results show that certain classic and deep approaches are still comparable, with some classic detector-descriptor combinations overperforming pretrained deep models. In terms of the execution times of tested implementations, SuperPoint model is the fastest, followed by ORB. The source code is published on \url{https://github.com/kristijanbartol/keypoint-algorithms-benchmark}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 188,134 |
1905.12692 | Vector-Valued Graph Trend Filtering with Non-Convex Penalties | This work studies the denoising of piecewise smooth graph signals that exhibit inhomogeneous levels of smoothness over a graph, where the value at each node can be vector-valued. We extend the graph trend filtering framework to denoising vector-valued graph signals with a family of non-convex regularizers, which exhibit superior recovery performance over existing convex regularizers. Using an oracle inequality, we establish the statistical error rates of first-order stationary points of the proposed non-convex method for generic graphs. Furthermore, we present an ADMM-based algorithm to solve the proposed method and establish its convergence. Numerical experiments are conducted on both synthetic and real-world data for denoising, support recovery, event detection, and semi-supervised classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,842 |
1505.05451 | Fuzzy Least Squares Twin Support Vector Machines | Least Squares Twin Support Vector Machine (LST-SVM) has been shown to be an efficient and fast algorithm for binary classification. It combines the operating principles of Least Squares SVM (LS-SVM) and Twin SVM (T-SVM); it constructs two non-parallel hyperplanes (as in T-SVM) by solving two systems of linear equations (as in LS-SVM). Despite its efficiency, LST-SVM is still unable to cope with two features of real-world problems. First, in many real-world applications, labels of samples are not deterministic; they come naturally with their associated membership degrees. Second, samples in real-world applications may not be equally important and their importance degrees affect the classification. In this paper, we propose Fuzzy LST-SVM (FLST-SVM) to deal with these two characteristics of real-world data. Two models are introduced for FLST-SVM: the first model builds up crisp hyperplanes using training samples and their corresponding membership degrees. The second model, on the other hand, constructs fuzzy hyperplanes using training samples and their membership degrees. Numerical evaluation of the proposed method with synthetic and real datasets demonstrates significant improvement in the classification accuracy of FLST-SVM when compared to well-known existing versions of SVM. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 43,309
2006.00830 | Temporal Aggregate Representations for Long-Range Video Understanding | Future prediction, especially in long-range videos, requires reasoning from current and past observations. In this work, we address questions of temporal extent, scaling, and level of semantic abstraction with a flexible multi-granular temporal aggregation framework. We show that it is possible to achieve state of the art in both next action and dense anticipation with simple techniques such as max-pooling and attention. To demonstrate the anticipation capabilities of our model, we conduct experiments on Breakfast, 50Salads, and EPIC-Kitchens datasets, where we achieve state-of-the-art results. With minimal modifications, our model can also be extended for video segmentation and action recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 179,584 |
2407.00837 | Towards Robust Speech Representation Learning for Thousands of Languages | Self-supervised learning (SSL) has helped extend speech technologies to more languages by reducing the need for labeled data. However, models are still far from supporting the world's 7000+ languages. We propose XEUS, a Cross-lingual Encoder for Universal Speech, trained on over 1 million hours of data across 4057 languages, extending the language coverage of SSL models 4-fold. We combine 1 million hours of speech from existing publicly accessible corpora with a newly created corpus of 7400+ hours from 4057 languages, which will be publicly released. To handle the diverse conditions of multilingual speech data, we augment the typical SSL masked prediction approach with a novel dereverberation objective, increasing robustness. We evaluate XEUS on several benchmarks, and show that it consistently outperforms or achieves comparable results to state-of-the-art (SOTA) SSL models across a variety of tasks. XEUS sets a new SOTA on the ML-SUPERB benchmark: it outperforms MMS 1B and w2v-BERT 2.0 v2 by 0.8% and 4.4% respectively, despite having fewer parameters or less pre-training data. Checkpoints, code, and data are found in https://www.wavlab.org/activities/2024/xeus/. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 469,027
2111.12855 | Robust Equivariant Imaging: a fully unsupervised framework for learning
to image from noisy and partial measurements | Deep networks provide state-of-the-art performance in multiple imaging inverse problems ranging from medical imaging to computational photography. However, most existing networks are trained with clean signals which are often hard or impossible to obtain. Equivariant imaging (EI) is a recent self-supervised learning framework that exploits the group invariance present in signal distributions to learn a reconstruction function from partial measurement data alone. While EI results are impressive, its performance degrades with increasing noise. In this paper, we propose a Robust Equivariant Imaging (REI) framework which can learn to image from noisy partial measurements alone. The proposed method uses Stein's Unbiased Risk Estimator (SURE) to obtain a fully unsupervised training loss that is robust to noise. We show that REI leads to considerable performance gains on linear and nonlinear inverse problems, thereby paving the way for robust unsupervised imaging with deep networks. Code is available at: https://github.com/edongdongchen/REI. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,096 |
1803.09522 | A Provably Correct Algorithm for Deep Learning that Actually Works | We describe a layer-by-layer algorithm for training deep convolutional networks, where each step involves gradient updates for a two-layer network followed by a simple clustering algorithm. Our algorithm stems from a deep generative model that generates images level by level, where lower resolution images correspond to latent semantic classes. We analyze the convergence rate of our algorithm assuming that the data is indeed generated according to this model (as well as additional assumptions). While we do not pretend to claim that the assumptions are realistic for natural images, we do believe that they capture some true properties of real data. Furthermore, we show that our algorithm actually works in practice (on the CIFAR dataset), achieving results in the same ballpark as that of vanilla convolutional neural networks that are being trained by stochastic gradient descent. Finally, our proof techniques may be of independent interest. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 93,515
2209.12159 | Grant-Free NOMA-OTFS Paradigm: Enabling Efficient Ubiquitous Access for
LEO Satellite Internet-of-Things | With the blooming of the Internet-of-Things (IoT), we are witnessing an explosion in the number of IoT terminals, triggering an unprecedented demand for ubiquitous wireless access globally. In this context, the emerging low-Earth-orbit satellites (LEO-SATs) have been regarded as a promising enabler to complement terrestrial wireless networks in providing ubiquitous connectivity and bridging the ever-growing digital divide in the expected next-generation wireless communications. Nevertheless, the harsh conditions posed by LEO-SATs have imposed significant challenges to the current multiple access (MA) schemes and led to an emerging paradigm shift in system design. In this article, we first provide a comprehensive overview of the state-of-the-art MA schemes and investigate their limitations in the context of LEO-SATs. To this end, we propose a novel next generation MA (NGMA) scheme, which amalgamates the grant-free non-orthogonal multiple access (GF-NOMA) mechanism and the orthogonal time frequency space (OTFS) waveform, for simplifying the connection procedure with reduced access latency and enhanced Doppler-robustness. Critical open challenges and future directions are finally presented for further technical development. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 319,440
2203.12281 | Efficient Fully Distributed Federated Learning with Adaptive Local Links | Nowadays, data-driven, machine and deep learning approaches have provided unprecedented performance in various complex tasks, including image classification and object detection, and in a variety of application areas, like autonomous vehicles, medical imaging and wireless communications. Traditionally, such approaches have been deployed, along with the involved datasets, on standalone devices. Recently, a shift has been observed towards the so-called Edge Machine Learning, in which centralized architectures are adopted that allow multiple devices with local computational and storage resources to collaborate with the assistance of a centralized server. The well-known federated learning approach is able to utilize such architectures by allowing the exchange of only parameters with the server, while keeping the datasets private to each contributing device. In this work, we propose a fully distributed, diffusion-based learning algorithm that does not require a central server and propose an adaptive combination rule for the cooperation of the devices. By adopting a classification task on the MNIST dataset, the efficacy of the proposed algorithm over corresponding counterparts is demonstrated via the reduction of the number of collaboration rounds required to achieve an acceptable accuracy level in non- IID dataset scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 287,211 |
2111.03447 | Contextual Bayesian optimization with binary outputs | Bayesian optimization (BO) is an efficient method to optimize expensive black-box functions. It has been generalized to scenarios where objective function evaluations return stochastic binary feedback, such as success/failure in a given test, or preference between different parameter settings. In many real-world situations, the objective function can be evaluated in controlled 'contexts' or 'environments' that directly influence the observations. For example, one could directly alter the 'difficulty' of the test that is used to evaluate a system's performance. With binary feedback, the context determines the information obtained from each observation. For example, if the test is too easy/hard, the system will always succeed/fail, yielding uninformative binary outputs. Here we combine ideas from Bayesian active learning and optimization to efficiently choose the best context and optimization parameter on each iteration. We demonstrate the performance of our algorithm and illustrate how it can be used to tackle a concrete application in visual psychophysics: efficiently improving patients' vision via corrective lenses, using psychophysics measurements. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,171 |
2203.13972 | Autoregressive Linguistic Steganography Based on BERT and Consistency
Coding | Linguistic steganography (LS) conceals the presence of communication by embedding secret information into a text. How to generate a high-quality text carrying secret information is a key problem. With the widespread application of deep learning in natural language processing, recent algorithms use a language model (LM) to generate the steganographic text, which provides a higher payload compared with many previous arts. However, the security still needs to be enhanced. To tackle this problem, we propose a novel autoregressive LS algorithm based on BERT and consistency coding, which achieves a better trade-off between embedding payload and system security. In the proposed work, based on the introduction of the masked LM, given a text, we use consistency coding to make up for the shortcomings of the block coding used in previous work, so that we can encode arbitrary-size candidate token sets and take advantage of the probability distribution for information hiding. The masked positions to be embedded are filled with tokens determined in an autoregressive manner to enhance the connection between contexts and therefore maintain the quality of the text. Experimental results have shown that, compared with related works, the proposed work improves the fluency of the steganographic text while guaranteeing security, and also increases the embedding payload to a certain extent. | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | 287,831
cs/0611090 | Algebraic Soft-Decision Decoding of Reed-Solomon Codes Using Bit-level
Soft Information | The performance of algebraic soft-decision decoding of Reed-Solomon codes using bit-level soft information is investigated. Optimal multiplicity assignment strategies of algebraic soft-decision decoding with infinite cost are first studied over erasure channels and the binary symmetric channel. The corresponding decoding radii are calculated in closed forms and tight bounds on the error probability are derived. The multiplicity assignment strategy and the corresponding performance analysis are then generalized to characterize the decoding region of algebraic softdecision decoding over a mixed error and bit-level erasure channel. The bit-level decoding region of the proposed multiplicity assignment strategy is shown to be significantly larger than that of conventional Berlekamp-Massey decoding. As an application, a bit-level generalized minimum distance decoding algorithm is proposed. The proposed decoding compares favorably with many other Reed-Solomon soft-decision decoding algorithms over various channels. Moreover, owing to the simplicity of the proposed bit-level generalized minimum distance decoding, its performance can be tightly bounded using order statistics. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,889 |
1711.00629 | Sleep Stage Classification Based on Multi-level Feature Learning and
Recurrent Neural Networks via Wearable Device | This paper proposes a practical approach for automatic sleep stage classification based on a multi-level feature learning framework and Recurrent Neural Network (RNN) classifier using heart rate and wrist actigraphy derived from a wearable device. The feature learning framework is designed to extract low- and mid-level features. Low-level features capture temporal and frequency domain properties and mid-level features learn compositions and structural information of signals. Since sleep staging is a sequential problem with long-term dependencies, we take advantage of RNNs with Bidirectional Long Short-Term Memory (BLSTM) architectures for sequence data learning. To simulate the actual situation of daily sleep, experiments are conducted with a resting group in which sleep is recorded in resting state, and a comprehensive group in which both resting sleep and non-resting sleep are included. We evaluate the algorithm based on an eight-fold cross validation to classify five sleep stages (W, N1, N2, N3, and REM). The proposed algorithm achieves weighted precision, recall and F1 score of 58.0%, 60.3%, and 58.2% in the resting group and 58.5%, 61.1%, and 58.5% in the comprehensive group, respectively. Various comparison experiments demonstrate the effectiveness of feature learning and BLSTM. We further explore the influence of depth and width of RNNs on performance. Our method is specially proposed for wearable devices and is expected to be applicable for long-term sleep monitoring at home. Without using too much prior domain knowledge, our method has the potential to generalize sleep disorder detection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 83,752
2408.16353 | DetectBERT: Towards Full App-Level Representation Learning to Detect
Android Malware | Recent advancements in ML and DL have significantly improved Android malware detection, yet many methodologies still rely on basic static analysis, bytecode, or function call graphs that often fail to capture complex malicious behaviors. DexBERT, a pre-trained BERT-like model tailored for Android representation learning, enriches class-level representations by analyzing Smali code extracted from APKs. However, its functionality is constrained by its inability to process multiple Smali classes simultaneously. This paper introduces DetectBERT, which integrates correlated Multiple Instance Learning (c-MIL) with DexBERT to handle the high dimensionality and variability of Android malware, enabling effective app-level detection. By treating class-level features as instances within MIL bags, DetectBERT aggregates these into a comprehensive app-level representation. Our evaluation demonstrates that DetectBERT not only surpasses existing state-of-the-art detection methods but also adapts to evolving malware threats. Moreover, the versatility of the DetectBERT framework holds promising potential for broader applications in app-level analysis and other software engineering tasks, offering new avenues for research and development. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 484,303 |
2207.01947 | Making sense of spoken plurals | Distributional semantics offers new ways to study the semantics of morphology. This study focuses on the semantics of noun singulars and their plural inflectional variants in English. Our goal is to compare two models for the conceptualization of plurality. One model (FRACSS) proposes that all singular-plural pairs should be taken into account when predicting plural semantics from singular semantics. The other model (CCA) argues that conceptualization for plurality depends primarily on the semantic class of the base word. We compare the two models on the basis of how well the speech signal of plural tokens in a large corpus of spoken American English aligns with the semantic vectors predicted by the two models. Two measures are employed: the performance of a form-to-meaning mapping and the correlations between form distances and meaning distances. Results converge on a superior alignment for CCA. Our results suggest that usage-based approaches to pluralization in which a given word's own semantic neighborhood is given priority outperform theories according to which pluralization is conceptualized as a process building on high-level abstraction. We see that what has often been conceived of as a highly abstract concept, [+plural], is better captured via a family of mid-level partial generalizations. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 306,354 |
2406.06934 | Decentralized Social Networks and the Future of Free Speech Online | Decentralized social networks like Mastodon and BlueSky are trending topics that have drawn much attention and discussion in recent years. By devolving powers from the central node to the end users, decentralized social networks aim to cure existing pathologies on the centralized platforms and have been viewed by many as the future of the Internet. This article critically and systematically assesses the decentralization project's prospect for communications online. It uses normative theories of free speech to examine whether and how the decentralization design could facilitate users' freedom of expression online. The analysis shows that both promises and pitfalls exist, highlighting the importance of value-based design in this area. Two most salient issues for the design of the decentralized networks are: how to balance the decentralization ideal with constant needs of centralization on the network, and how to empower users to make them truly capable of exercising their control. The article then uses some design examples, such as the shared blocklist and the opt-in search function, to illustrate the value considerations underlying the design choices. Some tentative proposals for law and policy interventions are offered to better facilitate the design of the new network. Rather than providing clear answers, the article seeks to map the value implications of the design choices, highlight the stakes, and point directions for future research. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | true | 462,824 |
1508.01162 | Implementation of Resistive Type Superconducting Fault Current Limiters
in Electrical Grids: Performance Analysis and Measuring of Optimal Locations | In the past few years there has been a significant rise in the short-circuit current levels in transmission and distribution networks, due to increasing power demands and the addition of distributed generation sources. This leads to the need to integrate novel protection systems such as superconducting fault current limiters (SFCLs), ... . SFCL models on the electric distribution networks largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. However, beyond the framework of these models, the study of the performance, reliability, and location strategy for the installation of sole or multiple SFCLs in power grids still lacks proper development, leading to a pressing need for comprehensive and systematic studies on this issue. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature dynamic power-law dependence between the electrical field and the current density. Our results are compared with step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built based on the UK power standard, and the impact of these protection strategies on the performance of the overall electricity network was studied. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions have been simulated, and the figures for the fault current reduction predicted by both fault current limiting models have been compared in terms of multiple current measuring points and allocation strategies... | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 45,758
2410.05320 | The OCON model: an old but gold solution for distributable supervised classification | This paper introduces a structured application of the One-Class approach and the One-Class-One-Network model for supervised classification tasks, specifically addressing a vowel phoneme classification case study within the Automatic Speech Recognition research field. Through pseudo-Neural Architecture Search and Hyper-Parameter Tuning experiments conducted with an informed grid-search methodology, we achieve classification accuracy comparable to today's complex architectures (90.0 - 93.7%). Despite its simplicity, our model prioritizes generalization of language context and distributed applicability, supported by relevant statistical and performance metrics. The experiment code is openly available on our GitHub. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | true | false | 495,673 |
2107.05747 | SoftHebb: Bayesian Inference in Unsupervised Hebbian Soft Winner-Take-All Networks | Hebbian plasticity in winner-take-all (WTA) networks is highly attractive for neuromorphic on-chip learning, owing to its efficient, local, unsupervised, and on-line nature. Moreover, its biological plausibility may help overcome important limitations of artificial algorithms, such as their susceptibility to adversarial attacks, and their high demands for training-example quantity and repetition. However, Hebbian WTA learning has found little use in machine learning (ML), likely because it has been missing an optimization theory compatible with deep learning (DL). Here we show rigorously that WTA networks constructed by standard DL elements, combined with a Hebbian-like plasticity that we derive, maintain a Bayesian generative model of the data. Importantly, without any supervision, our algorithm, SoftHebb, minimizes cross-entropy, i.e. a common loss function in supervised DL. We show this theoretically and in practice. The key is a "soft" WTA where there is no absolute "hard" winner neuron. Strikingly, in shallow-network comparisons with backpropagation (BP), SoftHebb shows advantages beyond its Hebbian efficiency. Namely, it converges in fewer iterations, and is significantly more robust to noise and adversarial attacks. Notably, attacks that maximally confuse SoftHebb are also confusing to the human eye, potentially linking human perceptual robustness with Hebbian WTA circuits of cortex. Finally, SoftHebb can generate synthetic objects as interpolations of real object classes. All in all, Hebbian efficiency, theoretical underpinning, cross-entropy minimization, and surprising empirical advantages suggest that SoftHebb may inspire highly neuromorphic and radically different, but practical and advantageous, learning algorithms and hardware accelerators. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 245,868 |
2005.05507 | A Framework for Hierarchical Multilingual Machine Translation | Multilingual machine translation has recently been in vogue given its potential for improving machine translation performance for low-resource languages via transfer learning. Empirical examinations demonstrating the success of existing multilingual machine translation strategies, however, are limited to experiments in specific language groups. In this paper, we present a hierarchical framework for building multilingual machine translation strategies that takes advantage of a typological language family tree, enabling transfer among similar languages while avoiding the negative effects that result from incorporating languages that are too different from each other. Exhaustive experimentation on a dataset with 41 languages demonstrates the validity of the proposed framework, especially when it comes to improving the performance of low-resource languages via the use of typologically related families for which richer sets of resources are available. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 176,747 |
2311.14934 | Robust Graph Neural Networks via Unbiased Aggregation | The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks despite the existence of numerous defenses. In this work, we delve into the robustness analysis of representative robust GNNs and provide a unified robust estimation point of view to understand their robustness and limitations. Our novel analysis of estimation bias motivates the design of a robust and unbiased graph signal estimator. We then develop an efficient Quasi-Newton Iterative Reweighted Least Squares algorithm to solve the estimation problem, which is unfolded as robust unbiased aggregation layers in GNNs with theoretical guarantees. Our comprehensive experiments confirm the strong robustness of our proposed model under various scenarios, and the ablation study provides a deep understanding of its advantages. Our code is available at https://github.com/chris-hzc/RUNG. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 410,318 |
2004.13651 | Fast and Memory-Efficient Neural Code Completion | Code completion is one of the most widely used features of modern integrated development environments (IDEs). While deep learning has made significant progress in the statistical prediction of source code, state-of-the-art neural network models consume hundreds of megabytes of memory, bloating the development environment. We address this in two steps: first we present a modular neural framework for code completion. This allows us to explore the design space and evaluate different techniques. Second, within this framework we design a novel reranking neural completion model that combines static analysis with granular token encodings. The best neural reranking model consumes just 6 MB of RAM (19x less than previous models), computes a single completion in 8 ms, and achieves 90% accuracy in its top five suggestions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 174,622 |
2106.10870 | Non-native English lexicon creation for bilingual speech synthesis | Bilingual English speakers speak English as one of their languages. Their English is of a non-native kind, and their conversations are of a code-mixed fashion. The intelligibility of a bilingual text-to-speech (TTS) system for such non-native English speakers depends on a lexicon that captures the phoneme sequence used by non-native speakers. However, due to the lack of non-native English lexicon, existing bilingual TTS systems employ native English lexicons that are widely available, in addition to their native language lexicon. Due to the inconsistency between the non-native English pronunciation in the audio and native English lexicon in the text, the intelligibility of synthesized speech in such TTS systems is significantly reduced. This paper is motivated by the knowledge that the native language of the speaker highly influences non-native English pronunciation. We propose a generic approach to obtain rules based on letter to phoneme alignment to map native English lexicon to their non-native version. The effectiveness of such mapping is studied by comparing bilingual (Indian English and Hindi) TTS systems trained with and without the proposed rules. The subjective evaluation shows that the bilingual TTS system trained with the proposed non-native English lexicon rules obtains a 6% absolute improvement in preference. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 242,189 |
1403.3515 | Concept Trees: Building Dynamic Concepts from Semi-Structured Data using Nature-Inspired Methods | This paper describes a method for creating structure from heterogeneous sources, as part of an information database, or more specifically, a 'concept base'. Structures called 'concept trees' can grow from the semi-structured sources when consistent sequences of concepts are presented. They might be considered to be dynamic databases, possibly a variation on the distributed Agent-Based or Cellular Automata models, or even related to Markov models. Semantic comparison of text is required, but the trees can be built more from automatic knowledge and statistical feedback. This reduced model might also be attractive for security or privacy reasons, as not all of the potential data gets saved. The construction process maintains the key requirement of generality, allowing it to be used as part of a generic framework. The nature of the method also means that some level of optimisation or normalisation of the information will occur. This invites comparisons with databases or knowledge-bases, but a database system would first model its environment or datasets and then populate the database with instance values. The concept base deals with a more uncertain environment and therefore cannot fully model it beforehand. The model itself therefore evolves over time. Similar to databases, it also needs a good indexing system, where the construction process provides memory and indexing structures. These allow more complex concepts to be automatically created, stored and retrieved, possibly as part of a more cognitive model. There are also some arguments, or more abstract ideas, for merging physical-world laws into these automatic processes. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 31,578 |
2008.07701 | Hidden order in online extremism and its disruption by nudging collective chemistry | We show that the eclectic "Boogaloo" extremist movement that is now rising to prominence in the U.S. has a hidden online mathematical order that is identical to ISIS during its early development, despite their stark ideological, geographical and cultural differences. The evolution of each across scales follows a single shockwave equation that accounts for individual heterogeneity in online interactions. This equation predicts how to disrupt the onset and 'flatten the curve' of such online extremism by nudging its collective chemistry. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 192,190 |
2402.14098 | Intriguing Properties of Modern GANs | Modern GANs achieve remarkable performance in terms of generating realistic and diverse samples. This has led many to believe that ``GANs capture the training data manifold''. In this work we show that this interpretation is wrong. We empirically show that the manifold learned by modern GANs does not fit the training distribution: specifically the manifold does not pass through the training examples and passes closer to out-of-distribution images than to in-distribution images. We also investigate the distribution over images implied by the prior over the latent codes and study whether modern GANs learn a density that approximates the training distribution. Surprisingly, we find that the learned density is very far from the data distribution and that GANs tend to assign higher density to out-of-distribution images. Finally, we demonstrate that the set of images used to train modern GANs are often not part of the typical set described by the GANs' distribution. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 431,529 |