Dataset schema (one row per arXiv paper):
  id: string (length 9–16)
  title: string (length 4–278)
  abstract: string (length 3–4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
  __index_level_0__: int64 (range 0–541k)

Each record below gives the id, title, and abstract, followed by the category flags that are true for that paper (all unlisted flags are false) and the row index.
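For readers who want to work with rows of this shape, here is a minimal loading sketch in Python. It assumes the table has been exported to Parquet; the file name "arxiv_multilabel.parquet" is a placeholder, not part of the dataset, and the column names follow the schema above.

```python
# Minimal sketch: collapse the 18 boolean category columns into a list of
# active labels per row, mirroring the "Labels:" lines in the records below.
import pandas as pd

CATEGORY_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

df = pd.read_parquet("arxiv_multilabel.parquet")  # placeholder path

# Names of the columns that are True for each row.
df["labels"] = df[CATEGORY_COLS].apply(
    lambda row: [c for c in CATEGORY_COLS if row[c]], axis=1
)

print(df[["id", "title", "labels"]].head())
```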
2409.18169
Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey
Recent research demonstrates that the nascent fine-tuning-as-a-service business model exposes serious safety concerns -- fine-tuning on a few harmful data points uploaded by users can compromise the safety alignment of the model. The attack, known as the harmful fine-tuning attack, has attracted broad research interest in the community. However, as the attack is still new, \textbf{we observe that there are general misunderstandings within the research community.} To clear up these misunderstandings, this paper provides a comprehensive overview of three aspects of harmful fine-tuning: attack settings, defense design, and evaluation methodology. Specifically, we first present the threat model of the problem and introduce the harmful fine-tuning attack and its variants. Then we systematically survey the existing literature on attacks, defenses, and mechanism analysis of the problem. Finally, we introduce the evaluation methodology and outline future research directions that might contribute to the development of the field. Additionally, we present a list of questions of interest, which may be useful to refer to when reviewers question the realism of the experiment/attack/defense settings during peer review. A curated list of relevant papers is maintained and made accessible at: https://github.com/git-disl/awesome_LLM-harmful-fine-tuning-papers.
Labels: cs.AI, cs.LG, cs.CR
__index_level_0__: 492,133
2306.12545
Neural Multigrid Memory For Computational Fluid Dynamics
Turbulent flow simulation plays a crucial role in various applications, including aircraft and ship design, industrial process optimization, and weather prediction. In this paper, we propose an advanced data-driven method for simulating turbulent flow, representing a significant improvement over existing approaches. Our methodology combines the strengths of Video Prediction Transformer (VPTR) (Ye & Bilodeau, 2022) and Multigrid Architecture (MgConv, MgResnet) (Ke et al., 2017). VPTR excels in capturing complex spatiotemporal dependencies and handling large input data, making it a promising choice for turbulent flow prediction. Meanwhile, Multigrid Architecture utilizes multiple grids with different resolutions to capture the multiscale nature of turbulent flows, resulting in more accurate and efficient simulations. Through our experiments, we demonstrate the effectiveness of our proposed approach, named MGxTransformer, in accurately predicting velocity, temperature, and turbulence intensity for incompressible turbulent flows across various geometries and flow conditions. Our results exhibit superior accuracy compared to other baselines, while maintaining computational efficiency. Our implementation in PyTorch is available publicly at https://github.com/Combi2k2/MG-Turbulent-Flow
Labels: cs.LG
__index_level_0__: 374,968
2409.17335
Non-asymptotic Convergence of Training Transformers for Next-token Prediction
Transformers have achieved extraordinary success in modern machine learning due to their excellent ability to handle sequential data, especially in next-token prediction (NTP) tasks. However, the theoretical understanding of their performance in NTP is limited, with existing studies focusing mainly on asymptotic performance. This paper provides a fine-grained non-asymptotic analysis of the training dynamics of a one-layer transformer consisting of a self-attention module followed by a feed-forward layer. We first characterize the essential structural properties of training datasets for NTP using a mathematical framework based on partial orders. Then, we design a two-stage training algorithm, where the pre-processing stage for training the feed-forward layer and the main stage for training the attention layer exhibit fast convergence performance. Specifically, both layers converge sub-linearly to the direction of their corresponding max-margin solutions. We also show that the cross-entropy loss enjoys a linear convergence rate. Furthermore, we show that the trained transformer presents non-trivial prediction ability with dataset shift, which sheds light on the remarkable generalization performance of transformers. Our analysis technique involves the development of novel properties on the attention gradient and further in-depth analysis of how these properties contribute to the convergence of the training process. Our experiments further validate our theoretical findings.
Labels: cs.LG
__index_level_0__: 491,745
1807.08905
Pilot Spoofing Attack by Multiple Eavesdroppers
In this paper, we investigate the design of a pilot spoofing attack (PSA) carried out by multiple single-antenna eavesdroppers (Eves) in a downlink time-division duplex (TDD) system, where a multiple-antenna base station (BS) transmits confidential information to a single-antenna legitimate user (LU). During the uplink channel training phase, multiple Eves collaboratively impair the channel acquisition of the legitimate link, aiming at maximizing the wiretapping signal-to-noise ratio (SNR) in the subsequent downlink data transmission phase. Two different scenarios are investigated: (1) the BS is unaware of the PSA, and (2) the BS attempts to detect the presence of the PSA. For both scenarios, we formulate wiretapping SNR maximization problems. For the second scenario, we also investigate the probability of successful detection and constrain it to remain below a pre-designed threshold. The two resulting optimization problems can be unified into a more general non-convex optimization problem, and we propose an efficient algorithm based on the minorization-maximization (MM) method and the alternating direction method of multipliers (ADMM) to solve it. The proposed MM-ADMM algorithm is shown to converge to a stationary point of the general problem. In addition, we propose a semidefinite relaxation (SDR) method as a benchmark to evaluate the efficiency of the MM-ADMM algorithm. Numerical results show that the MM-ADMM algorithm achieves near-optimal performance and is computationally more efficient than the SDR-based method.
Labels: cs.IT
__index_level_0__: 103,623
2104.10834
DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime Semantic Segmentation
Semantic segmentation of nighttime images plays an equally important role as that of daytime images in autonomous driving, but the former is much more challenging due to poor illumination and arduous human annotation. In this paper, we propose a novel domain adaptation network (DANNet) for nighttime semantic segmentation without using labeled nighttime image data. It employs adversarial training with a labeled daytime dataset and an unlabeled dataset that contains coarsely aligned day-night image pairs. Specifically, for the unlabeled day-night image pairs, we use the pixel-level predictions of static object categories on a daytime image as pseudo supervision to segment its counterpart nighttime image. We further design a re-weighting strategy to handle the inaccuracy caused by misalignment between day-night image pairs and wrong predictions of daytime images, as well as to boost the prediction accuracy of small objects. The proposed DANNet is the first one-stage adaptation framework for nighttime semantic segmentation, which does not train additional day-night image transfer models as a separate pre-processing stage. Extensive experiments on the Dark Zurich and Nighttime Driving datasets show that our method achieves state-of-the-art performance for nighttime semantic segmentation.
Labels: cs.CV
__index_level_0__: 231,730
1904.03870
Streamlined Dense Video Captioning
Dense video captioning is an extremely challenging task since accurate and coherent description of events in a video requires holistic understanding of video contents as well as contextual reasoning of individual events. Most existing approaches handle this problem by first detecting event proposals from a video and then captioning on a subset of the proposals. As a result, the generated sentences are prone to be redundant or inconsistent since they fail to consider temporal dependency between events. To tackle this challenge, we propose a novel dense video captioning framework, which models temporal dependency across events in a video explicitly and leverages visual and linguistic context from prior events for coherent storytelling. This objective is achieved by 1) integrating an event sequence generation network to select a sequence of event proposals adaptively, and 2) feeding the sequence of event proposals to our sequential video captioning network, which is trained by reinforcement learning with two-level rewards at both event and episode levels for better context modeling. The proposed technique achieves outstanding performances on ActivityNet Captions dataset in most metrics.
Labels: cs.CV
__index_level_0__: 126,866
2112.04177
VISOLO: Grid-Based Space-Time Aggregation for Efficient Online Video Instance Segmentation
For online video instance segmentation (VIS), fully utilizing the information from previous frames in an efficient manner is essential for real-time applications. Most previous methods follow a two-stage approach requiring additional computations such as RPN and RoIAlign, and do not fully exploit the available information in the video for all subtasks in VIS. In this paper, we propose a novel single-stage framework for online VIS built based on the grid structured feature representation. The grid-based features allow us to employ fully convolutional networks for real-time processing, and also to easily reuse and share features within different components. We also introduce cooperatively operating modules that aggregate information from available frames, in order to enrich the features for all subtasks in VIS. Our design fully takes advantage of previous information in a grid form for all tasks in VIS in an efficient way, and we achieved the new state-of-the-art accuracy (38.6 AP and 36.9 AP) and speed (40.0 FPS) on YouTube-VIS 2019 and 2021 datasets among online VIS methods. The code is available at https://github.com/SuHoHan95/VISOLO.
Labels: cs.CV
__index_level_0__: 270,445
2302.06885
Improving Interpretability of Deep Sequential Knowledge Tracing Models with Question-centric Cognitive Representations
Knowledge tracing (KT) is a crucial technique to predict students' future performance by observing their historical learning processes. Due to the powerful representation ability of deep neural networks, remarkable progress has been made by using deep learning techniques to solve the KT problem. The majority of existing approaches rely on the \emph{homogeneous question} assumption that questions have equivalent contributions if they share the same set of knowledge components. Unfortunately, this assumption is inaccurate in real-world educational scenarios. Furthermore, it is very challenging to interpret the prediction results from the existing deep learning based KT models. Therefore, in this paper, we present QIKT, a question-centric interpretable KT model to address the above challenges. The proposed QIKT approach explicitly models students' knowledge state variations at a fine-grained level with question-sensitive cognitive representations that are jointly learned from a question-centric knowledge acquisition module and a question-centric problem solving module. Meanwhile, the QIKT utilizes an item response theory based prediction layer to generate interpretable prediction results. The proposed QIKT model is evaluated on three public real-world educational datasets. The results demonstrate that our approach is superior on the KT prediction task, and it outperforms a wide range of deep learning based KT models in terms of prediction accuracy with better model interpretability. To encourage reproducible results, we have provided all the datasets and code at \url{https://pykt.org/}.
Labels: cs.AI, cs.LG
__index_level_0__: 345,566
1806.02660
Analyzing Traffic Delay at Unmanaged Intersections
At an unmanaged intersection, it is important to understand how much traffic delay may be caused as a result of microscopic vehicle interactions. Conventional traffic simulations that explicitly track these interactions are time-consuming. Prior work introduced an analytical traffic model for unmanaged intersections. The traffic delay at the intersection is modeled as an event-driven stochastic process, whose dynamics encode microscopic vehicle interactions. This paper studies the traffic delay in a two-lane intersection using the model. We perform rigorous analyses concerning the distribution of traffic delay under different scenarios. We then discuss the relationships between traffic delay and multiple factors such as traffic flow density, unevenness of traffic flows, temporal gaps between two consecutive vehicles, and the passing order.
Labels: cs.MA
__index_level_0__: 99,830
2404.08264
Guided Masked Self-Distillation Modeling for Distributed Multimedia Sensor Event Analysis
Observations with distributed sensors are essential in analyzing a series of human and machine activities (referred to as 'events' in this paper) in complex and extensive real-world environments. This is because the information obtained from a single sensor is often missing or fragmented in such an environment; observations from multiple locations and modalities should be integrated to analyze events comprehensively. However, a learning method has yet to be established to extract joint representations that effectively combine such distributed observations. Therefore, we propose Guided Masked sELf-Distillation modeling (Guided-MELD) for inter-sensor relationship modeling. The basic idea of Guided-MELD is to learn to supplement the information from the masked sensor with information from other sensors needed to detect the event. Guided-MELD is expected to enable the system to effectively distill the fragmented or redundant target event information obtained by the sensors without being overly dependent on any specific sensors. To validate the effectiveness of the proposed method in novel tasks of distributed multimedia sensor event analysis, we recorded two new datasets that fit the problem setting: MM-Store and MM-Office. These datasets consist of human activities in a convenience store and an office, recorded using distributed cameras and microphones. Experimental results on these datasets show that the proposed Guided-MELD improves event tagging and detection performance and outperforms conventional inter-sensor relationship modeling methods. Furthermore, the proposed method performed robustly even when sensors were reduced.
Labels: cs.CV, Other
__index_level_0__: 446,171
2205.12394
MaskEval: Weighted MLM-Based Evaluation for Text Summarization and Simplification
In text summarization and simplification, system outputs must be evaluated along multiple dimensions such as relevance, factual consistency, fluency, and grammaticality, and a wide range of possible outputs could be of high quality. These properties make the development of an adaptable, reference-less evaluation metric both necessary and challenging. We introduce MaskEval, a reference-less metric for text summarization and simplification that operates by performing masked language modeling (MLM) on the concatenation of the candidate and the source texts. It features an attention-like weighting mechanism to modulate the relative importance of each MLM step, which crucially allows it to be adapted to evaluate different quality dimensions. We demonstrate its effectiveness on English summarization and simplification in terms of correlations with human judgments, and explore transfer scenarios between the two tasks.
Labels: cs.CL
__index_level_0__: 298,510
2412.20632
EVOLVE: Emotion and Visual Output Learning via LLM Evaluation
Human acceptance of social robots is greatly affected by empathy and perceived understanding. This necessitates accurate and flexible responses to various input data from the user. While such systems can become increasingly complex as more states or response types are included, new research in the application of large language models to human-robot interaction has allowed for more streamlined perception and reaction pipelines. LLM-selected actions and emotional expressions can help reinforce the realism of displayed empathy and allow for improved communication between the robot and user. Beyond portraying empathy in spoken or written responses, this shows the possibilities of using LLMs in actuated, real-world scenarios. In this work we extend research in LLM-driven nonverbal behavior for social robots by considering more open-ended emotional response selection leveraging new advances in vision-language models, along with emotionally aligned motion and color pattern selections that strengthen conveyance of meaning and empathy.
Labels: cs.HC, cs.RO
__index_level_0__: 521,291
1810.01270
META-DES: A Dynamic Ensemble Selection Framework using Meta-Learning
Dynamic ensemble selection systems work by estimating the level of competence of each classifier from a pool of classifiers. Only the most competent ones are selected to classify a given test sample. This is achieved by defining a criterion to measure the level of competence of a base classifier, such as its accuracy in local regions of the feature space around the query instance. However, using only one criterion about the behavior of a base classifier is not sufficient to accurately estimate its level of competence. In this paper, we present a novel dynamic ensemble selection framework using meta-learning. We propose five distinct sets of meta-features, each one corresponding to a different criterion to measure the level of competence of a classifier for the classification of input samples. The meta-features are extracted from the training data and used to train a meta-classifier to predict whether or not a base classifier is competent enough to classify an input instance. During the generalization phase, the meta-features are extracted from the query instance and passed down as input to the meta-classifier. The meta-classifier estimates whether a base classifier is competent enough to be added to the ensemble. Experiments are conducted over several small-sample-size classification problems, i.e., problems with a high degree of uncertainty due to the lack of training data. Experimental results show the proposed meta-learning framework greatly improves classification accuracy when compared against current state-of-the-art dynamic ensemble selection techniques.
Labels: cs.AI, cs.LG
__index_level_0__: 109,365
2008.04581
Using Network Embeddings for Improving Network Alignment
Network (or graph) alignment algorithms aim to reveal structural similarities among graphs. In particular, local network alignment algorithms (LNAs) find local regions of similarity between two or more networks. Such algorithms are in general based on a set of seed nodes that are used to grow an alignment. Almost all LNAs use as seed nodes a set of vertices chosen using context information (e.g., a set of biologically related nodes in biological network alignment), and this may cause a bias or a data-circularity problem. More recently, we demonstrated that the use of topological information in the choice of seed nodes may improve the quality of the alignments. We used some common approaches based on global alignment algorithms for capturing topological similarity among nodes. In parallel, it has been demonstrated that network embedding methods (or representation learning) may capture the structural similarity among nodes better than other methods. Therefore, we propose to use network embeddings to learn structural similarity among nodes and to use such similarity to improve LNAs, extending our previous algorithms. We define a framework for LNA.
Labels: cs.SI
__index_level_0__: 191,269
1803.04836
3D Video Quality Assessment
A key factor in designing 3D systems is to understand how different visual cues and distortions affect the perceptual quality of 3D video. The ultimate way to assess video quality is through subjective tests. However, subjective evaluation is time consuming, expensive, and in most cases not even possible. An alternative solution is objective quality metrics, which attempt to model the Human Visual System (HVS) in order to assess the perceptual quality. The potential of 3D technology to significantly improve the immersiveness of video content has been hampered by the difficulty of objectively assessing Quality of Experience (QoE). A no-reference (NR) objective 3D quality metric, which could help determine capturing parameters and improve playback perceptual quality, would be welcomed by camera and display manufacturers. Network providers would embrace a full-reference (FR) 3D quality metric, as they could use it to ensure efficient QoE-based resource management during compression and Quality of Service (QoS) during transmission.
Labels: cs.CV
__index_level_0__: 92,526
1910.09644
ConEx: Efficient Exploration of Big-Data System Configurations for Better Performance
Configuration space complexity makes big-data software systems hard to configure well. Consider Hadoop: with over nine hundred parameters, developers often just use the default configurations provided with Hadoop distributions. The opportunity costs in lost performance are significant. Popular learning-based approaches to auto-tuning software do not scale well for big-data systems because of the high cost of collecting training data. We present a new method based on a combination of Evolutionary Markov Chain Monte Carlo (EMCMC) sampling and cost reduction techniques to cost-effectively find better-performing configurations for big data systems. For cost reduction, we developed and experimentally tested and validated two approaches: using scaled-down big data jobs as proxies for the objective function for larger jobs, and using a dynamic job similarity measure to infer that results obtained for one kind of big data problem will work well for similar problems. Our experimental results suggest that our approach outperforms competing approaches based on random sampling, basic genetic algorithms (GA), and predictive model learning, and that it has strong potential to significantly and cost-effectively improve the performance of big data systems.
Labels: cs.LG, cs.SY, Other
__index_level_0__: 150,250
1704.02819
Flags of almost affine codes
We describe a two-party wire-tap channel of type II in the framework of almost affine codes. Its cryptological performance is related to some relative profiles of a pair of almost affine codes. These profiles are analogues of relative generalized Hamming weights in the linear case.
Labels: cs.IT
__index_level_0__: 71,519
2411.11693
From Spectra to Geography: Intelligent Mapping of RRUFF Mineral Data
Accurately determining the geographic origin of mineral samples is pivotal for applications in geology, mineralogy, and material science. Leveraging the comprehensive Raman spectral data from the RRUFF database, this study introduces a novel machine learning framework aimed at geolocating mineral specimens at the country level. We employ a one-dimensional ConvNeXt1D neural network architecture to classify mineral spectra based solely on their spectral signatures. The processed dataset comprises over 32,900 mineral samples, predominantly natural, spanning 101 countries. Through five-fold cross-validation, the ConvNeXt1D model achieved an impressive average classification accuracy of 93%, demonstrating its efficacy in capturing geospatial patterns inherent in Raman spectra.
Labels: cs.CV
__index_level_0__: 509,142
2209.12278
Neural inhibition during speech planning contributes to contrastive hyperarticulation
Previous work has demonstrated that words are hyperarticulated on dimensions of speech that differentiate them from a minimal pair competitor. This phenomenon has been termed contrastive hyperarticulation (CH). We present a dynamic neural field (DNF) model of voice onset time (VOT) planning that derives CH from an inhibitory influence of the minimal pair competitor during planning. We test some predictions of the model with a novel experiment investigating CH of voiceless stop consonant VOT in pseudowords. The results demonstrate a CH effect in pseudowords, consistent with a basis for the effect in the real-time planning and production of speech. The scope and magnitude of CH in pseudowords was reduced compared to CH in real words, consistent with a role for interactive activation between lexical and phonological levels of planning. We discuss the potential of our model to unify an apparently disparate set of phenomena, from CH to phonological neighborhood effects to phonetic trace effects in speech errors.
Labels: cs.CL
__index_level_0__: 319,475
2301.12129
Decentralized Energy Market Integrating Carbon Allowance Trade and Uncertainty Balance in Energy Communities
With the sustained attention on carbon neutrality, the personal carbon trading (PCT) scheme has been embraced as an auspicious paradigm for scaling down carbon emissions. To facilitate the simultaneous clearance of energy and carbon allowance inside the energy community while hedging against uncertainty, a joint trading framework is proposed in this article. The energy trading is implemented in a peer-to-peer (P2P) manner without the intervention of a central operator, and the uncertainty trading is materialized through procuring reserve of conventional generators and flexibility of users. Under the PCT scheme, carbon allowance is transacted via a sharing mechanism. Possible excessive carbon emissions due to uncertainty balance are tackled by obliging renewable agents to procure sufficient carbon allowances, following the consumption responsibility principle. A two-stage iterative method consisting of tightening McCormick envelope and alternating direction method of multipliers (ADMM) is devised to transform the model into a mixed-integer second-order cone program (MISOCP) and to allow for a fully decentralized market-clearing procedure. Numerical results have validated the effectiveness of the proposed market model.
Labels: cs.SY
__index_level_0__: 342,406
2002.01441
A Generalized Flow for B2B Sales Predictive Modeling: An Azure Machine Learning Approach
Predicting the outcome of sales opportunities is a core part of successful business management. Conventionally, making this prediction has relied mostly on subjective human evaluations in the process of sales decision making. In this paper, we addressed the problem of forecasting the outcome of business to business (B2B) sales by proposing a thorough data-driven Machine Learning (ML) workflow on a cloud-based computing platform: Microsoft Azure Machine Learning Service (Azure ML). This workflow consists of two pipelines: (1) An ML pipeline to train probabilistic predictive models on the historical sales opportunities data. In this pipeline, data is enriched with an extensive feature enhancement step and then used to train an ensemble of ML classification models in parallel. (2) A prediction pipeline to utilize the trained ML model and infer the likelihood of winning new sales opportunities along with calculating optimal decision boundaries. The effectiveness of the proposed workflow was evaluated on a real sales dataset of a major global B2B consulting firm. Our results implied that decision-making based on the ML predictions is more accurate and brings a higher monetary value.
Labels: cs.LG
__index_level_0__: 162,650
1909.12778
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks
Deep Neural Networks (DNNs) are powerful but computationally expensive and memory intensive, thus impeding their practical usage on resource-constrained front-end devices. DNN pruning is an approach for deep model compression, which aims at eliminating some parameters with tolerable performance degradation. In this paper, we propose a novel momentum-SGD-based optimization method to reduce the network complexity by on-the-fly pruning. Concretely, given a global compression ratio, we categorize all the parameters into two parts at each training iteration, which are updated using different rules. In this way, we gradually zero out the redundant parameters, as we update them using only the ordinary weight decay but no gradients derived from the objective function. As a departure from prior methods that require heavy human effort to tune the layer-wise sparsity ratios, prune by solving complicated non-differentiable problems, or fine-tune the model after pruning, our method is characterized by 1) global compression that automatically finds the appropriate per-layer sparsity ratios; 2) end-to-end training; 3) no need for a time-consuming re-training process after pruning; and 4) superior capability to find better winning tickets which have won the initialization lottery.
Labels: cs.LG, cs.CV
__index_level_0__: 147,215
2402.01605
A Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation
The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is only guaranteed if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system, and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
Labels: cs.SY
__index_level_0__: 426,095
2401.04578
Effective pruning of web-scale datasets based on complexity of concept clusters
Utilizing massive web-scale datasets has led to unprecedented performance gains in machine learning models, but also imposes outlandish compute requirements for their training. In order to improve training and data efficiency, we here push the limits of pruning large-scale multimodal datasets for training CLIP-style models. Today's most effective pruning method on ImageNet clusters data samples into separate concepts according to their embedding and prunes away the most prototypical samples. We scale this approach to LAION and improve it by noting that the pruning rate should be concept-specific and adapted to the complexity of the concept. Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of regular training. By filtering from the LAION dataset, we find that training on a smaller set of high-quality data can lead to higher performance with significantly lower training costs. More specifically, we are able to outperform the LAION-trained OpenCLIP-ViT-B32 model on ImageNet zero-shot accuracy by 1.1p.p. while only using 27.7% of the data and training compute. Despite a strong reduction in training cost, we also see improvements on ImageNet dist. shifts, retrieval tasks and VTAB. On the DataComp Medium benchmark, we achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.
Labels: cs.CV
__index_level_0__: 420,476
2111.11023
Multi-Channel Multi-Speaker ASR Using 3D Spatial Feature
Automatic speech recognition (ASR) of multi-channel multi-speaker overlapped speech remains one of the most challenging tasks for the speech community. In this paper, we look into this challenge by utilizing the location information of target speakers in 3D space for the first time. To explore the strength of the proposed 3D spatial feature, two paradigms are investigated: 1) a pipelined system with a multi-channel speech separation module followed by a state-of-the-art single-channel ASR module; 2) an "All-In-One" model where the 3D spatial feature is directly used as an input to the ASR system without explicit separation modules. Both of them are fully differentiable and can be back-propagated end-to-end. We test them on simulated overlapped speech and real recordings. Experimental results show that 1) the proposed All-In-One model achieved a comparable error rate to the pipelined system while reducing the inference time by half; 2) the proposed 3D spatial feature significantly outperformed (31\% CERR) all previous works using 1D directional information in both paradigms.
Labels: cs.HC, cs.SD, cs.LG, cs.CL
__index_level_0__: 267,524
2305.08802
Multi-Cluster Aggregative Games: A Linearly Convergent Nash Equilibrium Seeking Algorithm and its Applications in Energy Management
We propose a type of non-cooperative game, termed the multi-cluster aggregative game, to model large-scale and hierarchical multi-agent systems. The game is composed of clusters as players, where each cluster consists of collaborative agents with cost functions depending on their own decisions and the aggregate quantity of each participating cluster. This novel game model is motivated by decision-making problems in competitive-cooperative network systems with large-scale nodes, such as the Energy Internet. To address challenges arising in seeking Nash equilibria for such network systems, we develop an algorithm with a hierarchical communication topology, a hybrid of distributed and semi-decentralized protocols. The upper level consists of cluster coordinators estimating the aggregate quantities with local communications, while the lower level consists of cluster subnets, each composed of its coordinator and agents aiming to track the gradient of the corresponding cluster. In particular, the clusters exchange the aggregate quantities instead of their decisions to relieve the communication burden. Under strongly monotone and mildly Lipschitz continuous assumptions, we rigorously prove that the algorithm linearly converges to a Nash equilibrium with a fixed step size. We present applications in the context of the Energy Internet. Furthermore, the numerical results verify the effectiveness of the algorithm.
Labels: cs.MA
__index_level_0__: 364,413
2308.00703
A Backend Platform for Supporting the Reproducibility of Computational Experiments
In recent years, the research community has raised serious questions about the reproducibility of scientific work. In particular, since many studies include some kind of computing work, reproducibility is also a technological challenge, not only in computer science but in most research domains. Replicability and computational reproducibility are not easy to achieve, not only because researchers have diverse proficiency in computing technologies, but also because of the variety of computational environments that can be used. Indeed, it is challenging to recreate the same environment using the same frameworks, code, data sources, programming languages, dependencies, and so on. In this work, we propose an Integrated Development Environment that allows the sharing, configuration, packaging, and execution of an experiment by setting the code and data used and defining the programming languages, code, dependencies, databases, or commands to execute, so as to achieve consistent results for each experiment. After the initial creation and configuration, the experiment can be executed any number of times, always producing exactly the same results. Furthermore, it allows the experiment to be executed with a different associated dataset, making it possible to verify the reproducibility and replicability of the results. This enables the creation of a reproducible pack that can be re-executed by anyone on any other computer; our platform aims to let researchers in any field create such a reproducibility package for their science. To evaluate our platform, we used it to reproduce 25 experiments extracted from published papers. We were able to successfully reproduce 20 (80%) of these experiments, achieving the results reported in those works with minimum effort, thus showing that our approach is effective.
Labels: cs.CE, Other
__index_level_0__: 383,021
2203.00663
Iterative Residual Policy for Goal-Conditioned Dynamic Manipulation of Deformable Objects
This paper tackles the task of goal-conditioned dynamic manipulation of deformable objects. This task is highly challenging due to its complex dynamics (introduced by object deformation and high-speed action) and strict task requirements (defined by a precise goal specification). To address these challenges, we present Iterative Residual Policy (IRP), a general learning framework applicable to repeatable tasks with complex dynamics. IRP learns an implicit policy via delta dynamics -- instead of modeling the entire dynamical system and inferring actions from that model, IRP learns delta dynamics that predict the effects of delta action on the previously-observed trajectory. When combined with adaptive action sampling, the system can quickly optimize its actions online to reach a specified goal. We demonstrate the effectiveness of IRP on two tasks: whipping a rope to hit a target point and swinging a cloth to reach a target pose. Despite being trained only in simulation on a fixed robot setup, IRP is able to efficiently generalize to noisy real-world dynamics, new objects with unseen physical properties, and even different robot hardware embodiments, demonstrating its excellent generalization capability relative to alternative approaches. Video is available at https://youtu.be/7h3SZ3La-oA
Labels: cs.RO
__index_level_0__: 283,080
2412.05313
{\lambda}: A Benchmark for Data-Efficiency in Long-Horizon Indoor Mobile Manipulation Robotics
Efficiently learning and executing long-horizon mobile manipulation (MoMa) tasks is crucial for advancing robotics in household and workplace settings. However, current MoMa models are data-inefficient, underscoring the need for improved models that require realistic-sized benchmarks to evaluate their efficiency, which do not exist. To address this, we introduce the LAMBDA ({\lambda}) benchmark (Long-horizon Actions for Mobile-manipulation Benchmarking of Directed Activities), which evaluates the data efficiency of models on language-conditioned, long-horizon, multi-room, multi-floor, pick-and-place tasks using a dataset of manageable size, more feasible for collection. The benchmark includes 571 human-collected demonstrations that provide realism and diversity in simulated and real-world settings. Unlike planner-generated data, these trajectories offer natural variability and replay-verifiability, ensuring robust learning and evaluation. We benchmark several models, including learning-based models and a neuro-symbolic modular approach combining foundation models with task and motion planning. Learning-based models show suboptimal success rates, even when leveraging pretrained weights, underscoring significant data inefficiencies. However, the neuro-symbolic approach performs significantly better while being more data efficient. Findings highlight the need for more data-efficient learning-based MoMa approaches. {\lambda} addresses this gap by serving as a key benchmark for evaluating the data efficiency of those future models in handling household robotics tasks.
Labels: cs.AI, cs.LG, cs.RO
__index_level_0__: 514,782
1709.03424
Constant-Weight Array Codes
Binary constant-weight codes have been extensively studied, due both to their numerous applications and to their theoretical significance. In particular, constant-weight codes have been proposed for error correction in store-and-forward networks. In this paper, we introduce constant-weight array codes (CWACs), which offer a tradeoff between the rate gain of general constant-weight codes and the low decoding complexity of liftings. CWACs can either be used in the one-shot setting introduced earlier or in a multi-shot approach, where one codeword consists of several messages. The multi-shot approach generalizes the one-shot approach and hence allows for higher rate gains. We first give a construction of CWACs based on concatenation, which generalizes the traditional erasure codes, and also provide a decoding algorithm for these codes. Since CWACs can be viewed as a generalization of both binary constant-weight codes and nonrestricted Hamming metric codes, they provide an additional degree of freedom for the problems of determining the maximum cardinality of constant-weight codes and of nonrestricted Hamming metric codes. We then investigate their theoretical significance. We first generalize many classical bounds derived for Hamming metric codes or constant-weight codes to the CWAC framework. We finally relate the maximum cardinality of a CWAC to that of a constant-weight code, of a nonrestricted Hamming metric code, and of a spherical code.
Labels: cs.IT
__index_level_0__: 80,459
1208.4656
Capacity of Compound MIMO Gaussian Channels with Additive Uncertainty
This paper considers reliable communications over a multiple-input multiple-output (MIMO) Gaussian channel, where the channel matrix is within a bounded channel uncertainty region around a nominal channel matrix, i.e., an instance of the compound MIMO Gaussian channel. We study the optimal transmit covariance matrix design to achieve the capacity of compound MIMO Gaussian channels, where the channel uncertainty region is characterized by the spectral norm. This design problem is a challenging non-convex optimization problem. However, in this paper, we reveal that this problem has a hidden convexity property, which can be exploited to map the problem into a convex optimization problem. We first prove that the optimal transmit design is to diagonalize the nominal channel, and then show that the duality gap between the capacity of the compound MIMO Gaussian channel and the min-max channel capacity is zero, which proves the conjecture of Loyka and Charalambous (IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2048-2063, 2012). The key tools for showing these results are a new matrix determinant inequality and some unitarily invariant properties.
Labels: cs.IT
__index_level_0__: 18,225
2209.02200
Task-wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images
Arbitrary-oriented object detection (AOOD) has been widely applied to locate and classify objects with diverse orientations in remote sensing images. However, the inconsistent features for the localization and classification tasks in AOOD models may lead to ambiguity and low-quality object predictions, which constrains the detection performance. In this article, an AOOD method called task-wise sampling convolutions (TS-Conv) is proposed. TS-Conv adaptively samples task-wise features from respective sensitive regions and maps these features together in alignment to guide a dynamic label assignment for better predictions. Specifically, sampling positions of the localization convolution in TS-Conv are supervised by the oriented bounding box (OBB) prediction associated with spatial coordinates, while sampling positions and convolutional kernel of the classification convolution are designed to be adaptively adjusted according to different orientations for improving the orientation robustness of features. Furthermore, a dynamic task-consistent-aware label assignment (DTLA) strategy is developed to select optimal candidate positions and assign labels dynamically according to ranked task-aware scores obtained from TS-Conv. Extensive experiments on several public datasets covering multiple scenes, multimodal images, and multiple categories of objects demonstrate the effectiveness, scalability, and superior performance of the proposed TS-Conv.
Labels: cs.CV
__index_level_0__: 316,131
1906.03973
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles
It has been recently shown that the hidden variables of convolutional neural networks make for an efficient perceptual similarity metric that accurately predicts human judgment on relative image similarity assessment. First, we show that such learned perceptual similarity metrics (LPIPS) are susceptible to adversarial attacks that dramatically contradict human visual similarity judgment. While this is not surprising in light of neural networks' well-known weakness to adversarial perturbations, we proceed to show that self-ensembling with an infinite family of random transformations of the input --- a technique known not to render classification networks robust --- is enough to turn the metric robust against attack, while retaining predictive power on human judgments. Finally, we study the geometry imposed by our novel self-ensembled metric (E-LPIPS) on the space of natural images. We find evidence of "perceptual convexity" by showing that convex combinations of similar-looking images retain appearance, and that discrete geodesics yield meaningful frame interpolation and texture morphing, all without explicit correspondences.
Labels: cs.CV, cs.NE
__index_level_0__: 134,556
2408.02651
Can Reinforcement Learning Unlock the Hidden Dangers in Aligned Large Language Models?
Large Language Models (LLMs) have demonstrated impressive capabilities in natural language tasks, but their safety and morality remain contentious due to their training on internet text corpora. To address these concerns, alignment techniques have been developed to improve the public usability and safety of LLMs. Yet, the potential for generating harmful content through these models seems to persist. This paper explores the concept of jailbreaking LLMs: reversing their alignment through adversarial triggers. Previous methods, such as soft embedding prompts, manually crafted prompts, and gradient-based automatic prompts, have had limited success on black-box models due to their requirements for model access and their low variety of manually crafted prompts, which makes them susceptible to being blocked. This paper introduces a novel approach using reinforcement learning to optimize adversarial triggers, requiring only inference API access to the target model and a small surrogate model. Our method, which leverages a BERTScore-based reward function, enhances the transferability and effectiveness of adversarial triggers on new black-box models. We demonstrate that this approach improves the performance of adversarial triggers on a previously untested language model.
Labels: cs.AI, cs.CL, cs.CR
__index_level_0__: 478,704
1905.07470
A semantic-aided particle filter approach for AUV localization
This paper presents a novel approach to AUV localization, based on a semantic-aided particle filter. Particle filters have been used successfully for robot localization for many years. Most approaches are, however, based on geometric measurements, geometric information, and simulations. In recent years, more and more research effort has gone toward cognitive robotics, and the marine domain is no exception. Moving from signal to symbol therefore becomes paramount for more complex applications. This paper presents a contribution to the well-known area of underwater localization, incorporating semantic information. An extension to the standard particle filter approach is presented, based on semantic information about the environment. A comparison with the geometric approach shows the advantages of a semantic layer for successfully performing self-localization.
Labels: cs.AI, cs.RO
__index_level_0__: 131,241
1304.1527
Decision under Uncertainty
We derive axiomatically the probability function that should be used to make decisions given any form of underlying uncertainty.
Labels: cs.AI
__index_level_0__: 23,560
2104.01284
A GPU Implementation of a Look-Ahead Optimal Controller for Eco-Driving Based on Dynamic Programming
Predictive energy management of Connected and Automated Vehicles (CAVs), in particular those with multiple power sources, has the potential to significantly improve energy savings in real-world driving conditions. In particular, the eco-driving problem seeks to design optimal speed and power usage profiles based upon available information from connectivity and advanced mapping features to minimize the fuel consumption between two designated locations. In this work, the eco-driving problem is formulated as a three-state receding horizon optimal control problem and solved via Dynamic Programming (DP). The optimal solution, in terms of vehicle speed and battery State of Charge (SoC) trajectories, allows a connected and automated hybrid electric vehicle to intelligently pass the signalized intersections and minimize fuel consumption over a prescribed route. To enable real-time implementation, a parallel architecture of DP is proposed for an NVIDIA GPU with CUDA programming. Simulation results indicate that the proposed optimal controller delivers more than 15% fuel economy benefits compared to a baseline control strategy and that the solver time can be reduced by more than 90% by the parallel implementation when compared to a serial implementation.
Labels: cs.SY
__index_level_0__: 228,289
1301.7404
Resolving Conflicting Arguments under Uncertainties
Distributed knowledge-based applications in open domains rely on common sense information, which is bound to be uncertain and incomplete. To draw useful conclusions from ambiguous data, one must address the uncertainties and conflicts in a holistic view. No integrated framework is viable without an in-depth analysis of the conflicts incurred by uncertainties. In this paper, we give such an analysis and, based on the result, propose an integrated framework. Our framework extends definite argumentation theory to model uncertainty. It supports three views over conflicting and uncertain knowledge. Thus, knowledge engineers can draw different conclusions depending on the application context (i.e., view). We also give an illustrative example on strategic decision support to show the practical usefulness of our framework.
Labels: cs.AI
__index_level_0__: 21,637
2106.00050
Continual 3D Convolutional Neural Networks for Real-time Processing of Videos
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs), a new computational formulation of spatio-temporal 3D CNNs, in which videos are processed frame-by-frame rather than by clip. In online tasks demanding frame-wise predictions, Co3D CNNs dispense with the computational redundancies of regular 3D CNNs, namely the repeated convolutions over frames, which appear in overlapping clips. We show that Continual 3D CNNs can reuse preexisting 3D-CNN weights to reduce the per-prediction floating point operations (FLOPs) in proportion to the temporal receptive field while retaining similar memory requirements and accuracy. This is validated with multiple models on Kinetics-400 and Charades with remarkable results: CoX3D models attain state-of-the-art complexity/accuracy trade-offs on Kinetics-400 with 12.1-15.3x reductions of FLOPs and 2.3-3.8% improvements in accuracy compared to regular X3D models while reducing peak memory consumption by up to 48%. Moreover, we investigate the transient response of Co3D CNNs at start-up and perform extensive benchmarks of on-hardware processing characteristics for publicly available 3D CNNs.
Labels: cs.LG, cs.CV
__index_level_0__: 237,963
2502.05497
DeepThink: Aligning Language Models with Domain-Specific User Intents
Supervised fine-tuning with synthesized instructions has been a common practice for adapting LLMs to domain-specific QA tasks. However, the synthesized instructions deviate from real user questions and expected answers. This study proposes a novel framework called DeepThink to generate high-quality instructions. DeepThink first generates a few seed questions to mimic actual user questions, simulates conversations to uncover the hidden user needs, and refines the answer by conversational contexts and the retrieved documents for more comprehensive answers. Experiments demonstrate that DeepThink achieves an average performance improvement of 7.92% compared to a GPT-4-turbo+RAG-based assistant on the real user test set in the advertising domain across dimensions such as relevance, completeness, clarity, accuracy, and actionability.
Labels: cs.CL
__index_level_0__: 531,641
1901.01347
Learning to Remember More with Less Memorization
Memory-augmented neural networks consisting of a neural controller and an external memory have shown potential for long-term sequential learning. Current RAM-like memory models access memory at every timestep, and thus do not effectively leverage the short-term memory held in the controller. We hypothesize that this writing scheme is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes the bound. The proposed solution, dubbed Uniform Writing, is proved to be optimal under the assumption of equal timestep contributions. To relax this assumption, we introduce modifications to the original solution, resulting in a solution termed Cached Uniform Writing. This method aims to balance maximizing memorization and forgetting via overwriting mechanisms. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, achieving state-of-the-art results in various sequential modeling tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
117,954
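The writing schedule at the heart of the Uniform Writing abstract above is simple enough to sketch. Below is a minimal, hypothetical Python illustration (not the paper's code): the controller updates its short-term state every step but commits to external memory only at K evenly spaced timesteps, which is the equal-contribution optimum the abstract refers to; the toy update rule and all names are our own.

```python
import numpy as np

def uniform_write_steps(T, K):
    """K evenly spaced write timesteps out of T (the Uniform Writing schedule)."""
    return np.unique(np.linspace(0, T - 1, K).round().astype(int))

def run_sequence(inputs, K):
    """Toy recurrent loop: update the controller state every step,
    but write to external memory only at the scheduled steps."""
    T, d = inputs.shape
    schedule = set(uniform_write_steps(T, K).tolist())
    h = np.zeros(d)                       # controller's short-term state
    memory = []                           # external memory slots
    for t in range(T):
        h = np.tanh(0.5 * h + inputs[t])  # stand-in for the real controller
        if t in schedule:
            memory.append(h.copy())       # commit; skipped steps stay local
    return np.stack(memory)

mem = run_sequence(np.random.randn(50, 8), K=5)
print(mem.shape)  # (5, 8): five writes instead of fifty
```

Cached Uniform Writing would additionally buffer the skipped steps and apply an overwriting rule at write time; we omit that here.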
2112.14705
Lane Change Decision-Making through Deep Reinforcement Learning
Due to the complexity and volatility of the traffic environment, decision-making in autonomous driving is a significantly hard problem. In this project, we use a Deep Q-Network, along with rule-based constraints, to make lane-changing decisions. A safe and efficient lane-change behavior may be obtained by combining high-level lateral decision-making with low-level rule-based trajectory monitoring. The agent is expected to perform appropriate lane-change maneuvers in a realistic Udacity simulator after being trained for a total of 100 episodes. The results show that the rule-based DQN performs better than the plain DQN method, achieving a safety rate of 0.8 and an average speed of 47 MPH.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
273,591
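The abstract above does not specify how the rule-based constraints and the DQN are combined; one standard pattern, sketched below purely as an assumption, is action masking, where hard safety rules veto infeasible lane changes before the greedy argmax over Q-values. The thresholds and state fields (`left_gap`, etc.) are illustrative, not taken from the paper.

```python
import numpy as np

ACTIONS = ["keep_lane", "change_left", "change_right"]

def rule_mask(state):
    """Hard safety rules: a lane change is admissible only if the target
    lane exists and the gap to the nearest vehicle there is large enough."""
    return np.array([
        True,  # keeping the lane is always admissible
        state["left_lane_exists"] and state["left_gap"] > 15.0,
        state["right_lane_exists"] and state["right_gap"] > 15.0,
    ])

def constrained_greedy_action(q_values, state):
    """Greedy action restricted to rule-admissible choices."""
    q = np.where(rule_mask(state), q_values, -np.inf)
    return ACTIONS[int(np.argmax(q))]

state = {"left_lane_exists": True, "left_gap": 8.0,
         "right_lane_exists": True, "right_gap": 30.0}
print(constrained_greedy_action(np.array([0.2, 1.5, 0.9]), state))
# -> change_right: the higher-valued left change is vetoed by the gap rule
```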
1808.07243
Controversy Rules - Discovering Regions Where Classifiers (Dis-)Agree Exceptionally
Finding regions for which there is higher controversy among different classifiers is insightful with regard to the domain and our models. Such an evaluation can falsify or confirm assumptions, or bring unknown phenomena to our attention. The present work describes an algorithm, based on the Exceptional Model Mining framework, that enables this kind of investigation. We explore several public datasets and show the usefulness of this approach in classification tasks. We report several interesting observations about these well-explored datasets, some of which are common knowledge and others that, to the best of our knowledge, have not been reported before.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
105,696
2005.10957
Classification of Epithelial Ovarian Carcinoma Whole-Slide Pathology Images Using Deep Transfer Learning
Ovarian cancer is the most lethal cancer of the female reproductive organs. There are $5$ major histological subtypes of epithelial ovarian cancer, each with distinct morphological, genetic, and clinical features. Currently, these histotypes are determined by a pathologist's microscopic examination of tumor whole-slide images (WSI). This process has been hampered by poor inter-observer agreement (Cohen's kappa $0.54$-$0.67$). We utilized a \textit{two}-stage deep transfer learning algorithm based on convolutional neural networks (CNN) and progressive resizing for automatic classification of epithelial ovarian carcinoma WSIs. The proposed algorithm achieved a mean accuracy of $87.54\%$ and Cohen's kappa of $0.8106$ in the slide-level classification of $305$ WSIs; performing better than a standard CNN and pathologists without gynecology-specific training.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
178,331
1712.05404
SEE: Towards Semi-Supervised End-to-End Scene Text Recognition
Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
86,728
1811.08564
Feature Selection Convolutional Neural Networks for Visual Tracking
Most existing tracking methods based on CNNs (convolutional neural networks) are too slow for real-time applications despite their excellent tracking precision compared with traditional methods. Moreover, neural networks are memory intensive and take up substantial hardware resources. In this paper, a feature-selection visual tracking algorithm combining the CNN-based MDNet (Multi-Domain Network) and RoIAlign is developed. We find that there is considerable redundancy in the feature maps from the convolutional layers. Valid feature maps are therefore selected by mutual information and the others are discarded, which reduces the complexity and computation of the network without affecting precision. The main problem of MDNet also lies in its time efficiency. Considering that the computational complexity of MDNet is mainly caused by the large number of convolution operations and the fine-tuning of the network during tracking, a RoIAlign layer that conducts the convolution over the whole image instead of over each RoI is added to accelerate the convolution, and a new strategy of fine-tuning the fully-connected layers is used to accelerate the update. With RoIAlign employed, the computation speed is increased and the tracker shows greater precision than with RoIPool, because RoIAlign can process floating-point coordinates via bilinear interpolation. These strategies accelerate the processing and reduce the complexity with very low impact on precision, and the tracker runs at around 10 fps (while the speed of MDNet is about 1 fps). The proposed algorithm has been evaluated on the OTB100 benchmark, on which high precision and speed have been obtained.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
114,064
2203.13086
HiFi++: a Unified Framework for Bandwidth Extension and Speech Enhancement
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding outperforming best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building upon HiFi vocoders, we propose a novel HiFi++ general framework for bandwidth extension and speech enhancement. We show that with the improved generator architecture, HiFi++ performs better or comparably with the state-of-the-art in these tasks while spending significantly less computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
287,502
1904.07735
A Graph Theory Approach for Regional Controllability of Boolean Cellular Automata
Controllability is one of the central concepts of modern control theory that allows a good understanding of a system's behaviour. It consists of steering a system from an initial state to a desired state within a given time interval. When the desired objective affects only a sub-region of the domain, the control is said to be regional. The purpose of this paper is to study a particular case of regional control using cellular automata models, since they are spatially extended systems where spatial properties can be easily defined thanks to their intrinsic locality. We investigate the case of boundary controls on the target region using an original approach based on graph theory. Necessary and sufficient conditions are given in terms of Hamiltonian circuits and strongly connected components. The controls are obtained using a preimage approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
127,873
2002.08616
Diversity sampling is an implicit regularization for kernel methods
Kernel methods have achieved very good performance on large scale regression and classification problems, by using the Nystr\"om method and preconditioning techniques. The Nystr\"om approximation -- based on a subset of landmarks -- gives a low rank approximation of the kernel matrix, and is known to provide a form of implicit regularization. We further elaborate on the impact of sampling diverse landmarks for constructing the Nystr\"om approximation in supervised as well as unsupervised kernel methods. By using Determinantal Point Processes for sampling, we obtain additional theoretical results concerning the interplay between diversity and regularization. Empirically, we demonstrate the advantages of training kernel methods based on subsets made of diverse points. In particular, if the dataset has a dense bulk and a sparser tail, we show that Nystr\"om kernel regression with diverse landmarks increases the accuracy of the regression in sparser regions of the dataset, with respect to a uniform landmark sampling. A greedy heuristic is also proposed to select diverse samples of significant size within large datasets when exact DPP sampling is not practically feasible.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
164,818
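For readers unfamiliar with the Nyström construction that the abstract above builds on, here is a self-contained sketch of the landmark-based low-rank approximation K ≈ K_nm K_mm⁺ K_mn; the choice of landmarks (uniform below, DPP-sampled in the paper) is exactly the knob being studied. The RBF kernel and all sizes are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, landmark_idx, gamma=0.5):
    """Low-rank Nystrom approximation K ~ K_nm K_mm^+ K_mn
    built from a subset of landmark points."""
    L = X[landmark_idx]
    K_nm = rbf_kernel(X, L, gamma)   # n x m cross-kernel
    K_mm = rbf_kernel(L, L, gamma)   # m x m landmark kernel
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
idx = rng.choice(200, size=20, replace=False)  # uniform landmarks; a DPP
                                               # would pick diverse ones
K_full = rbf_kernel(X, X)
err = np.linalg.norm(K_full - nystrom_approx(X, idx)) / np.linalg.norm(K_full)
print(f"relative approximation error: {err:.3f}")
```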
2501.03939
Visual question answering: from early developments to recent advances -- a survey
Visual Question Answering (VQA) is an evolving research field aimed at enabling machines to answer questions about visual content by integrating image and language processing techniques such as feature extraction, object detection, text embedding, natural language understanding, and language generation. With the growth of multimodal data research, VQA has gained significant attention due to its broad applications, including interactive educational tools, medical image diagnosis, customer service, entertainment, and social media captioning. Additionally, VQA plays a vital role in assisting visually impaired individuals by generating descriptive content from images. This survey introduces a taxonomy of VQA architectures, categorizing them based on design choices and key components to facilitate comparative analysis and evaluation. We review major VQA approaches, focusing on deep learning-based methods, and explore the emerging field of Large Visual Language Models (LVLMs) that have demonstrated success in multimodal tasks like VQA. The paper further examines available datasets and evaluation metrics essential for measuring VQA system performance, followed by an exploration of real-world VQA applications. Finally, we highlight ongoing challenges and future directions in VQA research, presenting open questions and potential areas for further development. This survey serves as a comprehensive resource for researchers and practitioners interested in the latest advancements and future directions of VQA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
523,048
2005.14140
Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection
Anomaly Detection (AD) in images is a fundamental computer vision problem and refers to identifying images and image substructures that deviate significantly from the norm. Popular AD algorithms commonly try to learn a model of normality from scratch using task specific datasets, but are limited to semi-supervised approaches employing mostly normal data due to the inaccessibility of anomalies on a large scale combined with the ambiguous nature of anomaly appearance. We follow an alternative approach and demonstrate that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality and detect even subtle anomalies in a transfer learning setting. Our model of normality is established by fitting a multivariate Gaussian (MVG) to deep feature representations of classification networks trained on ImageNet using normal data only. By subsequently applying the Mahalanobis distance as the anomaly score we outperform the current state of the art on the public MVTec AD dataset, achieving an AUROC value of $95.8 \pm 1.2$ (mean $\pm$ SEM) over all 15 classes. We further investigate why the learned representations are discriminative to the AD task using Principal Component Analysis. We find that the principal components containing little variance in normal data are the ones crucial for discriminating between normal and anomalous instances. This gives a possible explanation to the often sub-par performance of AD approaches trained from scratch using normal data only. By selectively fitting a MVG to these most relevant components only, we are able to further reduce model complexity while retaining AD performance. We also investigate setting the working point by selecting acceptable False Positive Rate thresholds based on the MVG assumption. Code available at https://github.com/ORippler/gaussian-ad-mvtec
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
179,194
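The core scoring rule described in the abstract above is compact enough to sketch end to end: fit a multivariate Gaussian to deep features of normal images, then score test features by Mahalanobis distance. This is a minimal sketch; we stand in for the pretrained ImageNet backbone with random features, so only the fitting and scoring steps are faithful to the described method.

```python
import numpy as np

def fit_mvg(features, eps=1e-6):
    """Fit mean and (regularized) inverse covariance of normal-data features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, cov_inv):
    """Anomaly score: Mahalanobis distance to the normal-data Gaussian."""
    d = x - mu
    return np.sqrt(np.einsum("...i,ij,...j->...", d, cov_inv, d))

rng = np.random.default_rng(1)
normal_feats = rng.normal(0.0, 1.0, size=(500, 64))  # stand-in for deep features
mu, cov_inv = fit_mvg(normal_feats)
test_normal = rng.normal(0.0, 1.0, size=(10, 64))
test_anomaly = rng.normal(3.0, 1.0, size=(10, 64))   # shifted => anomalous
print(mahalanobis_score(test_normal, mu, cov_inv).mean(),
      mahalanobis_score(test_anomaly, mu, cov_inv).mean())
```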
1804.02628
Clustering and Retrieval Method of Immunological Memory Cell in Clonal Selection Algorithm
The clonal selection principle explains the basic features of an adaptive immune response to an antigenic stimulus. It established the idea that only those cells that recognize the antigens are selected to proliferate and differentiate. This paper describes a computational implementation of the clonal selection principle that explicitly takes into account the affinity maturation of the immune response. Antibodies generated by the clonal selection algorithm are clustered into categories according to affinity maturation, so that immunological memory cells which respond to a specified pathogen are created. Experimental results on classifying a Coronary Heart Disease medical database are reported. On this dataset, the proposed method achieves a 99.6\% classification accuracy on the training data.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
94,449
2311.16108
Picogrid: An experimental platform for prosumer microgrids
The Microgrid paradigm is gaining momentum as one of the key pieces of technology for expanding clean energy access and improving energy resilience. Most of the interest in this pertains to distinct entities that either generate electricity or act as loads, i.e., distinct producers and consumers. Remote community microgrids and emerging transactive energy service models with interconnected prosumers do not clearly fit into this paradigm. Notwithstanding various publications that present concepts and simulations, there has been a dearth of experimental platforms to study them, due to practical challenges. This paper presents the `Picogrid' - an experimental platform particularly designed for dc prosumer microgrids. It is a low-power, low-cost hardware platform that enables interconnecting multiple prosumer entities in a bench-top setup. Each prosumer sends data to a cloud dashboard and can receive set points for optimal operation from a remote computer system, lending itself to use in a virtual lab setup. The platform enables implementation of custom power profiles based on real-world generation and demand datasets. Features of the platform are demonstrated using simulation and experimental results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
410,761
2203.05123
Multi-Task Adversarial Learning for Treatment Effect Estimation in Basket Trials
Estimating treatment effects from observational data provides insights about causality that guide many real-world applications, such as different clinical study designs, i.e., the formulations of trials, experiments, and observational studies in medical, clinical, and other types of research. In this paper, we describe causal inference for a novel clinical design called the basket trial, which tests how well a new drug works in patients who have different types of cancer that all share the same mutation. We propose a multi-task adversarial learning (MTAL) method, which incorporates feature selection, multi-task representation learning, and adversarial learning to estimate potential outcomes across different tumor types for patients sharing the same genetic mutation but having different tumor types. In our paper, the basket trial is employed as an intuitive example to present this new causal inference setting, which includes, but is not limited to, basket trials. This setting has the same challenges as the traditional causal inference problem, i.e., missing counterfactual outcomes under different subgroups and treatment selection bias due to confounders. We present the practical advantages of our MTAL method for the analysis of synthetic basket trial data and evaluate the proposed estimator on two benchmarks, IHDP and News. The results demonstrate the superiority of our MTAL method over the competing state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
284,712
2203.13977
Exploring Self-Attention for Visual Intersection Classification
In robot vision, self-attention has recently emerged as a technique for capturing non-local contexts. In this study, we introduced a self-attention mechanism into the intersection recognition system as a method to capture the non-local contexts behind the scenes. An intersection classification system comprises two distinctive modules: (a) a first-person vision (FPV) module, which uses a short egocentric view sequence as the intersection is passed, and (b) a third-person vision (TPV) module, which uses a single view immediately before entering the intersection. The self-attention mechanism is effective in the TPV module because most parts of the local pattern (e.g., road edges, buildings, and sky) are similar to each other, and thus the use of a non-local context (e.g., the angle between two diagonal corners around an intersection) would be effective. This study makes three major contributions. First, we proposed a self-attention-based approach for intersection classification using TPVs. Second, we presented a practical system in which a self-attention-based TPV module is combined with an FPV module to improve the overall recognition performance. Finally, experiments using the public KITTI dataset show that the above self-attention-based system outperforms conventional recognition based on local patterns and recognition based on convolution operations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
287,832
2112.08688
Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open question answering and fact verification. These models are trained to generate the final output given the retrieved passages, which can be irrelevant to the original query, leading to learning spurious cues or answer memorization. This work introduces a method to incorporate the evidentiality of passages -- whether a passage contains correct evidence to support the output -- into training the generator. We introduce a multi-task learning framework to jointly generate the final output and predict the evidentiality of each passage, leveraging a new task-agnostic method to obtain silver evidentiality labels for supervision. Our experiments on five datasets across three knowledge-intensive tasks show that our new evidentiality-guided generator significantly outperforms its direct counterpart with the same-size model and advances the state of the art on FaVIQ-Ambig. We attribute these improvements to both the auxiliary multi-task learning and silver evidentiality mining techniques.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
271,898
2311.09500
Pseudo-keypoint RKHS Learning for Self-supervised 6DoF Pose Estimation
We address the simulation-to-real domain gap in six degree-of-freedom pose estimation (6DoF PE), and propose a novel self-supervised keypoint voting-based 6DoF PE framework, effectively narrowing this gap using a learnable kernel in RKHS. We formulate this domain gap as a distance in high-dimensional feature space, distinct from previous iterative matching methods. We propose an adapter network, which is pre-trained on purely synthetic data with synthetic ground truth poses, and which evolves the network parameters from this source synthetic domain to the target real domain. Importantly, the real data training only uses pseudo-poses estimated by pseudo-keypoints, and thereby requires no real ground truth data annotations. Our proposed method is called RKHSPose, and achieves state-of-the-art performance among self-supervised methods on three commonly used 6DoF PE datasets including LINEMOD (+4.2%), Occlusion LINEMOD (+2%), and YCB-Video (+3%). It also compares favorably to fully supervised methods on all six applicable BOP core datasets, achieving within -11.3% to +0.2% of the top fully supervised results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
408,153
2003.10306
Safe Crossover of Neural Networks Through Neuron Alignment
One of the main and largely unexplored challenges in evolving the weights of neural networks using genetic algorithms is to find a sensible crossover operation between parent networks. Indeed, naive crossover leads to functionally damaged offspring that do not retain information from the parents. This is because neural networks are invariant to permutations of neurons, giving rise to multiple ways of representing the same solution. This is often referred to as the competing conventions problem. In this paper, we propose a two-step safe crossover (SC) operator. First, the neurons of the parents are functionally aligned by computing how well they correlate, and only then are the parents recombined. We compare two ways of measuring relationships between neurons: Pairwise Correlation (PwC) and Canonical Correlation Analysis (CCA). We test our safe crossover operators (SC-PwC and SC-CCA) on MNIST and CIFAR-10 by performing arithmetic crossover on the weights of feed-forward neural network pairs. We show that it effectively transmits information from parents to offspring and significantly improves upon naive crossover. Our method is computationally fast, can serve as a way to explore the fitness landscape more efficiently and makes safe crossover a potentially promising operator in future neuroevolution research and applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
169,299
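The two-step recipe in the abstract above (align, then recombine) can be sketched for a single hidden layer: correlate the parents' hidden activations on shared data, match neurons with the Hungarian algorithm, permute one parent's weights accordingly, and only then average. This is a minimal reading of the pairwise-correlation variant (SC-PwC), not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hidden_acts(W, b, X):
    return np.tanh(X @ W + b)          # (n_samples, n_hidden)

def safe_crossover(WA, bA, WB, bB, X):
    """Align parent B's hidden neurons to parent A by activation
    correlation, then do arithmetic crossover on the aligned weights."""
    HA, HB = hidden_acts(WA, bA, X), hidden_acts(WB, bB, X)
    n = WA.shape[1]
    # C[i, j] = correlation of A's neuron i with B's neuron j
    C = np.corrcoef(HA.T, HB.T)[:n, n:]
    _, perm = linear_sum_assignment(-C)   # maximize total correlation
    WB_al, bB_al = WB[:, perm], bB[perm]
    return 0.5 * (WA + WB_al), 0.5 * (bA + bB_al)

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 4))
WA, bA = rng.normal(size=(4, 6)), rng.normal(size=6)
p = rng.permutation(6)                    # parent B = permuted copy of A
WB, bB = WA[:, p], bA[p]
Wc, bc = safe_crossover(WA, bA, WB, bB, X)
print(np.allclose(Wc, WA), np.allclose(bc, bA))  # True: alignment undoes the permutation
```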
2408.03356
RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis
Differentiable volumetric rendering-based methods made significant progress in novel view synthesis. On one hand, innovative methods have replaced the Neural Radiance Fields (NeRF) network with locally parameterized structures, enabling high-quality renderings in a reasonable time. On the other hand, approaches have used differentiable splatting instead of NeRF's ray casting to optimize radiance fields rapidly using Gaussian kernels, allowing for fine adaptation to the scene. However, differentiable ray casting of irregularly spaced kernels has been scarcely explored, while splatting, despite enabling fast rendering times, is susceptible to clearly visible artifacts. Our work closes this gap by providing a physically consistent formulation of the emitted radiance $c$ and density $\sigma$, decomposed with Gaussian functions associated with Spherical Gaussians/Harmonics for all-frequency colorimetric representation. We also introduce a method enabling differentiable ray casting of irregularly distributed Gaussians using an algorithm that integrates radiance fields slab by slab and leverages a BVH structure. This allows our approach to finely adapt to the scene while avoiding splatting artifacts. As a result, we achieve superior rendering quality compared to the state-of-the-art while maintaining reasonable training times and achieving inference speeds of 25 FPS on the Blender dataset. Project page with videos and code: https://raygauss.github.io/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
478,988
2107.08661
Translatotron 2: High-quality direct speech-to-speech translation with voice preservation
We present Translatotron 2, a neural direct speech-to-speech translation model that can be trained end-to-end. Translatotron 2 consists of a speech encoder, a linguistic decoder, an acoustic synthesizer, and a single attention module that connects them together. Experimental results on three datasets consistently show that Translatotron 2 outperforms the original Translatotron by a large margin on both translation quality (up to +15.5 BLEU) and speech generation quality, and approaches that of cascade systems. In addition, we propose a simple method for preserving speakers' voices from the source speech in the translated speech in a different language. Unlike existing approaches, the proposed method is able to preserve each speaker's voice on speaker turns without requiring speaker segmentation. Furthermore, compared to existing approaches, it better preserves speakers' privacy and mitigates potential misuse of voice cloning for creating spoofing audio artifacts.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
246,797
2406.12560
Towards Bayesian Data Selection
A wide range of machine learning algorithms iteratively add data to the training sample. Examples include semi-supervised learning, active learning, multi-armed bandits, and Bayesian optimization. We embed this kind of data addition into decision theory by framing data selection as a decision problem. This paves the way for finding Bayes-optimal selections of data. For the illustrative case of self-training in semi-supervised learning, we derive the respective Bayes criterion. We further show that deploying this criterion mitigates the issue of confirmation bias by empirically assessing our method for generalized linear models, semi-parametric generalized additive models, and Bayesian neural networks on simulated and real-world data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
465,462
1208.2976
Discriminating different classes of biological networks by analyzing the graphs spectra distribution
The brain's structural and functional systems, protein-protein interaction, and gene networks are examples of biological systems that share some features of complex networks, such as highly connected nodes, modularity, and small-world topology. Recent studies indicate that some pathologies present topological network alterations relative to norms seen in the general population. Therefore, methods to discriminate the processes that generate the different classes of networks (e.g., normal and disease) might be crucial for the diagnosis, prognosis, and treatment of the disease. It is known that several topological properties of a network (graph) can be described by the distribution of the spectrum of its adjacency matrix. Moreover, large networks generated by the same random process have the same spectrum distribution, allowing us to use it as a "fingerprint". Based on this relationship, we introduce and propose the entropy of a graph spectrum to measure the "uncertainty" of a random graph and the Kullback-Leibler and Jensen-Shannon divergences between graph spectra to compare networks. We also introduce general methods for model selection and network model parameter estimation, as well as a statistical procedure to test the nullity of divergence between two classes of complex networks. Finally, we demonstrate the usefulness of the proposed methods by applying them on (1) protein-protein interaction networks of different species and (2) on networks derived from children diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children. We conclude that scale-free networks best describe all the protein-protein interactions. Also, we show that our proposed measures succeeded in the identification of topological changes in the network while other commonly used measures (number of edges, clustering coefficient, average path length) failed.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
18,077
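A minimal version of the spectral "fingerprint" comparison described in the abstract above: compute adjacency-spectrum histograms for two graphs and take the Jensen-Shannon divergence between them. The binning and scaling choices here are illustrative; the paper works with the limiting spectral distribution and adds entropy, model selection, and hypothesis-testing machinery on top.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def spectral_density(A, bins):
    """Scaled adjacency eigenvalues, histogrammed on shared bins."""
    eig = np.linalg.eigvalsh(A) / np.sqrt(A.shape[0])
    hist, _ = np.histogram(eig, bins=bins)
    return hist / hist.sum()

def random_graph(n, p, rng):
    """Erdos-Renyi graph: symmetric 0/1 adjacency, no self-loops."""
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

rng = np.random.default_rng(3)
bins = np.linspace(-3, 3, 61)
G1 = random_graph(300, 0.05, rng)   # one generating process
G2 = random_graph(300, 0.05, rng)   # same process
G3 = random_graph(300, 0.30, rng)   # different process
p1, p2, p3 = (spectral_density(G, bins) for G in (G1, G2, G3))
# JS distance: small within a process, larger across processes
print(jensenshannon(p1, p2), jensenshannon(p1, p3))
```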
1908.01057
Proposal of a Model for Automatic Loop Optimization in the Tiramisu Compiler: The Case of Loop Unrolling Optimization
Computer architectures are becoming more and more complex, and it takes increasing effort to develop techniques that improve program performance and exploit hardware resources efficiently. As a result, many transformations are applied at various levels of code abstraction. The first is the high level, where the representation is close to the high-level language; the second is the low level, where the representation is close to machine code. These transformations are called code optimizations. Optimizing programs requires deep expertise. On the one hand, it is a tedious task, because it requires many tests to find the best combination of optimizations to apply together with their best factors. On the other hand, the task is critical, because it may degrade the performance of the program instead of improving it. Automating this task can address the problem and yield good results. Our final-year project proposes a novel approach based on neural networks to automatically optimize loops in Tiramisu. Tiramisu is a new language for writing high-performance code that separates the algorithm from its optimizations. We choose loop unrolling as a case study. Our contribution aims to automate the choice of the best loop unrolling factor for a program written in Tiramisu.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
140,661
2501.05309
Private Selection with Heterogeneous Sensitivities
Differentially private (DP) selection involves choosing a high-scoring candidate from a finite candidate pool, where each score depends on a sensitive dataset. This problem arises naturally in a variety of contexts including model selection, hypothesis testing, and within many DP algorithms. Classical methods, such as Report Noisy Max (RNM), assume all candidates' scores are equally sensitive to changes in a single individual's data, but this is often not the case. To address this, algorithms like the Generalised Exponential Mechanism (GEM) leverage variability in candidate sensitivities. However, we observe that while these algorithms can outperform RNM in some situations, they may underperform in others; they can even perform worse than random selection. In this work, we explore how the distribution of scores and sensitivities impacts DP selection mechanisms. In all settings we study, we find that there exists a mechanism that utilises heterogeneity in the candidate sensitivities and outperforms standard mechanisms like RNM. However, no single mechanism uniformly outperforms RNM. We propose using the correlation between the scores and sensitivities as the basis for deciding which DP selection mechanism to use. Further, we design a slight variant of GEM, modified GEM, that generally performs well whenever GEM performs poorly. Relying on the correlation heuristic, we propose combined GEM, which adaptively chooses between GEM and modified GEM and outperforms both in polarised settings.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
523,538
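For context on the baseline the abstract above improves upon, the sketch below implements classical Report Noisy Max: add Laplace noise calibrated to a single shared sensitivity and release the argmax. The heterogeneous-sensitivity mechanisms (GEM and the proposed variants) differ precisely in replacing the single `sensitivity` with per-candidate values; scores and parameters here are illustrative.

```python
import numpy as np

def report_noisy_max(scores, sensitivity, epsilon, rng):
    """Classical RNM with one shared sensitivity for all candidates.
    Laplace scale 2*sensitivity/epsilon is a standard conservative
    calibration for general (non-monotone) score functions."""
    noise = rng.laplace(scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores) + noise))

rng = np.random.default_rng(4)
scores = [10.0, 9.5, 4.0, 1.0]
picks = [report_noisy_max(scores, sensitivity=1.0, epsilon=1.0, rng=rng)
         for _ in range(1000)]
print(np.bincount(picks, minlength=4) / 1000)  # mass concentrates on candidates 0 and 1
```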
1510.06469
Optimal Temporal Logic Planning in Probabilistic Semantic Maps
This paper considers robot motion planning under temporal logic constraints in probabilistic maps obtained by semantic simultaneous localization and mapping (SLAM). The uncertainty in a map distribution presents a great challenge for obtaining correctness guarantees with respect to the linear temporal logic (LTL) specification. We show that the problem can be formulated as an optimal control problem in which both the semantic map and the logic formula evaluation are stochastic. Our first contribution is to reduce the stochastic control problem for a subclass of LTL to a deterministic shortest path problem by introducing a confidence parameter $\delta$. A robot trajectory obtained from the deterministic problem is guaranteed to have minimum cost and to satisfy the logic specification in the true environment with probability $\delta$. Our second contribution is to design an admissible heuristic function that guides the planning in the deterministic problem towards satisfying the temporal logic specification. This allows us to obtain an optimal and very efficient solution using the A* algorithm. The performance and correctness of our approach are demonstrated in a simulated semantic environment using a differential-drive robot.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
48,110
1612.02255
Knowledge Representation in Graphs using Convolutional Neural Networks
Knowledge Graphs (KGs) constitute a flexible representation of complex relationships between entities, particularly useful for biomedical data. These KGs, however, are very sparse, with many missing edges (facts), and visualising the mesh of interactions is nontrivial. Here we apply a compositional model to embed nodes and relationships into a vectorised semantic space to perform graph completion. A visualisation tool based on Convolutional Neural Networks and Self-Organised Maps (SOM) is proposed to extract high-level insights from the KG. We apply this technique to a subset of CTD, containing interactions of compounds with human genes / proteins, and show that the performance is comparable to that obtained by structural models.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
65,214
2401.14469
Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures that surpass the performance of classical CNNs by a considerable margin in scalability and accuracy. This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers. Through an extensive analysis of millions of trained filters, with different sizes and from various models, we employed unsupervised clustering with autoencoders to categorize these filters. Astonishingly, the patterns converged into a few main clusters, each resembling difference-of-Gaussian (DoG) functions and their first- and second-order derivatives. Notably, we were able to classify over 95\% and 90\% of the filters from the state-of-the-art ConvNextV2 and ConvNeXt models, respectively. This finding is not merely a technological curiosity; it echoes the foundational models neuroscientists have long proposed for the vision systems of mammals. Our results thus deepen our understanding of the emergent properties of trained DS-CNNs and provide a bridge between artificial and biological visual processing systems. More broadly, they pave the way for more interpretable and biologically-inspired neural network designs in the future.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
424,101
2302.07393
Spectral Clustering for Crowdsourcing with Inherently Distinct Task Types
The Dawid-Skene model is the most widely assumed model in the analysis of crowdsourcing algorithms that estimate ground-truth labels from noisy worker responses. In this work, we are motivated by crowdsourcing applications where workers have distinct skill sets and their accuracy additionally depends on a task's type. While weighted majority vote (WMV) with a single weight vector for each worker achieves the optimal label estimation error in the Dawid-Skene model, we show that different weights for different types are necessary for a multi-type model. Focusing on the case where there are two types of tasks, we propose a spectral method to partition tasks into two groups that cluster tasks by type. Our analysis reveals that task types can be perfectly recovered if the number of workers $n$ scales logarithmically with the number of tasks $d$. Any algorithm designed for the Dawid-Skene model can then be applied independently to each type to infer the labels. Numerical experiments show how clustering tasks by type before estimating ground-truth labels enhances the performance of crowdsourcing algorithms in practical applications.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
345,712
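The distinction the abstract above draws, one weight vector per worker versus one per (worker, type) pair, is easy to make concrete. Below is a minimal sketch with synthetic responses and oracle log-odds weights (the form under which weighted majority vote is optimal in the Dawid-Skene model); the paper's spectral step, which recovers the unknown task types, is not shown.

```python
import numpy as np

def weighted_majority_vote(responses, weights):
    """Binary WMV: responses in {-1, +1}, shape (workers, tasks);
    weights shape (workers,). Returns estimated labels in {-1, +1}."""
    return np.sign(weights @ responses)

def typed_wmv(responses, task_types, weights_by_type):
    """A different weight vector per task type, as the multi-type model
    requires; weights_by_type has shape (n_types, workers)."""
    labels = np.empty(responses.shape[1])
    for t, w in enumerate(weights_by_type):
        cols = task_types == t
        labels[cols] = weighted_majority_vote(responses[:, cols], w)
    return labels

rng = np.random.default_rng(5)
n_workers, n_tasks = 8, 400
task_types = rng.integers(0, 2, n_tasks)
truth = rng.choice([-1, 1], n_tasks)
acc = rng.uniform(0.4, 0.9, size=(n_workers, 2))   # skill depends on type
p = acc[:, task_types]                             # per-response accuracy
responses = np.where(rng.random((n_workers, n_tasks)) < p, truth, -truth)
w_oracle = np.log(acc / (1 - acc))                 # log-odds weights per type
est = typed_wmv(responses, task_types, w_oracle.T)
print((est == truth).mean())                       # aggregate label accuracy
```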
2407.16031
An Exponential Mixing Condition for Quantum Channels
Quantum channels, pivotal in information processing, describe transformations within quantum systems and enable secure communication and error correction. Ergodic and mixing properties elucidate their behavior. In this paper, we establish a sufficient condition for mixing based on a quantum Markov-Dobrushin inequality. We prove that if the Markov-Dobrushin constant of a quantum channel exceeds zero, it exhibits exponential mixing behavior. We explore limitations of some quantum channels, demonstrating that unistochastic channels are not mixing. Additionally, we analyze ergodicity of a class of mixed-unitary channels associated with finite groups of unitary operators. Finally, we apply our results to the qubit depolarizing channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
475,428
1811.04491
Multiple Subspace Alignment Improves Domain Adaptation
We present a novel unsupervised domain adaptation (DA) method for cross-domain visual recognition. Though subspace methods have found success in DA, their performance is often limited due to the assumption of approximating an entire dataset using a single low-dimensional subspace. Instead, we develop a method to effectively represent the source and target datasets via a collection of low-dimensional subspaces, and subsequently align them by exploiting the natural geometry of the space of subspaces, on the Grassmann manifold. We demonstrate the effectiveness of this approach through empirical studies on two widely used benchmarks, achieving state-of-the-art domain adaptation performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
113,104
2402.15959
Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image. Such integration not only broadens the captured scene but also augments holistic perception in computer vision applications. Given a pair of captured images, subtle perturbations and distortions which go unnoticed by the human visual system tend to attack the correspondence matching, impairing the performance of image stitching algorithms. In light of this challenge, this paper presents the first attempt to improve the robustness of image stitching against adversarial attacks. Specifically, we introduce a stitching-oriented attack (SoA), tailored to amplify the alignment loss within overlapping regions, thereby targeting the feature matching procedure. To establish an attack-resistant model, we delve into the robustness of the stitching architecture and develop an adaptive adversarial training (AAT) scheme to balance attack resistance with stitching precision. In this way, we narrow the gap between routine adversarial training and benign models, ensuring resilience without quality compromise. Comprehensive evaluations across real-world and synthetic datasets validate the deterioration SoA causes in stitching performance. Furthermore, AAT emerges as a more robust solution against adversarial perturbations, delivering superior stitching results. Code is available at: https://github.com/Jzy2017/TRIS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
432,353
1504.07225
Correlational Neural Networks
Common Representation Learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention. Two popular paradigms here are Canonical Correlation Analysis (CCA) based approaches and Autoencoder (AE) based approaches. CCA based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA based approaches outperform AE based approaches for the task of transfer learning, they are not as scalable as the latter. In this work we propose an AE based approach called Correlational Neural Network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than the above mentioned approaches with respect to its ability to learn correlated common representations. Further, we employ CorrNet for several cross language tasks and show that the representations learned using CorrNet perform better than the ones learned using other state of the art approaches.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
42,510
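The correlation term that distinguishes CorrNet from a plain two-view autoencoder is easy to write down. Below is one plausible rendition of two visible pieces of the objective (within-view reconstruction minus a correlation reward); the full CorrNet loss also includes cross-view reconstruction terms, which we omit, and this is not the authors' implementation.

```python
import numpy as np

def view_correlation(H1, H2, eps=1e-8):
    """Mean per-dimension Pearson correlation between the common-subspace
    projections H1, H2 of the two views, each of shape (batch, dim)."""
    Z1, Z2 = H1 - H1.mean(0), H2 - H2.mean(0)
    num = (Z1 * Z2).sum(0)
    den = np.sqrt((Z1 ** 2).sum(0) * (Z2 ** 2).sum(0)) + eps
    return (num / den).mean()

def corrnet_style_loss(X1, X2, R1, R2, H1, H2, lam=0.1):
    """Reconstruction error on both views minus lambda times the
    correlation of their projections (minimized during training)."""
    recon = ((X1 - R1) ** 2).mean() + ((X2 - R2) ** 2).mean()
    return recon - lam * view_correlation(H1, H2)

rng = np.random.default_rng(7)
X = rng.normal(size=(32, 10))
# perfectly correlated projections => correlation reward is maximal
print(corrnet_style_loss(X, X, 0.9 * X, 0.9 * X, X[:, :4], X[:, :4]))
```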
2402.17246
SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging
Automated classification of liver lesions in multi-phase CT and MR scans is of clinical significance but challenging. This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework, specifically designed for liver lesion classification in 3D multi-phase CT and MR imaging with varying phase counts. The proposed SDR-Former utilizes a streamlined Siamese Neural Network (SNN) to process multi-phase imaging inputs, possessing robust feature representations while maintaining computational efficiency. The weight-sharing feature of the SNN is further enriched by a hybrid Dual-Resolution Transformer (DR-Former), comprising a 3D Convolutional Neural Network (CNN) and a tailored 3D Transformer for processing high- and low-resolution images, respectively. This hybrid sub-architecture excels in capturing detailed local features and understanding global contextual information, thereby boosting the SNN's feature extraction capabilities. Additionally, a novel Adaptive Phase Selection Module (APSM) is introduced, promoting phase-specific intercommunication and dynamically adjusting each phase's influence on the diagnostic outcome. The proposed SDR-Former framework has been validated through comprehensive experiments on two clinical datasets: a three-phase CT dataset and an eight-phase MR dataset. The experimental results affirm the efficacy of the proposed framework. To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public. This pioneering dataset, being the first publicly available multi-phase MR dataset in this field, also underpins the MICCAI LLD-MMRI Challenge. The dataset is accessible at: https://bit.ly/3IyYlgN.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
432,911
2501.09283
Free-Knots Kolmogorov-Arnold Network: On the Analysis of Spline Knots and Advancing Stability
Kolmogorov-Arnold Neural Networks (KANs) have gained significant attention in the machine learning community. However, their implementation often suffers from poor training stability and a large number of trainable parameters. Furthermore, there is limited understanding of the behavior of the learned activation functions derived from B-splines. In this work, we analyze the behavior of KANs through the lens of spline knots and derive lower and upper bounds for the number of knots in B-spline-based KANs. To address existing limitations, we propose a novel Free-Knots KAN that enhances the performance of the original KAN while reducing the number of trainable parameters to match the parameter scale of standard Multi-Layer Perceptrons (MLPs). Additionally, we introduce a new training strategy to ensure $C^2$ continuity of the learnable spline, resulting in smoother activations than the original KAN, and improve training stability via range expansion. The proposed method is comprehensively evaluated on 8 datasets spanning various domains, including image, text, time series, multimodal, and function approximation tasks. The promising results demonstrate the feasibility of KAN-based networks and the effectiveness of the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,080
2306.17249
A Hybrid System for Systematic Generalization in Simple Arithmetic Problems
Solving symbolic reasoning problems that require compositionality and systematicity is considered one of the key ingredients of human intelligence. However, symbolic reasoning is still a great challenge for deep learning models, which often cannot generalize the reasoning pattern to out-of-distribution test cases. In this work, we propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols. The model acquires such a skill by learning appropriate substitution rules, which are applied iteratively to the input string until the expression is completely resolved. We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases, significantly outperforming both a sequence-to-sequence model trained end-to-end and a state-of-the-art large language model.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
376,645
1402.3144
A Robust Ensemble Approach to Learn From Positive and Unlabeled Data Using SVM Base Models
We present a novel approach to learn binary classifiers when only positive and unlabeled instances are available (PU learning). This problem is routinely cast as a supervised task with label noise in the negative set. We use an ensemble of SVM models trained on bootstrap resamples of the training data for increased robustness against label noise. The approach can be considered in a bagging framework which provides an intuitive explanation for its mechanics in a semi-supervised setting. We compared our method to state-of-the-art approaches in simulations using multiple public benchmark data sets. The included benchmark comprises three settings with increasing label noise: (i) fully supervised, (ii) PU learning and (iii) PU learning with false positives. Our approach shows a marginal improvement over existing methods in the second setting and a significant improvement in the third.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
30,844
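The bagging construction described in the abstract above maps almost line-for-line onto scikit-learn primitives. A minimal sketch under illustrative hyperparameters, in the spirit of the paper's ensemble: each base SVM treats a bootstrap resample of the unlabeled set as noisy negatives, and decision scores are averaged across bags.

```python
import numpy as np
from sklearn.svm import SVC

def pu_bagging_scores(X_pos, X_unlab, n_models=10, seed=0):
    """Ensemble of SVMs for PU learning: positives vs. bootstrap
    resamples of the unlabeled set (treated as noisy negatives)."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_unlab))
    y = np.r_[np.ones(len(X_pos)), -np.ones(len(X_pos))]
    for _ in range(n_models):
        idx = rng.choice(len(X_unlab), size=len(X_pos), replace=True)
        X = np.r_[X_pos, X_unlab[idx]]
        clf = SVC(kernel="rbf", C=1.0).fit(X, y)
        scores += clf.decision_function(X_unlab)   # aggregate over bags
    return scores / n_models                       # high score => likely positive

rng = np.random.default_rng(6)
X_pos = rng.normal(+1.0, 1.0, size=(50, 2))
X_unlab = np.r_[rng.normal(+1.0, 1.0, size=(50, 2)),    # hidden positives
                rng.normal(-1.0, 1.0, size=(100, 2))]   # true negatives
s = pu_bagging_scores(X_pos, X_unlab)
print(s[:50].mean() > s[50:].mean())  # hidden positives score higher: True
```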
2106.01151
Towards Deeper Deep Reinforcement Learning with Spectral Normalization
In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast with this trend, state-of-the-art reinforcement learning (RL) algorithms often use small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper we investigate how RL agents are affected by exchanging the small MLPs with larger modern networks with skip connections and normalization, focusing specifically on actor-critic algorithms. We empirically verify that naively adopting such architectures leads to instabilities and poor performance, likely contributing to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that instability from taking gradients through the critic is the culprit. We demonstrate that spectral normalization (SN) can mitigate this issue and enable stable training with large modern architectures. After smoothing with SN, larger models yield significant performance improvements -- suggesting that more "easy" gains may be had by focusing on model architectures in addition to algorithmic innovations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
238,408
2109.00530
A Gradient Sampling Algorithm for Stratified Maps with Applications to Topological Data Analysis
We introduce a novel gradient descent algorithm extending the well-known Gradient Sampling methodology to the class of stratifiably smooth objective functions, which are defined as locally Lipschitz functions that are smooth on some regular pieces, called the strata, of the ambient Euclidean space. For this class of functions, our algorithm achieves a sub-linear convergence rate. We then apply our method to objective functions based on the (extended) persistent homology map computed over lower-star filters, which is a central tool of Topological Data Analysis. For this, we propose an efficient exploration of the corresponding stratification by using the Cayley graph of the permutation group. Finally, we provide benchmark and novel topological optimization problems, in order to demonstrate the utility and applicability of our framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
253,130
2209.06673
Fault-Tolerant Preparation of Quantum Polar Codes Encoding One Logical Qubit
This paper explores a new approach to fault-tolerant quantum computing (FTQC), relying on quantum polar codes. We consider quantum polar codes of Calderbank-Shor-Steane type, encoding one logical qubit, which we refer to as $\mathcal{Q}_1$ codes. First, we show that a subfamily of $\mathcal{Q}_1$ codes is equivalent to the well-known family of Shor codes. Moreover, we show that $\mathcal{Q}_1$ codes significantly outperform Shor codes, of the same length and minimum distance. Second, we consider the fault-tolerant preparation of $\mathcal{Q}_1$ code states. We give a recursive procedure to prepare a $\mathcal{Q}_1$ code state, based on two-qubit Pauli measurements only. The procedure is not by itself fault-tolerant, however, the measurement operations therein provide redundant classical bits, which can be advantageously used for error detection. Fault-tolerance is then achieved by combining the proposed recursive procedure with an error detection method. Finally, we consider the fault-tolerant error correction of $\mathcal{Q}_1$ codes. We use Steane error correction, which incorporates the proposed fault-tolerant code state preparation procedure. We provide numerical estimates of the logical error rates for $\mathcal{Q}_1$ and Shor codes of length $16$ and $64$ qubits, assuming a circuit-level depolarizing noise model. Remarkably, the $\mathcal{Q}_1$ code of length $64$ qubits achieves a logical error rate very close to $10^{-6}$ for the physical error rate $p = 10^{-3}$, therefore, demonstrating the potential of the proposed polar codes based approach to FTQC.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
317,477
2107.02839
Toward Robotically Automated Femoral Vascular Access
Advanced resuscitative technologies, such as Extra Corporeal Membrane Oxygenation (ECMO) cannulation or Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA), are technically difficult even for skilled medical personnel. This paper describes the core technologies that comprise a teleoperated system capable of granting femoral vascular access, which is an important step in both of these procedures and a major roadblock in their wider use in the field. These technologies include a kinematic manipulator, various sensing modalities, and a user interface. In addition, we evaluate our system on a surgical phantom as well as in-vivo porcine experiments. These resulted in, to the best of our knowledge, the first robot-assisted arterial catheterizations; a major step towards our eventual goal of automatic catheter insertion through the Seldinger technique.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
244,955
2404.03200
Future-Proofing Class Incremental Learning
Exemplar-Free Class Incremental Learning is a highly challenging setting where replay memory is unavailable. Methods relying on frozen feature extractors have drawn attention recently in this setting due to their impressive performances and lower computational costs. However, those methods are highly dependent on the data used to train the feature extractor and may struggle when an insufficient number of classes is available during the first incremental step. To overcome this limitation, we propose to use a pre-trained text-to-image diffusion model in order to generate synthetic images of future classes and use them to train the feature extractor. Experiments on the standard benchmarks CIFAR100 and ImageNet-Subset demonstrate that our proposed method can be used to improve state-of-the-art methods for exemplar-free class incremental learning, especially in the most difficult settings where the first incremental step only contains few classes. Moreover, we show that using synthetic samples of future classes achieves higher performance than using real data from different classes, paving the way for better and less costly pre-training methods for incremental learning.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
444,162
2010.05856
Exemplar-Controllable Paraphrasing and Translation using Bitext
Most prior work on exemplar-based syntactically controlled paraphrase generation relies on automatically-constructed large-scale paraphrase datasets, which are costly to create. We sidestep this prerequisite by adapting models from prior work to be able to learn solely from bilingual text (bitext). Despite only using bitext for training, and in near zero-shot conditions, our single proposed model can perform four tasks: controlled paraphrase generation in both languages and controlled machine translation in both language directions. To evaluate these tasks quantitatively, we create three novel evaluation datasets. Our experimental results show that our models achieve competitive results on controlled paraphrase generation and strong performance on controlled machine translation. Analysis shows that our models learn to disentangle semantics and syntax in their latent representations, but still suffer from semantic drift.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
200,277
1701.03937
Hedera: Scalable Indexing and Exploring Entities in Wikipedia Revision History
Much of the work in the semantic web that relies on Wikipedia as its main source of knowledge operates on static snapshots of the dataset. The full history of Wikipedia revisions, while containing much more useful information, is still difficult to access due to its exceptional volume. To enable further research on this collection, we developed a tool, named Hedera, that efficiently extracts semantic information from Wikipedia revision history datasets. Hedera exploits the Map-Reduce paradigm to achieve rapid extraction: it can process the entire revision history of Wikipedia articles within a day on a medium-scale cluster, and it supports flexible data structures for various kinds of semantic web studies.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
66,780
2401.15458
A New Method for Vehicle Logo Recognition Based on Swin Transformer
Intelligent Transportation Systems (ITS) utilize sensors, cameras, and big data analysis to monitor real-time traffic conditions, aiming to improve traffic efficiency and safety. Accurate vehicle recognition is crucial in this process, and Vehicle Logo Recognition (VLR) stands as a key method. VLR enables effective management and monitoring by distinguishing vehicles on the road. Convolutional Neural Networks (CNNs) have made impressive strides in VLR research. However, achieving higher performance demands significant time and computational resources for training. Recently, the rise of Transformer models has brought new opportunities to VLR. Swin Transformer, with its efficient computation and global feature modeling capabilities, outperforms CNNs under challenging conditions. In this paper, we implement real-time VLR using Swin Transformer and fine-tune it for optimal performance. Extensive experiments conducted on three public vehicle logo datasets (HFUT-VL1, XMU, CTGU-VLD) demonstrate impressive top accuracy results of 99.28%, 100%, and 99.17%, respectively. Additionally, the use of a transfer learning strategy enables our method to be on par with state-of-the-art VLR methods. These findings affirm the superiority of our approach over existing methods. Future research can explore and optimize the application of the Swin Transformer in other vehicle vision recognition tasks to drive advancements in ITS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
424,452
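As an illustration of the fine-tuning recipe the abstract describes, a minimal sketch assuming torchvision's pretrained Swin-T; the number of logo classes and the frozen-backbone warm-up are assumptions, not the paper's exact setup.

import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

num_logo_classes = 80  # hypothetical size of the vehicle-logo label space
model = swin_t(weights=Swin_T_Weights.DEFAULT)  # transfer learning from ImageNet

# Replace the classification head for vehicle logo recognition.
model.head = nn.Linear(model.head.in_features, num_logo_classes)

# Optionally freeze the backbone and train only the new head at first.
for name, param in model.named_parameters():
    if not name.startswith("head"):
        param.requires_grad = False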
1811.06458
Psychophysical evaluation of individual low-level feature influences on visual attention
In this study we provide an analysis of the eye movement behavior elicited by low-level feature distinctiveness, using a dataset of synthetically generated image patterns. The design of the visual stimuli was inspired by those used in previous psychophysical experiments, namely in free-viewing and visual search tasks, yielding a total of 15 types of stimuli divided according to the task and feature to be analyzed. Our interest is to analyze the influence of low-level feature contrast between a salient region and the rest of the distractors, providing fixation localization characteristics and the reaction time of landing inside the salient region. Eye-tracking data was collected from 34 participants during the viewing of a dataset of 230 images. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experimentation proposes a new psychophysical basis for saliency model evaluation using synthetic images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
113,532
2007.10629
CSLNSpeech: solving extended speech separation problem with the help of Chinese sign language
Previous audio-visual speech separation methods use the synchronization between the speaker's facial movements and speech in the video to supervise speech separation in a self-supervised way. In this paper, we propose a model that solves the speech separation problem assisted by both face and sign language, which we call the extended speech separation problem. We design a general deep learning network to learn the combination of three modalities, audio, face, and sign language information, to better solve the speech separation problem. To train the model, we introduce a large-scale dataset named the Chinese Sign Language News Speech (CSLNSpeech) dataset, in which the three modalities of audio, face, and sign language coexist. Experiment results show that the proposed model has better performance and robustness than the usual audio-visual system. Besides, the sign language modality can also be used alone to supervise speech separation tasks, and its introduction is helpful for hearing-impaired people to learn and communicate. Finally, our model is a general speech separation framework and achieves very competitive separation performance on two open-source audio-visual datasets. The code is available at https://github.com/iveveive/SLNSpeech
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
188,327
1905.12712
Path-Augmented Graph Transformer Network
Much of the recent work on learning molecular representations has been based on Graph Convolution Networks (GCN). These models rely on local aggregation operations and can therefore miss higher-order graph properties. To remedy this, we propose Path-Augmented Graph Transformer Networks (PAGTN) that are explicitly built on longer-range dependencies in graph-structured data. Specifically, we use path features in molecular graphs to create global attention layers. We compare our PAGTN model against the GCN model and show that our model consistently outperforms GCNs on molecular property prediction datasets including quantum chemistry (QM7, QM8, QM9), physical chemistry (ESOL, Lipophilicity) and biochemistry (BACE, BBBP).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,848
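A toy sketch of a global attention layer with pairwise path features, in the spirit of the abstract above; the precomputed per-atom-pair path features, the additive scalar bias, and the dimensions are assumptions rather than the published PAGTN architecture.

import math
import torch
import torch.nn as nn

class PathAugmentedAttention(nn.Module):
    def __init__(self, d_model: int, d_path: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.path_bias = nn.Linear(d_path, 1)  # one scalar bias per atom pair
        self.scale = 1.0 / math.sqrt(d_model)

    def forward(self, x, path_feats):
        # x: (n_atoms, d_model); path_feats: (n_atoms, n_atoms, d_path),
        # e.g., encodings of bonds along the shortest path between atom pairs.
        scores = self.q(x) @ self.k(x).T * self.scale
        scores = scores + self.path_bias(path_feats).squeeze(-1)
        return torch.softmax(scores, dim=-1) @ self.v(x)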
1002.2928
Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. We develop a generic parameter uncertainty renormalized estimation (PURE) technique and address the problem of reconstructing Gaussian signals with unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori power reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener filter map, and (v) renormalization flow analysis of the field theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,704
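Since all five approaches above reduce to Wiener filter operations with an assumed spectrum, it may help to recall the textbook form of that filter; the following is the standard expression for the linear data model d = Rs + n with Gaussian signal covariance S and noise covariance N, not a formula quoted from the paper:

\begin{align}
  m = D\, j, \qquad
  D = \left( S^{-1} + R^\dagger N^{-1} R \right)^{-1}, \qquad
  j = R^\dagger N^{-1} d .
\end{align}

The five filters then differ in the coefficients of the recipe by which the assumed spectrum entering S is derived from the data.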
1906.07851
Key Instance Selection for Unsupervised Video Object Segmentation
This paper proposes key instance selection based on video saliency, covering objectness and dynamics, for unsupervised video object segmentation (UVOS). Our method takes frames sequentially and extracts object proposals with corresponding masks for each frame. We link objects according to their similarity until the M-th frame and then assign them unique IDs (i.e., instances). The similarity measure takes into account multiple properties such as the ReID descriptor, expected trajectory, and semantic co-segmentation result. After the M-th frame, we select K IDs based on video saliency and frequency of appearance; then only these key IDs are tracked through the remaining frames. Thanks to these technical contributions, our results are ranked third on the leaderboard of the DAVIS UVOS challenge.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
135,701
2203.12899
Facial Expression Classification using Fusion of Deep Neural Network in Video for the 3rd ABAW3 Competition
Expression classification is an important problem in the human-computer interaction area, since it allows computers to recognize human emotions. In the 3rd Affective Behavior Analysis In-The-Wild competition, the task of expression classification covers eight classes, including the six basic expressions of human faces, from videos. In this paper, we employ a transformer mechanism to encode robust representations from the backbone. Fusion of these robust representations plays an important role in the expression classification task. Our approach achieves 30.35\% and 28.60\% for the $F_1$ score on the validation set and the test set, respectively. This result shows the effectiveness of the proposed architecture based on the Aff-Wild2 dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
287,437
2303.15065
Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations
Clinical routine and retrospective cohorts commonly include multi-parametric Magnetic Resonance Imaging scans; however, these are mostly acquired as different anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints. The views thus acquired suffer from poor out-of-plane resolution, which affects downstream volumetric image analysis that typically requires isotropic 3D scans. Combining different views of multi-contrast scans into high-resolution isotropic 3D scans is challenging due to the lack of a large training cohort, which calls for a subject-specific framework. This work proposes a novel solution to this problem leveraging Implicit Neural Representations (INR). Our proposed INR jointly learns two different contrasts of complementary views in a continuous spatial function and benefits from exchanging anatomical information between them. Trained within minutes on a single commodity GPU, our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets. Using Mutual Information (MI) as a metric, we find that our model converges to an optimum MI amongst sequences, achieving anatomically faithful reconstruction. Code is available at: https://github.com/jqmcginnis/multi_contrast_inr/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
354,347
1507.05875
Efficient Dodgson-Score Calculation Using Heuristics and Parallel Computing
Conflict of interest is the permanent companion of any population of agents (computational or biological). For that reason, the ability to compromise is of paramount importance, making voting a key element of societal mechanisms. One of the voting procedures most often discussed in the literature and, due to its intuitiveness, also conceptually quite appealing is Charles Dodgson's scoring rule, which essentially evaluates competing alternatives by their respective closeness to being a Condorcet winner. In this paper, we offer insights into the practical limits of algorithms that compute exact Dodgson scores from a number of votes. While the problem itself is theoretically intractable, this work proposes and analyses five different solutions that take distinct approaches to solving the issue effectively in practice. Additionally, three of the discussed procedures can be run in parallel, which has the potential of drastically reducing the problem size.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
true
45,330
2204.08194
Phishing Fraud Detection on Ethereum using Graph Neural Network
Blockchain has widespread applications in the financial field but has also attracted increasing cybercrime. Recently, phishing fraud has emerged as a major threat to blockchain security, calling for the development of effective regulatory strategies. Network science is now widely used to model Ethereum transaction data, and network representation learning has further been introduced to analyze transaction patterns. In this paper, we consider phishing detection as a graph classification task and propose an end-to-end Phishing Detection Graph Neural Network framework (PDGNN). Specifically, we first construct a lightweight Ethereum transaction network and extract transaction subgraphs of collected phishing accounts. Then we propose an end-to-end detection model based on Chebyshev-GCN to precisely distinguish between normal and phishing accounts. Extensive experiments on five Ethereum datasets demonstrate that our PDGNN significantly outperforms general phishing detection methods and scales well to large transaction networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
292,002
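A minimal graph-classification sketch in the spirit of the PDGNN abstract, assuming PyTorch Geometric; the layer sizes, depth, and mean pooling are assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv, global_mean_pool

class PhishingClassifier(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, K: int = 3):
        super().__init__()
        self.conv1 = ChebConv(in_dim, hidden, K)
        self.conv2 = ChebConv(hidden, hidden, K)
        self.out = torch.nn.Linear(hidden, 2)  # normal vs. phishing

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # one embedding per transaction subgraph
        return self.out(x)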
2010.00950
Regularized K-means through hard-thresholding
We study a framework of regularized $K$-means methods based on direct penalization of the size of the cluster centers. Different penalization strategies are considered and compared through simulation and theoretical analysis. Based on the results, we propose HT $K$-means, which uses an $\ell_0$ penalty to induce sparsity in the variables. Different techniques for selecting the tuning parameter are discussed and compared. The proposed method stacks up favorably with the most popular regularized $K$-means methods in an extensive simulation study. Finally, HT $K$-means is applied to several real data examples. Graphical displays are presented and used in these examples to gain more insight into the datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
198,453
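A compact sketch of the hard-thresholding idea above, assuming the l0 penalty acts entrywise on the cluster centers so that the center update becomes a cluster mean followed by a hard threshold; the exact penalty scaling may differ from the paper.

import numpy as np

def ht_kmeans(X, k, lam, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: nearest center in Euclidean distance.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Update step: cluster mean, then hard-threshold small entries, since
        # the entrywise l0 penalty zeroes a coordinate whenever keeping it
        # costs more than the squared error it saves.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                c = pts.mean(0)
                c[c ** 2 < lam / len(pts)] = 0.0  # hard-thresholding step
                centers[j] = c
    return centers, labels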
2306.14587
Near-Field Beamforming for STAR-RIS Networks
Recently, simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have received significant research interest. The employment of large STAR-RISs and high-frequency signaling inevitably make near-field propagation dominant in wireless communications. In this work, a STAR-RIS aided near-field multiple-input multiple-output (MIMO) communication framework is proposed. A weighted sum rate maximization problem for the joint optimization of the active beamforming at the base station (BS) and the transmission/reflection-coefficients (TRCs) at the STAR-RIS is formulated. The non-convex problem is solved by a block coordinate descent (BCD)-based algorithm. In particular, under given STAR-RIS TRCs, the optimal active beamforming matrices are obtained by solving a convex quadratically constrained quadratic program. For given active beamforming matrices, two algorithms are suggested for optimizing the STAR-RIS TRCs: a penalty-based iterative (PEN) algorithm and an element-wise iterative (ELE) algorithm. The latter algorithm is conceived for STAR-RISs with a large number of elements. Numerical results illustrate that: i) near-field beamforming for STAR-RIS aided MIMO communications significantly improves the achieved weighted sum rate compared with far-field beamforming; ii) the near-field channels facilitated by the STAR-RIS provide enhanced degrees-of-freedom and accessibility for the multi-user MIMO system; and iii) the BCD-PEN algorithm achieves better performance than the BCD-ELE algorithm, while the latter has a significantly lower computational complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
375,735
1805.10970
A Generative Model For Electron Paths
Chemical reactions can be described as the stepwise redistribution of electrons in molecules. As such, reactions are often depicted using 'arrow-pushing' diagrams which show this movement as a sequence of arrows. We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data. Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of (a) being easy for chemists to interpret, (b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and (c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants. We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings. Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines. Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,812
2406.19803
Scalable and Domain-General Abstractive Proposition Segmentation
Segmenting text into fine-grained units of meaning is important to a wide range of NLP applications. The default approach of segmenting text into sentences is often insufficient, especially since sentences are usually complex enough to include multiple units of meaning that merit separate treatment in the downstream task. We focus on the task of abstractive proposition segmentation (APS): transforming text into simple, self-contained, well-formed sentences. Several recent works have demonstrated the utility of proposition segmentation with few-shot prompted LLMs for downstream tasks such as retrieval-augmented grounding and fact verification. However, this approach does not scale to large amounts of text and may not always extract all the facts from the input text. In this paper, we first introduce evaluation metrics for the task to measure several dimensions of quality. We then propose a scalable, yet accurate, proposition segmentation model. We model proposition segmentation as a supervised task by training LLMs on existing annotated datasets and show that training yields significantly improved results. We further show that by using the fine-tuned LLMs (Gemini Pro and Gemini Ultra) as teachers for annotating large amounts of multi-domain synthetic distillation data, we can train smaller student models (Gemma 1 2B and 7B) with results similar to the teacher LLMs. We then demonstrate that our technique leads to effective domain generalization, by annotating data in two domains outside the original training data and evaluating on them. Finally, as a key contribution of the paper, we share an easy-to-use API for NLP practitioners to use.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
468,571
2501.10639
Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks
Ensuring safety alignment has become a critical requirement for large language models (LLMs), particularly given their widespread deployment in real-world applications. However, LLMs remain susceptible to jailbreak attacks, which exploit system vulnerabilities to bypass safety measures and generate harmful outputs. Although numerous defense mechanisms based on adversarial training have been proposed, a persistent challenge lies in the exacerbation of over-refusal behaviors, which compromise the overall utility of the model. To address these challenges, we propose a Latent-space Adversarial Training with Post-aware Calibration (LATPC) framework. During the adversarial training phase, LATPC compares harmful and harmless instructions in the latent space and extracts safety-critical dimensions to construct refusal feature attacks, precisely simulating the attack-type-agnostic jailbreaks that require adversarial mitigation. At the inference stage, an embedding-level calibration mechanism is employed to alleviate over-refusal behaviors with minimal computational overhead. Experimental results demonstrate that, compared to various defense methods across five types of jailbreak attacks, the LATPC framework achieves a superior balance between safety and utility. Moreover, our analysis underscores the effectiveness of extracting safety-critical dimensions from the latent space for constructing robust refusal feature attacks.
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
525,598
1205.3993
Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks
Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network irrespective of the combination topology. Simulation results support the theoretical findings.
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
16,057
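For readers unfamiliar with the strategies being compared, a toy adapt-then-combine (ATC) diffusion LMS sketch with a constant step size; the data interface, step size, and combination matrix below are assumptions for illustration, not the paper's setup.

import numpy as np

def diffusion_lms_atc(U, D, A, mu=0.01):
    # U[k]: regressors of node k, shape (T, M); D[k]: desired signal, shape (T,).
    # A: N x N left-stochastic combination matrix (each column sums to one).
    N, (T, M) = len(U), U[0].shape
    W = np.zeros((N, M))
    for t in range(T):
        # Adaptation step: every node runs one LMS update on its own data.
        psi = np.array([W[k] + mu * U[k][t] * (D[k][t] - U[k][t] @ W[k])
                        for k in range(N)])
        # Combination step: each node averages its neighbors' intermediate
        # estimates, w_k = sum_l A[l, k] * psi_l, i.e., W = A.T @ psi.
        W = A.T @ psi
    return W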
0903.5066
Modified-CS: Modifying Compressive Sensing for Problems with Partially Known Support
We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The "known" part of the support, denoted T, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the "known" part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called Regularized Modified-CS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
3,436
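The modified-CS relaxation described above translates almost directly into a convex program; a minimal sketch assuming CVXPY, with T the known part of the support.

import cvxpy as cp
import numpy as np

def modified_cs(A, y, T):
    # Convex relaxation: find x satisfying the data constraint A x = y that is
    # sparsest outside the known support T, via the l1 norm on the complement.
    n = A.shape[1]
    Tc = np.setdiff1d(np.arange(n), T)  # complement of the known support
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y])
    prob.solve()
    return x.value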