Columns: id (string, 9-16 chars), title (string, 4-278 chars), abstract (string, 3-4.08k chars), eighteen boolean category flags (2 classes each), and __index_level_0__ (int64, 0-541k).

| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2407.01996 | ViG-Bias: Visually Grounded Bias Discovery and Mitigation | The proliferation of machine learning models in critical decision making processes has underscored the need for bias discovery and mitigation strategies. Identifying the reasons behind a biased system is not straightforward, since on many occasions they are associated with hidden spurious correlations which are not easy to spot. Standard approaches rely on bias audits performed by analyzing model performance in pre-defined subgroups of data samples, usually characterized by common attributes like gender or ethnicity when it comes to people, or other specific attributes defining semantically coherent groups of images. However, it is not always possible to know a-priori the specific attributes defining the failure modes of visual recognition systems. Recent approaches propose to discover these groups by leveraging large vision language models, which enable the extraction of cross-modal embeddings and the generation of textual descriptions to characterize the subgroups where a certain model is underperforming. In this work, we argue that incorporating visual explanations (e.g. heatmaps generated via GradCAM or other approaches) can boost the performance of such bias discovery and mitigation frameworks. To this end, we introduce Visually Grounded Bias Discovery and Mitigation (ViG-Bias), a simple yet effective technique which can be integrated into a variety of existing frameworks to improve both discovery and mitigation performance. Our comprehensive evaluation shows that incorporating visual explanations enhances existing techniques like DOMINO, FACTS and Bias-to-Text, across several challenging datasets, including CelebA, Waterbirds, and NICO++. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 469,543 |
| 2205.10798 | PAC-Wrap: Semi-Supervised PAC Anomaly Detection | Anomaly detection is essential for preventing hazardous outcomes for safety-critical applications like autonomous driving. Given their safety-criticality, these applications benefit from provable bounds on various errors in anomaly detection. To achieve this goal in the semi-supervised setting, we propose to provide Probably Approximately Correct (PAC) guarantees on the false negative and false positive detection rates for anomaly detection algorithms. Our method (PAC-Wrap) can wrap around virtually any existing semi-supervised and unsupervised anomaly detection method, endowing it with rigorous guarantees. Our experiments with various anomaly detectors and datasets indicate that PAC-Wrap is broadly effective. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,878 |
| 1711.00532 | SCDA: School Compatibility Decomposition Algorithm for Solving the Multi-School Bus Routing and Scheduling Problem | Safely serving the school transportation demand with the minimum number of buses is one of the highest financial goals of school transportation directors. To achieve that objective, a good and efficient way to solve the routing and scheduling problem is required. Due to the growth of computing power, the spotlight has been shed on the combined school bus routing and scheduling problem. We show that an integrated multi-school bus routing and scheduling model can be formulated with the help of trip compatibility. A novel decomposition algorithm is proposed to solve the integrated model. The merit of this integrated model and the decomposition method is that, with the consideration of trip compatibility, the interrelationship between the routing and scheduling sub-problems is not lost in the process of decomposition. Results show that the proposed decomposed problem provides solutions using the same number of buses as the integrated model in a much shorter time (as little as 0.6%), and that the proposed method can save up to 26% of the buses required by existing research. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 83,734 |
| 2412.02689 | Preliminary Investigation into Data Scaling Laws for Imitation Learning-Based End-to-End Autonomous Driving | The end-to-end autonomous driving paradigm has recently attracted lots of attention due to its scalability. However, existing methods are constrained by the limited scale of real-world data, which hinders a comprehensive exploration of the scaling laws associated with end-to-end autonomous driving. To address this issue, we collected substantial data from various driving scenarios and behaviors and conducted an extensive study on the scaling laws of existing imitation learning-based end-to-end autonomous driving paradigms. Specifically, approximately 4 million demonstrations from 23 different scenario types were gathered, amounting to over 30,000 hours of driving demonstrations. We performed open-loop evaluations and closed-loop simulation evaluations in 1,400 diverse driving demonstrations (1,300 for open-loop and 100 for closed-loop) under stringent assessment conditions. Through experimental analysis, we discovered that (1) the performance of the driving model exhibits a power-law relationship with the amount of training data; (2) a small increase in the quantity of long-tailed data can significantly improve the performance for the corresponding scenarios; (3) appropriate scaling of data enables the model to achieve combinatorial generalization in novel scenes and actions. Our results highlight the critical role of data scaling in improving the generalizability of models across diverse autonomous driving scenarios, assuring safe deployment in the real world. Project repository: https://github.com/ucaszyp/Driving-Scaling-Law | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 513,639 |
| 2010.15025 | Non-Autoregressive Transformer ASR with CTC-Enhanced Decoder Input | Non-autoregressive (NAR) transformer models have achieved significant inference speedup but at the cost of inferior accuracy compared to autoregressive (AR) models in automatic speech recognition (ASR). Most NAR transformers take a fixed-length sequence filled with MASK tokens or a redundant sequence copied from encoder states as decoder input; such inputs cannot provide effective target-side information, leading to accuracy degradation. To address this problem, we propose a CTC-enhanced NAR transformer, which generates the target sequence by refining predictions of the CTC module. Experimental results show that our method outperforms all previous NAR counterparts and achieves 50x faster decoding speed than a strong AR baseline with only 0.0 ~ 0.3 absolute CER degradation on the Aishell-1 and Aishell-2 datasets. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 203,648 |
| 2402.18302 | EchoTrack: Auditory Referring Multi-Object Tracking for Autonomous Driving | This paper introduces the task of Auditory Referring Multi-Object Tracking (AR-MOT), which dynamically tracks specific objects in a video sequence based on audio expressions and appears as a challenging problem in autonomous driving. Due to the lack of semantic modeling capacity in audio and video, existing works have mainly focused on text-based multi-object tracking, which often comes at the cost of tracking quality, interaction efficiency, and even the safety of assistance systems, limiting the application of such methods in autonomous driving. In this paper, we delve into the problem of AR-MOT from the perspective of audio-video fusion and audio-video tracking. We put forward EchoTrack, an end-to-end AR-MOT framework with dual-stream vision transformers. The dual streams are intertwined with our Bidirectional Frequency-domain Cross-attention Fusion Module (Bi-FCFM), which bidirectionally fuses audio and video features from both frequency- and spatiotemporal domains. Moreover, we propose the Audio-visual Contrastive Tracking Learning (ACTL) regime to extract homogeneous semantic features between expressions and visual objects by learning homogeneous features between different audio and video objects effectively. Aside from the architectural design, we establish the first set of large-scale AR-MOT benchmarks, including Echo-KITTI, Echo-KITTI+, and Echo-BDD. Extensive experiments on the established benchmarks demonstrate the effectiveness of the proposed EchoTrack and its components. The source code and datasets are available at https://github.com/lab206/EchoTrack. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 433,377 |
| 2110.14842 | Towards the ultimate limits of quantum channel discrimination | This note studies the difficulty of discriminating quantum channels under operational regimes. First, we make a conjecture on the exponentially strong converse of quantum channel hypothesis testing under coherent strategies, meaning that any strategy that makes the Type II error decay with an exponent larger than the regularized channel relative entropy will unavoidably result in the Type I error converging to one exponentially fast in the asymptotic limit. This conjecture would imply the desirable quantum channel Stein's Lemma and the continuity of the regularized (amortized) Sandwiched R\'{e}nyi channel divergence at $\alpha=1$. We also remark that there was a gap in the proof of the above conjecture in our previous arXiv version; the gap exists because a lemma essentially taken from [Brandao and Plenio, 2010] was found to be false. Second, we develop a framework to show the interplay between the strategies of channel discrimination, the operational regimes, and variants of channel divergences. This framework systematically underlies the operational meaning of quantum channel divergences in quantum channel discrimination. Our work makes an attempt towards understanding the ultimate limit of quantum channel discrimination, as well as its connection to quantum channel divergences in the asymptotic regime. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 263,660 |
| 2501.04517 | Histogram-Equalized Quantization for logic-gated Residual Neural Networks | Adjusting the quantization according to the data or to the model loss seems mandatory to enable a high accuracy in the context of quantized neural networks. This work presents Histogram-Equalized Quantization (HEQ), an adaptive framework for linear symmetric quantization. HEQ automatically adapts the quantization thresholds using a unique step size optimization. We empirically show that HEQ achieves state-of-the-art performance on CIFAR-10. Experiments on the STL-10 dataset even show that HEQ enables proper training of our proposed logic-gated (OR, MUX) residual networks with a higher accuracy at a lower hardware complexity than previous work. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 523,251 |
| 2209.12244 | Multimodal Channel-Mixing: Channel and Spatial Masked AutoEncoder on Facial Action Unit Detection | Recent studies have focused on utilizing multi-modal data to develop robust models for facial Action Unit (AU) detection. However, the heterogeneity of multi-modal data poses challenges in learning effective representations. One such challenge is extracting relevant features from multiple modalities using a single feature extractor. Moreover, previous studies have not fully explored the potential of multi-modal fusion strategies. In contrast to the extensive work on late fusion, there are limited investigations on early fusion for channel information exploration. This paper presents a novel multi-modal reconstruction network, named Multimodal Channel-Mixing (MCM), as a pre-trained model to learn robust representation for facilitating multi-modal fusion. The approach follows an early fusion setup, integrating a Channel-Mixing module, where two out of five channels are randomly dropped. The dropped channels are then reconstructed from the remaining channels using a masked autoencoder. This module not only reduces channel redundancy, but also facilitates multi-modal learning and reconstruction capabilities, resulting in robust feature learning. The encoder is fine-tuned on a downstream task of automatic facial action unit detection. Pre-training experiments were conducted on BP4D+, followed by fine-tuning on BP4D and DISFA to assess the effectiveness and robustness of the proposed framework. The results demonstrate that our method meets and surpasses the performance of state-of-the-art baseline methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 319,465 |
| 2409.18257 | Developing a Dual-Stage Vision Transformer Model for Lung Disease Classification | Lung diseases have become a prevalent problem throughout the United States, affecting over 34 million people. Accurate and timely diagnosis of the different types of lung diseases is critical, and Artificial Intelligence (AI) methods could speed up these processes. A dual-stage vision transformer is built in this research by integrating a Vision Transformer (ViT) and a Swin Transformer to classify 14 different lung diseases from X-ray scans of patients with these diseases. The proposed model achieved an accuracy of 92.06% when making predictions on an unseen testing subset of the dataset after data preprocessing and training the neural network. The model showed promise for accurately classifying lung diseases and diagnosing patients who suffer from these harmful diseases. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 492,161 |
| 2107.02070 | Antithetic Riemannian Manifold And Quantum-Inspired Hamiltonian Monte Carlo | Markov Chain Monte Carlo inference of target posterior distributions in machine learning is predominately conducted via Hamiltonian Monte Carlo and its variants. This is due to the ability of Hamiltonian Monte Carlo based samplers to suppress random-walk behaviour. As with other Markov Chain Monte Carlo methods, Hamiltonian Monte Carlo produces auto-correlated samples, which result in high variance in the estimators and low effective sample size rates in the generated samples. Adding antithetic sampling to Hamiltonian Monte Carlo has been previously shown to produce higher effective sample rates compared to vanilla Hamiltonian Monte Carlo. In this paper, we present new algorithms which are antithetic versions of Riemannian Manifold Hamiltonian Monte Carlo and Quantum-Inspired Hamiltonian Monte Carlo. The Riemannian Manifold Hamiltonian Monte Carlo algorithm improves on Hamiltonian Monte Carlo by taking into account the local geometry of the target, which is beneficial for target densities that may exhibit strong correlations in the parameters. Quantum-Inspired Hamiltonian Monte Carlo is based on quantum particles that can have random mass. Quantum-Inspired Hamiltonian Monte Carlo uses a random mass matrix, which results in better sampling than Hamiltonian Monte Carlo on spiky and multi-modal distributions such as jump diffusion processes. The analysis is performed on a jump diffusion process using real-world financial market data, as well as on real-world benchmark classification tasks using Bayesian logistic regression. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,691 |
| 0903.1139 | The Complexity of Reasoning with Global Constraints | Constraint propagation is one of the techniques central to the success of constraint programming. To reduce search, fast algorithms associated with each constraint prune the domains of variables. With global (or non-binary) constraints, the cost of such propagation may be much greater than the quadratic cost for binary constraints. We therefore study the computational complexity of reasoning with global constraints. We first characterise a number of important questions related to constraint propagation. We show that such questions are intractable in general, and identify dependencies between the tractability and intractability of the different questions. We then demonstrate how the tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when constraints can be safely generalized, when decomposing constraints will reduce the amount of pruning, and when combining constraints is tractable. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 3,297 |
| 2104.09556 | Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network | Recent development of Under-Display Camera (UDC) systems provides a true bezel-less and notch-free viewing experience on smartphones (and TV, laptops, tablets), while allowing images to be captured from the selfie camera embedded underneath. In a typical UDC system, the microstructure of the semi-transparent organic light-emitting diode (OLED) pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation. Oftentimes, noise, flare, haze, and blur can be observed in UDC images. In this work, we aim to analyze and tackle the aforementioned degradation problems. We define a physics-based image formation model to better understand the degradation. In addition, we utilize one of the world's first commodity UDC smartphone prototypes to measure the real-world Point Spread Function (PSF) of the UDC system, and provide a model-based data synthesis pipeline to generate realistically degraded images. We specially design a new domain knowledge-enabled Dynamic Skip Connection Network (DISCNet) to restore the UDC images. We demonstrate the effectiveness of our method through extensive experiments on both synthetic and real UDC data. Our physics-based image formation model and proposed DISCNet can provide foundations for further exploration in UDC image restoration, and even for general diffraction artifact removal in a broader sense. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,270 |
| 2303.14628 | Multi-Frame Self-Supervised Depth Estimation with Multi-Scale Feature Fusion in Dynamic Scenes | Multi-frame methods improve monocular depth estimation over single-frame approaches by aggregating spatial-temporal information via feature matching. However, the spatial-temporal feature leads to accuracy degradation in dynamic scenes. To enhance the performance, recent methods tend to propose complex architectures for feature matching and dynamic scenes. In this paper, we show that a simple learning framework, together with designed feature augmentation, leads to superior performance. (1) A novel dynamic objects detecting method with geometry explainability is proposed. The detected dynamic objects are excluded during training, which guarantees the static environment assumption and relieves the accuracy degradation problem of the multi-frame depth estimation. (2) Multi-scale feature fusion is proposed for feature matching in the multi-frame depth network, which improves feature matching, especially between frames with large camera motion. (3) The robust knowledge distillation with a robust teacher network and reliability guarantee is proposed, which improves the multi-frame depth estimation without increasing computational complexity at test time. The experiments show that our proposed methods achieve substantial performance improvements on multi-frame depth estimation. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 354,178 |
| 2103.05150 | Orientation to Pose: Continuum Robots Shape Sensing Based on Piecewise Polynomial Curvature Model | Continuum robots are typically slender and flexible, with infinite degrees of freedom in theory, which poses a challenge for their control and application. The shape sensing of continuum robots is vital to realise accurate control. This letter proposes a novel, general real-time shape sensing framework for continuum robots based on the piecewise polynomial curvature (PPC) kinematics model. We illustrate the coupling between orientation and position at any given location of the continuum robot. Further, this coupling relation can be bridged by the PPC kinematics. Therefore, we propose to estimate the shape of continuum robots through orientation estimation, using off-the-shelf orientation sensors, e.g., IMUs, mounted at certain locations. The approach provides a valuable framework for the shape sensing of continuum robots, offering universality, accuracy and convenience. The accuracy of the general approach is verified in experiments on multiple types of physical prototypes. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 223,882 |
| 1003.0445 | On The Design of Signature Codes in Decentralized Wireless Networks | This paper addresses a unified approach towards communication in decentralized wireless networks of separate transmitter-receiver pairs. In general, users are unaware of each other's codebooks and there is no central controller to assign the resources in the network to the users. A randomized signaling scheme is introduced in which each user locally spreads its Gaussian signal along a randomly generated spreading code comprised of a sequence of nonzero elements over a certain alphabet. Along with spreading, each transmitter also masks its output independently from transmission to transmission. Using a conditional version of entropy power inequality and a key lemma on the differential entropy of mixed Gaussian random vectors, achievable rates are developed for the users. It is seen that as the number of users increases, the achievable Sum Multiplexing Gain of the network approaches that of a centralized orthogonal scheme where multiuser interference is completely avoided. An interesting observation is that in general the elements of a spreading code are not equiprobable over the underlying alphabet. Finally, using the recently developed extremal inequality of Liu-Viswanath, we present an optimality result showing that transmission of Gaussian signals via spreading and masking yields higher achievable rates than the maximum achievable rate attained by applying masking only. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,819 |
| 2110.11891 | On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning | Machine unlearning, i.e. having a model forget about some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten. In the context of deep learning, approaches for machine unlearning are broadly categorized into two classes: exact unlearning methods, where an entity has formally removed the data point's impact on the model by retraining the model from scratch, and approximate unlearning, where an entity approximates the model parameters one would obtain by exact unlearning to save on compute costs. In this paper, we first show that the definition that underlies approximate unlearning, which seeks to prove the approximately unlearned model is close to an exactly retrained model, is incorrect because one can obtain the same model using different datasets. Thus one could unlearn without modifying the model at all. We then turn to exact unlearning approaches and ask how to verify their claims of unlearning. Our results show that even for a given training trajectory one cannot formally prove the absence of certain data points used during training. We thus conclude that unlearning is only well-defined at the algorithmic level, where an entity's only possible auditable claim to unlearning is that they used a particular algorithm designed to allow for external scrutiny during an audit. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 262,645 |
| 2402.03774 | Learning a Decision Tree Algorithm with Transformers | Decision trees are renowned for their ability to achieve high predictive performance while remaining interpretable, especially on tabular data. Traditionally, they are constructed through recursive algorithms, where they partition the data at every node in a tree. However, identifying a good partition is challenging, as decision trees optimized for local segments may not yield global generalization. To address this, we introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees. Specifically, we fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance. This training enables MetaTree to emulate these algorithms and intelligently adapt its strategy according to the context, thereby achieving superior generalization performance. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 427,186 |
| 0708.3920 | Kinematic analysis of the 3-RPR parallel manipulator | The aim of this paper is the kinematic study of a 3-RPR planar parallel manipulator where the three fixed revolute joints are actuated. The direct and inverse kinematic problems as well as the singular configurations are characterized. In parallel singular configurations, the motion produced by the mobile platform can be compared to the Reuleaux straight-line mechanism. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 609 |
| 1205.4875 | A New Approach Towards the Golomb-Welch Conjecture | The Golomb-Welch conjecture deals with the existence of perfect $e$-error correcting Lee codes of word length $n$, $PL(n,e)$ codes. Although there are many papers on the topic, the conjecture is still far from being solved. In this paper we initiate the study of an invariant connected to abelian groups that enables us to reformulate the conjecture, and then to prove the non-existence of linear PL(n,2) codes for $n\leq 12$. Using this new approach we also construct the first quasi-perfect Lee codes for dimension $n=3$, and show that, for fixed $n$, there are only finitely many such codes over $Z^n$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 16,125 |
| 2308.03908 | ViLP: Knowledge Exploration using Vision, Language, and Pose Embeddings for Video Action Recognition | Video Action Recognition (VAR) is a challenging task due to its inherent complexities. Though different approaches have been explored in the literature, designing a unified framework to recognize a large number of human actions is still a challenging problem. Recently, Multi-Modal Learning (MML) has demonstrated promising results in this domain. In the literature, the 2D skeleton or pose modality has often been used for this task, either independently or in conjunction with the visual information (RGB modality) present in videos. However, the combination of pose, visual information, and text attributes has not been explored yet, though text and pose attributes independently have been proven to be effective in numerous computer vision tasks. In this paper, we present the first pose-augmented vision-language model (VLM) for VAR. Notably, our scheme achieves an accuracy of 92.81% and 73.02% on two popular human video action recognition benchmark datasets, UCF-101 and HMDB-51, respectively, even without any video data pre-training, and an accuracy of 96.11% and 75.75% after Kinetics pre-training. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 384,211 |
| 2402.03220 | The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents | We investigate the training dynamics of two-layer neural networks when learning multi-index target functions. We focus on multi-pass gradient descent (GD) that reuses the batches multiple times and show that it significantly changes the conclusion about which functions are learnable compared to single-pass gradient descent. In particular, multi-pass GD with finite stepsize is found to overcome the limitations of gradient flow and single-pass GD given by the information exponent (Ben Arous et al., 2021) and leap exponent (Abbe et al., 2023) of the target function. We show that upon re-using batches, the network achieves in just two time steps an overlap with the target subspace even for functions not satisfying the staircase property (Abbe et al., 2021). We characterize the (broad) class of functions efficiently learned in finite time. The proof of our results is based on the analysis of the Dynamical Mean-Field Theory (DMFT). We further provide a closed-form description of the dynamical process of the low-dimensional projections of the weights, and numerical experiments illustrating the theory. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 426,908 |
| 2001.10516 | Tri-graph Information Propagation for Polypharmacy Side Effect Prediction | The use of drug combinations often leads to polypharmacy side effects (POSE). A recent method formulates POSE prediction as a link prediction problem on a graph of drugs and proteins, and solves it with Graph Convolutional Networks (GCNs). However, due to the complex relationships in POSE, this method has high computational cost and memory demand. This paper proposes a flexible Tri-graph Information Propagation (TIP) model that operates on three subgraphs to learn representations progressively by propagation from protein-protein graph to drug-drug graph via protein-drug graph. Experiments show that TIP improves accuracy by 7%+, time efficiency by 83$\times$, and space efficiency by 3$\times$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 161,845 |
| 2411.15100 | XGrammar: Flexible and Efficient Structured Generation Engine for Large Language Models | The applications of LLM Agents are becoming increasingly complex and diverse, leading to a high demand for structured outputs that can be parsed into code, structured function calls, and embodied agent commands. These developments bring significant demands for structured generation in LLM inference. Context-free grammar is a flexible approach to enable structured generation via constrained decoding. However, executing context-free grammar requires going through several stack states over all tokens in the vocabulary during runtime, bringing non-negligible overhead for structured generation. In this paper, we propose XGrammar, a flexible and efficient structured generation engine for large language models. XGrammar accelerates context-free grammar execution by dividing the vocabulary into context-independent tokens that can be prechecked and context-dependent tokens that need to be interpreted during runtime. We further build transformations to expand the grammar context and reduce the number of context-independent tokens. Additionally, we build an efficient persistent stack to accelerate the context-dependent token checks. Finally, we co-design the grammar engine with the LLM inference engine to overlap grammar computation with GPU execution. Evaluation results show that XGrammar can achieve up to 100x speedup over existing solutions. Combined with an LLM inference engine, it can achieve near-zero-overhead structured generation in end-to-end low-latency LLM serving. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 510,437 |
| 2104.13293 | Evidential segmentation of 3D PET/CT images | PET and CT are two modalities widely used in medical image analysis. Accurately detecting and segmenting lymphomas from these two imaging modalities are critical tasks for cancer staging and radiotherapy planning. However, this task is still challenging due to the complexity of PET/CT images and the computational cost of processing 3D data. In this paper, a segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images. The architecture is composed of a feature extraction module and an evidential segmentation (ES) module. The ES module outputs not only segmentation results (binary maps indicating the presence or absence of lymphoma in each voxel) but also uncertainty maps quantifying the classification uncertainty. The whole model is optimized by minimizing Dice and uncertainty loss functions to increase segmentation accuracy. The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma. Quantitative and qualitative results show that our method outperforms the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 232,461 |
| 1312.6130 | A Functional View of Strong Negation in Answer Set Programming | The distinction between strong negation and default negation has been useful in answer set programming. We present an alternative account of strong negation, which lets us view strong negation in terms of the functional stable model semantics by Bartholomew and Lee. More specifically, we show that, under complete interpretations, minimizing both positive and negative literals in the traditional answer set semantics is essentially the same as ensuring the uniqueness of Boolean function values under the functional stable model semantics. The same account lets us view Lifschitz's two-valued logic programs as a special case of the functional stable model semantics. In addition, we show how non-Boolean intensional functions can be eliminated in favor of Boolean intensional functions, and furthermore can be represented using strong negation, which provides a way to compute the functional stable model semantics using existing ASP solvers. We also note that similar results hold with the functional stable model semantics by Cabalar. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,320 |
| 1810.06765 | A survey of automatic de-identification of longitudinal clinical narratives | Use of medical data, also known as electronic health records, in research helps develop and advance medical science. However, protecting patient confidentiality and identity while using medical data for analysis is crucial. Medical data can be in the form of tabular structures (i.e. tables), free-form narratives, and images. This study focuses on medical data in the form of free-form longitudinal text. De-identification of electronic health records provides the opportunity to use such data for research without affecting patient privacy, and avoids the need for individual patient consent. In recent years there has been increasing interest in developing an accurate, robust and adaptable automatic de-identification system for electronic health records. This is mainly due to the dilemma between the availability of an abundance of health data, and the inability to use such data in research due to legal and ethical restrictions. De-identification tracks in competitions such as the 2014 i2b2 UTHealth and the 2016 CEGS N-GRID shared tasks have provided a great platform to advance this area. The primary reasons for this include the open source nature of the dataset and the fact that raw psychiatric data were used for the 2016 competition. This study focuses on noticeable trend changes in the techniques used in the development of automatic de-identification for longitudinal clinical narratives. More specifically, the shift from using conditional random fields (CRF) based systems only or rules (regular expressions, dictionary or combinations) based systems only, to hybrid models (combining CRF and rules), and more recently to deep learning based systems. We review the literature and results that arose from the 2014 and the 2016 competitions and discuss the outcomes of these systems. We also provide a list of research questions that emerged from this survey. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 110,499 |
| 2110.12798 | Variational Gaussian Processes: A Functional Analysis View | Variational Gaussian process (GP) approximations have become a standard tool in fast GP inference. This technique requires a user to select variational features to increase efficiency. So far the common choices in the literature are disparate and lacking generality. We propose to view the GP as lying in a Banach space which then facilitates a unified perspective. This is used to understand the relationship between existing features and to draw a connection between kernel ridge regression and variational GP approximations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,972 |
| 2205.11672 | Why does Throwing Away Data Improve Worst-Group Error? | When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve the best results. They throw away examples until the classes or groups are balanced in size, and then perform empirical risk minimization on the reduced training set. This opposes common wisdom in learning theory, where the expected error is supposed to decrease as the dataset grows in size. In this work, we leverage extreme value theory to address this apparent contradiction. Our results show that the tails of the data distribution play an important role in determining the worst-group-accuracy of linear classifiers. When learning on data with heavy tails, throwing away data restores the geometric symmetry of the resulting classifier, and therefore improves its worst-group generalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,227 |
| 2407.09777 | Graph Transformers: A Survey | Graph transformers are a recent advancement in machine learning, offering a new class of neural network models for graph-structured data. The synergy between transformers and graph learning demonstrates strong performance and versatility across various graph-related tasks. This survey provides an in-depth review of recent progress and challenges in graph transformer research. We begin with foundational concepts of graphs and transformers. We then explore design perspectives of graph transformers, focusing on how they integrate graph inductive biases and graph attention mechanisms into the transformer architecture. Furthermore, we propose a taxonomy classifying graph transformers based on depth, scalability, and pre-training strategies, summarizing key principles for effective development of graph transformer models. Beyond technical analysis, we discuss the applications of graph transformer models for node-level, edge-level, and graph-level tasks, exploring their potential in other application scenarios as well. Finally, we identify remaining challenges in the field, such as scalability and efficiency, generalization and robustness, interpretability and explainability, dynamic and complex graphs, as well as data quality and diversity, charting future directions for graph transformer research. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 472,711 |
| 1407.2812 | Rate-Optimal Detection of Very Short Signal Segments | Motivated by a range of applications in engineering and genomics, we consider in this paper detection of very short signal segments in three settings: signals with known shape, arbitrary signals, and smooth signals. Optimal rates of detection are established for the three cases and rate-optimal detectors are constructed. The detectors are easily implementable and are based on scanning with linear and quadratic statistics. Our analysis reveals both similarities and differences in the strategy and fundamental difficulty of detection among these three settings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,566 |
| 2109.02517 | Error Controlled Actor-Critic | The approximation error of the value function inevitably causes an overestimation phenomenon and has a negative impact on the convergence of the algorithms. To mitigate the negative effects of the approximation error, we propose Error Controlled Actor-Critic, which confines the approximation error in the value function. We present an analysis of how the approximation error can hinder the optimization process of actor-critic methods. Then, we derive an upper bound on the approximation error of the Q-function approximator and find that the error can be lowered by restricting the KL-divergence between every two consecutive policies when training the policy. The results of experiments on a range of continuous control tasks demonstrate that the proposed actor-critic algorithm noticeably reduces the approximation error and significantly outperforms other model-free RL algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 253,776 |
| 2206.08724 | Crowdsourcing Relative Rankings of Multi-Word Expressions: Experts versus Non-Experts | In this study we investigate to which degree experts and non-experts agree on questions of difficulty in a crowdsourcing experiment. We ask non-experts (second language learners of Swedish) and two groups of experts (teachers of Swedish as a second/foreign language and CEFR experts) to rank multi-word expressions in a crowdsourcing experiment. We find that the resulting rankings by all the three tested groups correlate to a very high degree, which suggests that judgments produced in a comparative setting are not influenced by professional insights into Swedish as a second language. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 303,266 |
| 2407.15472 | Learning deep illumination-robust features from multispectral filter array images | Multispectral (MS) snapshot cameras equipped with a MS filter array (MSFA), capture multiple spectral bands in a single shot, resulting in a raw mosaic image where each pixel holds only one channel value. The fully-defined MS image is estimated from the raw one through \textit{demosaicing}, which inevitably introduces spatio-spectral artifacts. Moreover, training on fully-defined MS images can be computationally intensive, particularly with deep neural networks (DNNs), and may result in features lacking discrimination power due to suboptimal learning of spatio-spectral interactions. Furthermore, outdoor MS image acquisition occurs under varying lighting conditions, leading to illumination-dependent features. This paper presents an original approach to learn discriminant and illumination-robust features directly from raw images. It involves: \textit{raw spectral constancy} to mitigate the impact of illumination, \textit{MSFA-preserving} transformations suited for raw image augmentation to train DNNs on diverse raw textures, and \textit{raw-mixing} to capture discriminant spatio-spectral interactions in raw images. Experiments on MS image classification show that our approach outperforms both handcrafted and recent deep learning-based methods, while also requiring significantly less computational effort. The source code is available at https://github.com/AnisAmziane/RawTexture. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 475,199 |
| 2103.04469 | Local word statistics affect reading times independently of surprisal | Surprisal theory has provided a unifying framework for understanding many phenomena in sentence processing (Hale, 2001; Levy, 2008a), positing that a word's conditional probability given all prior context fully determines processing difficulty. Problematically for this claim, one local statistic, word frequency, has also been shown to affect processing, even when conditional probability given context is held constant. Here, we ask whether other local statistics have a role in processing, or whether word frequency is a special case. We present the first clear evidence that more complex local statistics, word bigram and trigram probability, also affect processing independently of surprisal. These findings suggest a significant and independent role of local statistics in processing. Further, it motivates research into new generalizations of surprisal that can also explain why local statistical information should have an outsized effect. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 223,644 |
| 2309.06188 | Computer Vision Pipeline for Automated Antarctic Krill Analysis | British Antarctic Survey (BAS) researchers launch annual expeditions to the Antarctic in order to estimate Antarctic Krill biomass and assess the change from previous years. These comparisons provide insight into the effects of the current environment on this key component of the marine food chain. In this work we have developed tools for automating the data collection and analysis process, using web-based image annotation tools and deep learning image classification and regression models. We achieve highly accurate krill instance segmentation results with an average 77.28% AP score, as well as separate maturity stage and length estimation of krill specimens with 62.99% accuracy and a 1.98mm length error respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 391,341 |
| 2409.08202 | What Makes a Maze Look Like a Maze? | A unique aspect of human visual understanding is the ability to flexibly interpret abstract concepts: acquiring lifted rules explaining what they symbolize, grounding them across familiar and unfamiliar contexts, and making predictions or reasoning about them. While off-the-shelf vision-language models excel at making literal interpretations of images (e.g., recognizing object categories such as tree branches), they still struggle to make sense of such visual abstractions (e.g., how an arrangement of tree branches may form the walls of a maze). To address this challenge, we introduce Deep Schema Grounding (DSG), a framework that leverages explicit structured representations of visual abstractions for grounding and reasoning. At the core of DSG are schemas--dependency graph descriptions of abstract concepts that decompose them into more primitive-level symbols. DSG uses large language models to extract schemas, then hierarchically grounds concrete to abstract components of the schema onto images with vision-language models. The grounded schema is used to augment visual abstraction understanding. We systematically evaluate DSG and different methods in reasoning on our new Visual Abstractions Dataset, which consists of diverse, real-world images of abstract concepts and corresponding question-answer pairs labeled by humans. We show that DSG significantly improves the abstract visual reasoning performance of vision-language models, and is a step toward human-aligned understanding of visual abstractions. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 487,807 |
| 2112.08789 | Harnessing Cross-lingual Features to Improve Cognate Detection for Low-resource Languages | Cognates are variants of the same lexical form across different languages; for example 'fonema' in Spanish and 'phoneme' in English are cognates, both of which mean 'a unit of sound'. The task of automatic detection of cognates among any two languages can help downstream NLP tasks such as Cross-lingual Information Retrieval, Computational Phylogenetics, and Machine Translation. In this paper, we demonstrate the use of cross-lingual word embeddings for detecting cognates among fourteen Indian Languages. Our approach introduces the use of context from a knowledge graph to generate improved feature representations for cognate detection. We then evaluate the impact of our cognate detection mechanism on neural machine translation (NMT), as a downstream task. We evaluate our methods to detect cognates on a challenging dataset of twelve Indian languages, namely, Sanskrit, Hindi, Assamese, Oriya, Kannada, Gujarati, Tamil, Telugu, Punjabi, Bengali, Marathi, and Malayalam. Additionally, we create evaluation datasets for two more Indian languages, Konkani and Nepali. We observe an improvement of up to 18 percentage points in F-score for cognate detection. Furthermore, we observe that cognates extracted using our method help improve NMT quality by up to 2.76 BLEU. We also release our code, newly constructed datasets and cross-lingual models publicly. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 271,938 |
| 2207.05321 | Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures | Deep neural networks have been found vulnerable to adversarial attacks, potentially raising concerns in security-sensitive contexts. To address this problem, recent research has investigated the adversarial robustness of deep neural networks from the architectural point of view. However, searching for architectures of deep neural networks is computationally expensive, particularly when coupled with the adversarial training process. To meet the above challenge, this paper proposes a bi-fidelity multiobjective neural architecture search approach. First, we formulate the NAS problem for enhancing the adversarial robustness of deep neural networks as a multiobjective optimization problem. Specifically, in addition to a low-fidelity performance predictor as the first objective, we leverage an auxiliary objective whose value is the output of a surrogate model trained with high-fidelity evaluations. Second, we reduce the computational cost by combining three performance estimation methods, i.e., parameter sharing, low-fidelity evaluation, and surrogate-based predictor. The effectiveness of the proposed approach is confirmed by extensive experiments conducted on the CIFAR-10, CIFAR-100 and SVHN datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 307,499 |
| 2201.10095 | RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation | We propose RecShard, a fine-grained embedding table (EMB) partitioning and placement technique for deep learning recommendation models (DLRMs). RecShard is designed based on two key observations. First, not all EMBs are equal, nor all rows within an EMB are equal in terms of access patterns. EMBs exhibit distinct memory characteristics, providing performance optimization opportunities for intelligent EMB partitioning and placement across a tiered memory hierarchy. Second, in modern DLRMs, EMBs function as hash tables. As a result, EMBs display interesting phenomena, such as the birthday paradox, leaving EMBs severely under-utilized. RecShard determines an optimal EMB sharding strategy for a set of EMBs based on training data distributions and model characteristics, along with the bandwidth characteristics of the underlying tiered memory hierarchy. In doing so, RecShard achieves over 6 times higher EMB training throughput on average for capacity constrained DLRMs. The throughput increase comes from improved EMB load balance by over 12 times and from the reduced access to the slower memory by over 87 times. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 276,876 |
| 2107.11020 | Emotion analysis and detection during COVID-19 | Crises such as natural disasters, global pandemics, and social unrest continuously threaten our world and emotionally affect millions of people worldwide in distinct ways. Understanding emotions that people express during large-scale crises helps inform policy makers and first responders about the emotional states of the population as well as provide emotional support to those who need such support. We present CovidEmo, ~3K English tweets labeled with emotions and temporally distributed across 18 months. Our analyses reveal the emotional toll caused by COVID-19, and changes of the social narrative and associated emotions over time. Motivated by the time-sensitive nature of crises and the cost of large-scale annotation efforts, we examine how well large pre-trained language models generalize across domains and timeline in the task of perceived emotion prediction in the context of COVID-19. Our analyses suggest that cross-domain information transfers occur, yet there are still significant gaps. We propose semi-supervised learning as a way to bridge this gap, obtaining significantly better performance using unlabeled data from the target domain. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 247,474 |
| 2405.17955 | Efficient Prior Calibration From Indirect Data | Bayesian inversion is central to the quantification of uncertainty within problems arising from numerous applications in science and engineering. To formulate the approach, four ingredients are required: a forward model mapping the unknown parameter to an element of a solution space, often the solution space for a differential equation; an observation operator mapping an element of the solution space to the data space; a noise model describing how noise pollutes the observations; and a prior model describing knowledge about the unknown parameter before the data is acquired. This paper is concerned with learning the prior model from data; in particular, learning the prior from multiple realizations of indirect data obtained through the noisy observation process. The prior is represented, using a generative model, as the pushforward of a Gaussian in a latent space; the pushforward map is learned by minimizing an appropriate loss function. A metric that is well-defined under empirical approximation is used to define the loss function for the pushforward map to make an implementable methodology. Furthermore, an efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map, using a bilevel optimization formulation of the problem; this use of neural operator approximation has the potential to make prior learning from indirect data more computationally efficient, especially when the observation process is expensive, non-smooth or not known. The ideas are illustrated with the Darcy flow inverse problem of finding permeability from piezometric head measurements. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 458,206 |
2012.13504
|
LMMSE Processing for Cell-free Massive MIMO with Radio Stripes and MRC
Fronthaul
|
Cell-free massive MIMO provides ubiquitous connectivity for multiple users, and implementation using radio stripes is very efficient. Compared with collocated massive MIMO, the major cost includes fronthaul overheads and AP hardware. Maximum ratio combination (MRC) achieves a low fronthaul loading and a low-cost AP, but its performance is poor. This letter proposes to implement quasi-LMMSE (Q-LMMSE) processing using an MRC fronthaul design. Q-LMMSE is derived from a standard LMMSE, which gains interference information from the MRC signal via singular value decomposition. Simulation results show that the proposed Q-LMMSE increases the spectral efficiency several-fold using the same MRC fronthaul.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 213,236
|
1912.02914
|
RED-NET: A Recursive Encoder-Decoder Network for Edge Detection
|
In this paper, we introduce RED-NET: A Recursive Encoder-Decoder Network with Skip-Connections for edge detection in natural images. The proposed network is a novel integration of a Recursive Neural Network with an Encoder-Decoder architecture. The recursive network enables us to increase the network depth without increasing the number of parameters. Adding skip-connections between encoder and decoder helps the gradients reach all the layers of a network more easily and allows information related to finer details in the early stage of the encoder to be fully utilized in the decoder. Based on our extensive experiments on popular boundary detection datasets including BSDS500 \cite{Arbelaez2011}, NYUD \cite{Silberman2012} and Pascal Context \cite{Mottaghi2014}, RED-NET significantly advances the state-of-the-art on edge detection regarding standard evaluation metrics such as Optimal Dataset Scale (ODS) F-measure, Optimal Image Scale (OIS) F-measure, and Average Precision (AP).
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 156,464
|
2401.03085
|
Consensus-Threshold Criterion for Offline Signature Verification using
Convolutional Neural Network Learned Representations
|
A genuine signer's signature is naturally unstable even at short time-intervals, whereas expert forgers always try to perfectly mimic a genuine signer's signature. This presents a challenge which puts a genuine signer at risk of being denied access, while a forger is granted access. The implication is a high false acceptance rate (FAR), the percentage of forged signatures classified as belonging to a genuine class. Existing work has only scratched the surface of signature verification because the misclassification error remains high. In this paper, a consensus-threshold distance-based classifier criterion is proposed for offline writer-dependent signature verification. Using features extracted from SigNet and SigNet-F deep convolutional neural network models, the proposed classifier minimizes FAR. This is demonstrated via experiments on four datasets: the GPDS-300, MCYT, CEDAR and Brazilian PUC-PR datasets. On GPDS-300, the consensus-threshold classifier improves the state-of-the-art performance by achieving a 1.27% FAR compared to the 8.73% and 17.31% recorded in the literature. This performance is consistent across the other datasets and guarantees that the risk of imposters gaining access to sensitive documents or transactions is minimal.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 419,959
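A minimal sketch of a consensus-threshold, distance-based decision rule of the kind the abstract above describes; the Euclidean metric, the single global threshold, the 0.5 quorum, and the random stand-in features are all illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def verify(query: np.ndarray, references: np.ndarray,
           threshold: float, quorum: float = 0.5) -> bool:
    """Accept the query signature if enough reference signatures agree."""
    dists = np.linalg.norm(references - query, axis=1)
    votes = np.mean(dists <= threshold)   # fraction of references that accept
    return bool(votes >= quorum)          # consensus decision

rng = np.random.default_rng(0)
genuine_refs = rng.normal(0.0, 1.0, size=(10, 64))   # stand-in for SigNet features
query_genuine = rng.normal(0.0, 1.0, size=64)
query_forgery = rng.normal(3.0, 1.0, size=64)
thr = 13.0  # would be calibrated on validation data to minimize FAR
print(verify(query_genuine, genuine_refs, thr))  # expected: True
print(verify(query_forgery, genuine_refs, thr))  # expected: False
```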
|
2411.14639
|
Differentially Private Adaptation of Diffusion Models via Noisy
Aggregated Embeddings
|
We introduce novel methods for adapting diffusion models under differential privacy (DP) constraints, enabling privacy-preserving style and content transfer without fine-tuning. Traditional approaches to private adaptation, such as DP-SGD, incur significant computational overhead and degrade model performance when applied to large, complex models. Our approach instead leverages embedding-based techniques: Universal Guidance and Textual Inversion (TI), adapted with differentially private mechanisms. We apply these methods to Stable Diffusion for style adaptation using two private datasets: a collection of artworks by a single artist and pictograms from the Paris 2024 Olympics. Experimental results show that the TI-based adaptation achieves superior fidelity in style transfer, even under strong privacy guarantees, while both methods maintain high privacy resilience by employing calibrated noise and subsampling strategies. Our findings demonstrate a feasible and efficient pathway for privacy-preserving diffusion model adaptation, balancing data protection with the fidelity of generated images, and offer insights into embedding-driven methods for DP in generative AI applications.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| true
| false
| false
| false
| false
| false
| 510,257
|
2203.04111
|
Plumeria at SemEval-2022 Task 6: Robust Approaches for Sarcasm Detection
for English and Arabic Using Transformers and Data Augmentation
|
This paper describes our submission to SemEval-2022 Task 6 on sarcasm detection and its five subtasks for English and Arabic. Sarcasm conveys a meaning which contradicts the literal meaning, and it is mainly found on social networks. It plays a significant role in understanding the intention of the user. For detecting sarcasm, we used transformer-based deep learning techniques due to their success in the field of Natural Language Processing (NLP) without the need for feature engineering. The datasets were taken from tweets. We created new datasets by augmenting with external data or by using word embeddings and repetition of instances. Experiments were done on the datasets with different types of preprocessing because it is crucial in this task. The rank of our team was consistent across four subtasks (fourth rank in three subtasks and sixth rank in one subtask), whereas other teams might place in the top ranks for some subtasks but rank drastically lower in others. This implies the robustness and stability of the models and the techniques we used.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 284,352
|
1811.04129
|
STA: Spatial-Temporal Attention for Large-Scale Video-based Person
Re-Identification
|
In this work, we propose a novel Spatial-Temporal Attention (STA) approach to tackle the large-scale person re-identification task in videos. Different from most existing methods, which simply compute representations of video clips using frame-level aggregation (e.g. average pooling), the proposed STA adopts a more effective way of producing robust clip-level feature representations. Concretely, our STA fully exploits the discriminative parts of one target person in both spatial and temporal dimensions, which results in a 2-D attention score matrix via inter-frame regularization to measure the importance of spatial parts across different frames. Thus, a more robust clip-level feature representation can be generated according to a weighted sum operation guided by the mined 2-D attention score matrix. In this way, the challenging cases for video-based person re-identification such as pose variation and partial occlusion can be well tackled by the STA. We conduct extensive experiments on two large-scale benchmarks, i.e. MARS and DukeMTMC-VideoReID. In particular, the mAP reaches 87.7% on MARS, which significantly outperforms the state of the art by a large margin of more than 11.6%.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 112,998
|
2404.12810
|
Enhancing Counterfactual Explanation Search with Diffusion Distance and
Directional Coherence
|
A pressing issue in the adoption of AI models is the increasing demand for more human-centric explanations of their predictions. To advance towards more human-centric explanations, understanding how humans produce and select explanations has been beneficial. In this work, inspired by insights from human cognition, we propose and test the incorporation of two novel biases to enhance the search for effective counterfactual explanations. Central to our methodology is the application of diffusion distance, which emphasizes data connectivity and actionability in the search for feasible counterfactual explanations. In particular, diffusion distance assigns greater weight to points that are interconnected by numerous short paths. This approach brings closely connected points nearer to each other, identifying a feasible path between them. We also introduce a directional coherence term that allows the expression of a preference for alignment between the joint and marginal directional changes in feature space to reach a counterfactual. This term enables the generation of counterfactual explanations that align with a set of marginal predictions based on expectations of how the outcome of the model varies by changing one feature at a time. We evaluate our method, named Coherent Directional Counterfactual Explainer (CoDiCE), and the impact of the two novel biases against existing methods such as DiCE, FACE, Prototypes, and Growing Spheres. Through a series of ablation experiments on both synthetic and real datasets with continuous and mixed-type features, we demonstrate the effectiveness of our method.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 448,044
|
2410.08218
|
A Visual-Analytical Approach for Automatic Detection of Cyclonic Events
in Satellite Observations
|
Estimating the location and intensity of tropical cyclones holds crucial significance for predicting catastrophic weather events. In this study, we approach this task as a detection and regression challenge, specifically over the North Indian Ocean (NIO) region, where best-track location and wind speed information serve as the labels. The current process for cyclone detection and intensity estimation involves physics-based simulation studies which are time-consuming; using only image features will automate the process for significantly faster and more accurate predictions. While conventional methods typically necessitate substantial prior knowledge for training, we explore alternative approaches to enhance efficiency. This research focuses specifically on cyclone detection, intensity estimation and related aspects using only image input and data-driven approaches, leading to faster inference time and automating the process as opposed to the current NWP models utilized at SAC. In terms of algorithm development, a novel two-stage detection and intensity estimation module is proposed. In the first-level detection we localize the cyclone over an entire image as captured by INSAT3D over the NIO. For the intensity estimation task, we propose a CNN-LSTM network which works on cyclone-centered images, utilizing a ResNet-18 backbone, by which we are able to capture both temporal and spatial characteristics.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 497,012
|
2012.14736
|
Present-Biased Optimization
|
This paper explores the behavior of present-biased agents, that is, agents who erroneously anticipate the costs of future actions compared to their real costs. Specifically, the paper extends the original framework proposed by Akerlof (1991) for studying various aspects of human behavior related to time-inconsistent planning, including procrastination and abandonment, as well as the elegant graph-theoretic model encapsulating this framework recently proposed by Kleinberg and Oren (2014). The benefit of this extension is twofold. First, it enables fine-grained analysis of the behavior of present-biased agents depending on the optimisation task they have to perform. In particular, we study covering tasks vs. hitting tasks, and show that the ratio between the cost of the solutions computed by present-biased agents and the cost of the optimal solutions may differ significantly depending on the problem constraints. Second, our extension enables the study not only of underestimation of future costs, coupled with minimization problems, but also of all combinations of minimization/maximization and underestimation/overestimation. We study the four scenarios, and we establish upper bounds on the cost ratio for three of them (the cost ratio for the original scenario was known to be unbounded), providing a complete global picture of the behavior of present-biased agents, as far as optimisation tasks are concerned.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| 213,592
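A hedged sketch of the Kleinberg-Oren-style graph model referenced in the abstract above: an agent with bias parameter b pays the next edge at face value but perceives all later costs divided by b, so it can commit to a plan that is far from optimal. The graph, costs, and helper names are illustrative:

```python
import heapq

def shortest_cost(graph, src, dst):
    """True remaining cost via Dijkstra on the actual edge costs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return float("inf")

def biased_walk(graph, src, dst, b):
    """Walk the graph paying real costs while planning with bias b >= 1."""
    path, node, paid = [src], src, 0.0
    while node != dst:
        # Perceived plan cost: next edge at face value, remainder scaled by 1/b.
        nxt, cost = min(graph[node],
                        key=lambda e: e[1] + shortest_cost(graph, e[0], dst) / b)
        paid += cost
        path.append(nxt)
        node = nxt
    return path, paid

g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("t", 8.0)], "b": [("t", 1.0)], "t": []}
print(biased_walk(g, "s", "t", b=3.0))  # (['s', 'a', 't'], 9.0) vs optimal cost 5.0
```

Here the cheap-looking first hop (cost 1) is tempting because the remaining cost of 8 is perceived as 8/3, illustrating how the agent's cost ratio can exceed 1.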
|
2005.13787
|
Assessing Centrality Without Knowing Connections
|
We consider the privacy-preserving computation of node influence in distributed social networks, as measured by egocentric betweenness centrality (EBC). Motivated by modern communication networks spanning multiple providers, we show for the first time how multiple mutually-distrusting parties can successfully compute node EBC while revealing only differentially-private information about their internal network connections. A theoretical utility analysis upper bounds a primary source of private EBC error---private release of ego networks---with high probability. Empirical results demonstrate practical applicability with a low 1.07 relative error achievable at strong privacy budget $\epsilon=0.1$ on a Facebook graph, and insignificant performance degradation as the number of network provider parties grows.
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 179,100
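For reference, a non-private baseline of the quantity the abstract above protects: egocentric betweenness centrality, i.e. the betweenness of a node inside its own radius-1 ego network, sketched with networkx. The paper's actual contribution, the differentially private multi-party protocol, is not shown:

```python
import networkx as nx

def egocentric_betweenness(G: nx.Graph, node) -> float:
    """Betweenness of `node` computed inside its own radius-1 ego network."""
    ego_net = nx.ego_graph(G, node, radius=1)
    return nx.betweenness_centrality(ego_net, normalized=False)[node]

G = nx.karate_club_graph()
print(egocentric_betweenness(G, 0))
```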
|
2105.11601
|
Personalized Transformer for Explainable Recommendation
|
Personalization of natural language generation plays a vital role in a large spectrum of tasks, such as explainable recommendation, review summarization and dialog systems. In these tasks, user and item IDs are important identifiers for personalization. The Transformer, despite its strong language modeling capability, is not personalized and fails to make use of the user and item IDs, since the ID tokens are not even in the same semantic space as the words. To address this problem, we present a PErsonalized Transformer for Explainable Recommendation (PETER), on which we design a simple and effective learning objective that utilizes the IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meanings and to achieve a personalized Transformer. Besides generating explanations, PETER can also make recommendations, which makes it a unified model for the whole recommendation-explanation pipeline. Extensive experiments show that our small unpretrained model outperforms fine-tuned BERT on the generation task, in terms of both effectiveness and efficiency, which highlights the importance and practical utility of our design.
| false
| false
| false
| false
| true
| true
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 236,753
|
2106.03722
|
Error Loss Networks
|
A novel model called the error loss network (ELN) is proposed to build an error loss function for supervised learning. The ELN is similar in structure to a radial basis function (RBF) neural network, but its input is an error sample and its output is a loss corresponding to that error sample. That means the nonlinear input-output mapper of the ELN creates an error loss function. The proposed ELN provides a unified model for a large class of error loss functions, which includes some information theoretic learning (ITL) loss functions as special cases. The activation function, weight parameters and network size of the ELN can be predetermined or learned from the error samples. On this basis, we propose a new machine learning paradigm where the learning process is divided into two stages: first, learning a loss function using an ELN; second, using the learned loss function to continue the learning. Experimental results are presented to demonstrate the desirable performance of the new method.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 239,422
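A minimal sketch, under our own assumptions, of the RBF-shaped error-to-loss mapping described above: the loss of an error sample is a weighted sum of Gaussian bumps, and a single negative bump recovers a correntropy-style ITL loss. Centers, widths, and weights here are illustrative; the paper learns them from error samples:

```python
import numpy as np

def eln_loss(errors, centers, weights, sigma=1.0):
    """Loss of each error sample as a weighted sum of Gaussian RBF bumps."""
    errors = np.asarray(errors, dtype=float)[:, None]                # (n, 1)
    kernels = np.exp(-(errors - centers) ** 2 / (2.0 * sigma ** 2))  # (n, m)
    return kernels @ weights

# One negative Gaussian centered at zero gives a correntropy-style loss:
# smallest at e = 0 and saturating for large errors (robust to outliers).
centers = np.array([0.0])
weights = np.array([-1.0])
print(eln_loss([0.0, 0.5, 2.0], centers, weights))
```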
|
1611.06009
|
Fuzzy Statistical Matrices for Cell Classification
|
In this paper, we generalize image (texture) statistical descriptors and propose algorithms that improve their efficacy. Recently, a new method showed how the popular Co-Occurrence Matrix (COM) can be modified into a fuzzy version (FCOM) which is more effective and robust to noise. Here, we introduce new fuzzy versions of two additional higher-order statistical matrices: the Run Length Matrix (RLM) and the Size Zone Matrix (SZM). We define the fuzzy zones and propose an efficient algorithm to compute the descriptors. We demonstrate the advantage of the proposed improvements over several state-of-the-art methods on three tasks from quantitative cell biology: analyzing and classifying Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 64,111
|
1712.00900
|
On the Effect of Shadowing Correlation on Wireless Network Performance
|
We propose and analyze a new shadowing field model meant to capture spatial correlations. The interference field associated with this new model is compared to that of the widely used independent shadowing model. Independent shadowing over links is adopted because of the resulting closed forms for performance metrics, and in spite of the well-known fact that the shadowing fields of networks are spatially correlated. The main purpose of this paper is to challenge this independent shadowing approximation. For this, we analyze the interference measured at the origin in networks where 1) nodes which are in the same cell of some random shadowing tessellation share the same shadow, or 2) nodes which share a common mother point in some cluster process share the same shadow. By leveraging stochastic comparison techniques, we give the order relation of the three main user performance metrics, namely coverage probability, Shannon throughput and local delay, under both the correlated and the independent shadowing assumptions. We show that the evaluation of the considered metrics under the independent approximation is systematically pessimistic compared to the correlated shadowing model. The improvement in each metric when adopting the correlated shadow model is quantified and shown to be quite significant.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 86,003
|
2003.05837
|
Top-1 Solution of Multi-Moments in Time Challenge 2019
|
In this technical report, we briefly introduce the solutions of our team 'Efficient' for the Multi-Moments in Time challenge in ICCV 2019. We first conduct several experiments with popular Image-Based action recognition methods TRN, TSN, and TSM. Then a novel temporal interlacing network is proposed towards fast and accurate recognition. Besides, the SlowFast network and its variants are explored. Finally, we ensemble all the above models and achieve 67.22\% on the validation set and 60.77\% on the test set, which ranks 1st on the final leaderboard. In addition, we release a new code repository for video understanding which unifies state-of-the-art 2D and 3D methods based on PyTorch. The solution of the challenge is also included in the repository, which is available at https://github.com/Sense-X/X-Temporal.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 167,962
|
2009.10484
|
Asymptotically Optimal Sampling-Based Motion Planning Methods
|
Motion planning is a fundamental problem in autonomous robotics that requires finding a path to a specified goal that avoids obstacles and takes into account a robot's limitations and constraints. It is often desirable for this path to also optimize a cost function, such as path length. Formal path-quality guarantees for continuously valued search spaces are an active area of research interest. Recent results have proven that some sampling-based planning methods probabilistically converge toward the optimal solution as computational effort approaches infinity. This survey summarizes the assumptions behind these popular asymptotically optimal techniques and provides an introduction to the significant ongoing research on this topic.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 196,905
|
2412.15826
|
Using matrix-product states for time-series machine learning
|
Matrix-product states (MPS) have proven to be a versatile ansatz for modeling quantum many-body physics. For many applications, and particularly in one-dimension, they capture relevant quantum correlations in many-body wavefunctions while remaining tractable to store and manipulate on a classical computer. This has motivated researchers to also apply the MPS ansatz to machine learning (ML) problems where capturing complex correlations in datasets is also a key requirement. Here, we develop and apply an MPS-based algorithm, MPSTime, for learning a joint probability distribution underlying an observed time-series dataset, and show how it can be used to tackle important time-series ML problems, including classification and imputation. MPSTime can efficiently learn complicated time-series probability distributions directly from data, requires only moderate maximum MPS bond dimension $\chi_{\rm max}$, with values for our applications ranging between $\chi_{\rm max} = 20-150$, and can be trained for both classification and imputation tasks under a single logarithmic loss function. Using synthetic and publicly available real-world datasets, spanning applications in medicine, energy, and astronomy, we demonstrate performance competitive with state-of-the-art ML approaches, but with the key advantage of encoding the full joint probability distribution learned from the data. By sampling from the joint probability distribution and calculating its conditional entanglement entropy, we show how its underlying structure can be uncovered and interpreted. This manuscript is supplemented with the release of a publicly available code package MPSTime that implements our approach. The efficiency of the MPS-based ansatz for learning complex correlation structures from time-series data is likely to underpin interpretable advances to challenging time-series ML problems across science, industry, and medicine.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 519,279
|
1107.5951
|
Optimal, scalable forward models for computing gravity anomalies
|
We describe three approaches for computing a gravity signal from a density anomaly. The first approach consists of the classical "summation" technique, whilst the remaining two methods solve the Poisson problem for the gravitational potential using either a Finite Element (FE) discretization employing a multilevel preconditioner, or a Green's function evaluated with the Fast Multipole Method (FMM). The methods utilizing the PDE formulation described here differ from previously published approaches used in gravity modeling in that they are optimal, implying that both the memory and computational time required scale linearly with respect to the number of unknowns in the potential field. Additionally, all of the implementations presented here are developed such that the computations can be performed in a massively parallel, distributed memory computing environment. Through numerical experiments, we compare the methods on the basis of their discretization error, CPU time and parallel scalability. We demonstrate the parallel scalability of all these techniques by running forward models with up to $10^8$ voxels on thousands of cores.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 11,508
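A hedged sketch of the first of the three approaches above, the classical "summation" technique: every density voxel contributes a point-mass term to the vertical gravity signal at an observation point. Coordinates and densities below are made up; the O(N) cost per observation point is what motivates the FE and FMM alternatives:

```python
import numpy as np

G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_z(obs_point, voxel_centers, densities, voxel_volume):
    """Vertical gravity at obs_point from density voxels treated as point masses."""
    r_vec = voxel_centers - obs_point            # (N, 3) separation vectors
    r = np.linalg.norm(r_vec, axis=1)            # (N,) distances
    masses = densities * voxel_volume
    return G_CONST * np.sum(masses * r_vec[:, 2] / r**3)

# Two buried voxels of anomalous density (kg/m^3), 10 m cubes.
centers = np.array([[0.0, 0.0, -100.0], [10.0, 0.0, -120.0]])
print(gravity_z(np.zeros(3), centers, np.array([500.0, 500.0]), 10.0**3))
```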
|
2406.02950
|
Joint Beam Search Integrating CTC, Attention, and Transducer Decoders
|
End-to-end automatic speech recognition (E2E-ASR) can be classified by its decoder architectures, such as connectionist temporal classification (CTC), recurrent neural network transducer (RNN-T), attention-based encoder-decoder, and Mask-CTC models. Each decoder architecture has advantages and disadvantages, leading practitioners to switch between these different models depending on application requirements. Instead of building separate models, we propose a joint modeling scheme where four decoders (CTC, RNN-T, attention, and Mask-CTC) share the same encoder -- we refer to this as 4D modeling. The 4D model is trained jointly, which brings model regularization and maximizes model robustness thanks to the decoders' complementary properties. To efficiently train the 4D model, we introduce a two-stage training strategy that stabilizes the joint training. In addition, we propose three novel joint beam search algorithms by combining three decoders (CTC, RNN-T, and attention) to further improve performance. These three beam search algorithms differ in which decoder is used as the primary decoder. We carefully evaluate the performance and computational tradeoffs associated with each algorithm. Experimental results demonstrate that the jointly trained 4D model outperforms the E2E-ASR models trained with only one individual decoder. Furthermore, we demonstrate that the proposed joint beam search algorithm outperforms the previously proposed CTC/attention decoding.
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 461,017
|
2403.00748
|
Primal-Dual iLQR
|
We introduce a new algorithm for solving unconstrained discrete-time optimal control problems. Our method follows a direct multiple shooting approach, and consists of applying the SQP method together with an $\ell_2$ augmented Lagrangian primal-dual merit function. We use the LQR algorithm to efficiently solve the primal-dual Newton-KKT system. As our algorithm is a specialization of NPSQP, it inherits its generic properties, including global convergence, fast local convergence, and the lack of need for second order corrections or dimension expansions, improving on existing direct multiple shooting approaches such as acados, ALTRO, GNMS, FATROP, and FDDP. The solutions of the LQR-shaped subproblems posed by our algorithm can be parallelized to run in time logarithmic in the number of stages, states, and controls. Moreover, as our method avoids sequential rollouts of the nonlinear dynamics, it can run in $O(1)$ parallel time per line search iteration. Therefore, this paper provides a practical, theoretically sound, and highly parallelizable (for example, with a GPU) method for solving nonlinear discrete-time optimal control problems. An open-source JAX implementation of this algorithm can be found on GitHub (joaospinto/primal_dual_ilqr).
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 434,082
|
2406.17697
|
HGTDP-DTA: Hybrid Graph-Transformer with Dynamic Prompt for Drug-Target
Binding Affinity Prediction
|
Drug target binding affinity (DTA) is a key criterion for drug screening. Existing experimental methods are time-consuming and rely on limited structural and domain information. While learning-based methods can model sequence and structural information, they struggle to integrate contextual data and often lack comprehensive modeling of drug-target interactions. In this study, we propose a novel DTA prediction method, termed HGTDP-DTA, which utilizes dynamic prompts within a hybrid Graph-Transformer framework. Our method generates context-specific prompts for each drug-target pair, enhancing the model's ability to capture unique interactions. The introduction of prompt tuning further optimizes the prediction process by filtering out irrelevant noise and emphasizing task-relevant information, dynamically adjusting the input features of the molecular graph. The proposed hybrid Graph-Transformer architecture combines structural information from Graph Convolutional Networks (GCNs) with sequence information captured by Transformers, facilitating the interaction between global and local information. Additionally, we adopted the multi-view feature fusion method to project molecular graph views and affinity subgraph views into a common feature space, effectively combining structural and contextual information. Experiments on two widely used public datasets, Davis and KIBA, show that HGTDP-DTA outperforms state-of-the-art DTA prediction methods in both prediction performance and generalization ability.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 467,682
|
1806.07057
|
Effect of Hyper-Parameter Optimization on the Deep Learning Model
Proposed for Distributed Attack Detection in Internet of Things Environment
|
This paper studies the effect of various hyper-parameters and their selection for the best performance of the deep learning model proposed in [1] for distributed attack detection in the Internet of Things (IoT). The findings show that there are three hyper-parameters that have more influence on the best performance achieved by the model. As a consequence, this study shows that the model's accuracy as reported in the paper is not achievable, based on the best selections of parameters, which is also supported by another recent publication [2].
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 100,825
|
1502.06075
|
A new network-based algorithm for human activity recognition in video
|
In this paper, a new network-transmission-based (NTB) algorithm is proposed for human activity recognition in videos. The proposed NTB algorithm models the entire scene as an error-free network. In this network, each node corresponds to a patch of the scene and each edge represents the activity correlation between the corresponding patches. Based on this network, we further model people in the scene as packages, while human activities can be modeled as the process of package transmission in the network. By analyzing these specific "package transmission" processes, various activities can be effectively detected. The application of our NTB algorithm to abnormal activity detection and group activity recognition is described in detail in the paper. Experimental results demonstrate the effectiveness of our proposed algorithm.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 40,445
|
2005.13189
|
Decentralized Optimization On Time-Varying Directed Graphs Under
Communication Constraints
|
We consider the problem of decentralized optimization where a collection of agents, each having access to a local cost function, communicate over a time-varying directed network and aim to minimize the sum of those functions. In practice, the amount of information that can be exchanged between the agents is limited due to communication constraints. We propose a communication-efficient algorithm for decentralized convex optimization that relies on sparsification of local updates exchanged between neighboring agents in the network. In directed networks, message sparsification alters column-stochasticity -- a property that plays an important role in establishing convergence of decentralized learning tasks. We propose a decentralized optimization scheme that relies on local modification of mixing matrices, and show that it achieves an $\mathcal{O}(\frac{\ln T}{\sqrt{T}})$ convergence rate in the considered settings. Experiments validate the theoretical results and demonstrate the efficacy of the proposed algorithm.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 178,941
|
1702.03222
|
Mining Electronic Health Records: A Survey
|
The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology, in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this manuscript, we provide a structured and comprehensive overview of data mining techniques for modeling EHR data. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research. We conclude this survey with a comprehensive summary of clinical data mining applications of EHR data, as illustrated in the online supplement.
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 68,092
|
1807.07282
|
Anomaly Detection for Water Treatment System based on Neural Network
with Automatic Architecture Optimization
|
We continue to develop our neural network (NN) based forecasting approach to anomaly detection (AD) using the Secure Water Treatment (SWaT) industrial control system (ICS) testbed dataset. We propose genetic algorithms (GA) to find the best NN architecture for a given dataset, using the NAB metric to assess the quality of different architectures. The drawbacks of the F1-metric are analyzed. Several techniques are proposed to improve the quality of AD: exponentially weighted smoothing, mean p-powered error measure, individual error weight for each variable, disjoint prediction windows. Based on the techniques used, an approach to anomaly interpretation is introduced.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 103,288
|
cs/0703035
|
On the Distortion SNR Exponent of Some Layered Transmission Schemes
|
We consider the problem of joint source-channel coding for transmitting K samples of a complex Gaussian source over T = bK uses of a block-fading multiple input multiple output (MIMO) channel with M transmit and N receive antennas. We consider the case when we are allowed to code over L blocks. The channel gain is assumed to be constant over a block and channel gains for different blocks are assumed to be independent. The performance measure of interest is the rate of decay of the expected mean squared error with the signal-to-noise ratio (SNR), called the distortion SNR exponent. We first show that using a broadcast strategy of Gunduz and Erkip, but with a different power and rate allocation policy, the optimal distortion SNR exponent can be achieved for bandwidth efficiencies 0 < b < (|N-M|+1)/min(M,N). This is the first time the optimal exponent is characterized for 1/min(M,N) < b < (|N-M |+ 1)/ min(M, N). Also, for b > MNL^2, we show that the broadcast scheme achieves the optimal exponent of MNL. Special cases of this result have been derived for the L=1 case and for M=N=1 by Gunduz and Erkip. We then propose a digital layered transmission scheme that uses both time layering and superposition. This includes many previously known schemes as special cases. The proposed scheme is at least as good as the currently best known schemes for the entire range of bandwidth efficiencies, whereas at least for some M, N, and b, it is strictly better than the currently best known schemes.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 540,218
|
2211.08717
|
SWIN-SFTNet : Spatial Feature Expansion and Aggregation using Swin
Transformer For Whole Breast micro-mass segmentation
|
Incorporating various mass shapes and sizes in training deep learning architectures has made breast mass segmentation challenging. Moreover, manual segmentation of masses of irregular shapes is time-consuming and error-prone. Though deep neural networks have shown outstanding performance in breast mass segmentation, they fail in segmenting micro-masses. In this paper, we propose a novel U-net-shaped transformer-based architecture, called Swin-SFTNet, that outperforms state-of-the-art architectures in breast mammography-based micro-mass segmentation. First, to capture the global context, we designed a novel Spatial Feature Expansion and Aggregation Block (SFEA) that transforms sequential linear patches into a structured spatial feature. Next, we combine it with the local linear features extracted by the swin transformer block to improve overall accuracy. We also incorporate a novel embedding loss that calculates similarities between linear feature embeddings of the encoder and decoder blocks. With this approach, we achieve segmentation Dice scores higher than the state of the art by 3.10% on CBIS-DDSM, 3.81% on InBreast, and 3.13% with the CBIS pre-trained model on the InBreast test data set.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 330,744
|
2310.02065
|
VENOM: A Vectorized N:M Format for Unleashing the Power of Sparse Tensor
Cores
|
The increasing success and scaling of Deep Learning models demands higher computational efficiency and power. Sparsification can lead to both smaller models and higher compute efficiency, and accelerated hardware is becoming available. However, exploiting it efficiently requires kernel implementations, pruning algorithms, and storage formats that utilize hardware support for specialized sparse vector units. An example of those are NVIDIA's Sparse Tensor Cores (SPTCs), which promise a 2x speedup. However, SPTCs only support the 2:4 format, limiting achievable sparsity ratios to 50%. We present the V:N:M format, which enables the execution of arbitrary N:M ratios on SPTCs. To efficiently exploit the resulting format, we propose Spatha, a high-performance sparse library for DL routines. We show that Spatha achieves up to 37x speedup over cuBLAS. We also demonstrate a second-order pruning technique that enables sparsification to high sparsity ratios with V:N:M and little to no loss in accuracy in modern transformers.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 396,696
|
2111.13850
|
Temporal Context Mining for Learned Video Compression
|
We address end-to-end learned video compression with a special focus on better learning and utilizing temporal contexts. For temporal context mining, we propose to store not only the previously reconstructed frames, but also the propagated features, in the generalized decoded picture buffer. From the stored propagated features, we propose to learn multi-scale temporal contexts and re-fill the learned temporal contexts into the modules of our compression scheme, including the contextual encoder-decoder, the frame generator, and the temporal context encoder. Our scheme discards the parallelization-unfriendly auto-regressive entropy model to pursue a more practical decoding time. We compare our scheme with x264 and x265 (representing industrial software for H.264 and H.265, respectively) as well as the official reference software for H.264, H.265, and H.266 (JM, HM, and VTM, respectively). When the intra period is 32 and oriented to PSNR, our scheme outperforms H.265--HM by a 14.4% bit rate saving; when oriented to MS-SSIM, our scheme outperforms H.266--VTM by a 21.1% bit rate saving.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 268,409
|
2110.09936
|
NeuralDiff: Segmenting 3D objects that move in egocentric videos
|
Given a raw video sequence taken from a freely-moving camera, we study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground containing the objects that move in the video sequence. This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion due to the camera's large viewpoint changes. In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them. We achieve this factorization by reconstructing the video via a triple-stream neural rendering network that explains the different motions based on corresponding inductive biases. We demonstrate that our method can successfully separate the different types of motion, outperforming recent neural rendering baselines at this task, and can accurately segment moving objects. We do so by assessing the method empirically on challenging videos from the EPIC-KITCHENS dataset, which we augment with appropriate annotations to create a new benchmark for the task of dynamic object segmentation on unconstrained video sequences, for complex 3D environments.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 261,981
|
2303.12343
|
LD-ZNet: A Latent Diffusion Approach for Text-Based Image Segmentation
|
Large-scale pre-training tasks like image classification, captioning, or self-supervised techniques do not incentivize learning the semantic boundaries of objects. However, recent generative foundation models built using text-based latent diffusion techniques may learn semantic boundaries. This is because they have to synthesize intricate details about all objects in an image based on a text description. Therefore, we present a technique for segmenting real and AI-generated images using latent diffusion models (LDMs) trained on internet-scale datasets. First, we show that the latent space of LDMs (z-space) is a better input representation compared to other feature representations like RGB images or CLIP encodings for text-based image segmentation. By training the segmentation models on the latent z-space, which creates a compressed representation across several domains like different forms of art, cartoons, illustrations, and photographs, we are also able to bridge the domain gap between real and AI-generated images. We show that the internal features of LDMs contain rich semantic information and present a technique in the form of LD-ZNet to further boost the performance of text-based segmentation. Overall, we show up to 6% improvement over standard baselines for text-to-image segmentation on natural images. For AI-generated imagery, we show close to 20% improvement compared to state-of-the-art techniques. The project is available at https://koutilya-pnvr.github.io/LD-ZNet/.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 353,231
|
2410.04721
|
ACDC: Autoregressive Coherent Multimodal Generation using Diffusion
Correction
|
Autoregressive models (ARMs) and diffusion models (DMs) represent two leading paradigms in generative modeling, each excelling in distinct areas: ARMs in global context modeling and long-sequence generation, and DMs in generating high-quality local contexts, especially for continuous data such as images and short videos. However, ARMs often suffer from exponential error accumulation over long sequences, leading to physically implausible results, while DMs are limited by their local context generation capabilities. In this work, we introduce Autoregressive Coherent multimodal generation with Diffusion Correction (ACDC), a zero-shot approach that combines the strengths of both ARMs and DMs at the inference stage without the need for additional fine-tuning. ACDC leverages ARMs for global context generation and memory-conditioned DMs for local correction, ensuring high-quality outputs by correcting artifacts in generated multimodal tokens. In particular, we propose a memory module based on large language models (LLMs) that dynamically adjusts the conditioning texts for the DMs, preserving crucial global context information. Our experiments on multimodal tasks, including coherent multi-frame story generation and autoregressive video generation, demonstrate that ACDC effectively mitigates the accumulation of errors and significantly enhances the quality of generated outputs, achieving superior performance while remaining agnostic to specific ARM and DM architectures. Project page: https://acdc2025.github.io/
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 495,412
|
2410.12843
|
Exploring Prompt Engineering: A Systematic Review with SWOT Analysis
|
In this paper, we conduct a comprehensive SWOT analysis of prompt engineering techniques within the realm of Large Language Models (LLMs). Emphasizing linguistic principles, we examine various techniques to identify their strengths, weaknesses, opportunities, and threats. Our findings provide insights into enhancing AI interactions and improving language model comprehension of human prompts. The analysis covers techniques including template-based approaches and fine-tuning, addressing the problems and challenges associated with each. The conclusion offers future research directions aimed at advancing the effectiveness of prompt engineering in optimizing human-machine communication.
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 499,231
|
1906.06047
|
Dynamic Term-Modal Logics for First-Order Epistemic Planning
|
Many classical planning frameworks are built on first-order languages. The first-order expressive power is desirable for compactly representing actions via schemas, and for specifying quantified conditions such as $\neg\exists x\mathsf{blocks\_door}(x)$. In contrast, several recent epistemic planning frameworks are built on propositional epistemic logic. The epistemic language is useful to describe planning problems involving higher-order reasoning or epistemic goals such as $K_{a}\neg\mathsf{problem}$. This paper develops a first-order version of Dynamic Epistemic Logic (DEL). In this framework, for example, $\exists xK_{x}\exists y\mathsf{blocks\_door}(y)$ is a formula. The formalism combines the strengths of DEL (higher-order reasoning) with those of first-order logic (lifted representation) to model multi-agent epistemic planning. The paper introduces an epistemic language with a possible-worlds semantics, followed by novel dynamics given by first-order action models and their execution via product updates. Taking advantage of the first-order machinery, epistemic action schemas are defined to provide compact, problem-independent domain descriptions, in the spirit of PDDL. Concerning metatheory, the paper defines axiomatic normal term-modal logics, shows a Canonical Model Theorem-like result which allows establishing completeness through frame characterization formulas, shows decidability for the finite agent case, and shows a general completeness result for the dynamic extension by reduction axioms.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| 135,198
|
2209.13012
|
Survey on Fairness Notions and Related Tensions
|
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting, with the hope of replacing subjective human decisions with objective machine learning (ML) algorithms. However, ML-based decision systems are prone to bias, which can still result in unfair decisions. Several notions of fairness have been defined in the literature to capture the different subtleties of this ethical and social concept (e.g., statistical parity, equal opportunity, etc.). Fairness requirements to be satisfied while learning models create several types of tensions among the different notions of fairness and other desirable properties such as privacy and classification accuracy. This paper surveys the commonly used fairness notions and discusses the tensions among them, as well as with privacy and accuracy. Different methods to address the fairness-accuracy trade-off (classified into four approaches, namely pre-processing, in-processing, post-processing, and hybrid) are reviewed. The survey is consolidated with experimental analysis carried out on fairness benchmark datasets to illustrate the relationship between fairness measures and accuracy in real-world scenarios.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| 319,735
|
2407.20274
|
Exploring the Plausibility of Hate and Counter Speech Detectors with
Explainable AI
|
In this paper we investigate the explainability of transformer models and their plausibility for hate speech and counter speech detection. We compare representatives of four different explainability approaches, i.e., gradient-based, perturbation-based, attention-based, and prototype-based approaches, and analyze them quantitatively with an ablation study and qualitatively in a user study. Results show that perturbation-based explainability performs best, followed by gradient-based and attention-based explainability. Prototype-based experiments did not yield useful results. Overall, we observe that explainability strongly supports the users in better understanding the model predictions.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 477,122
|
2303.12992
|
A Survey of Historical Learning: Learning Models with Learning History
|
New knowledge originates from the old. The various types of elements deposited in the training history are a large amount of wealth for improving the learning of deep models. In this survey, we comprehensively review and summarize the topic ``Historical Learning: Learning Models with Learning History'', which learns better neural models with the help of their learning history during optimization, from three detailed aspects: Historical Type (what), Functional Part (where) and Storage Form (how). To the best of our knowledge, it is the first survey that systematically studies the methodologies which make use of various historical statistics when training deep neural networks. Discussions with related topics like recurrent/memory networks, ensemble learning, and reinforcement learning are included. We also expose future challenges of this topic and encourage the community to consider historical learning principles when designing algorithms. The paper list related to historical learning is available at \url{https://github.com/Martinser/Awesome-Historical-Learning}.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 353,489
|
2306.08766
|
A Copernican Revolution in Data
|
Half a century ago, Charles Bachman foresaw the significance and centrality of data in the digital world. In this short paper, we delve into the evolution of these ideas within the database community over the past decades. We believe that this historical analysis helps deepen our comprehension of the fundamental changes undergoing our discipline and provides insights into the future trajectory of our field.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| 373,534
|
1507.04811
|
Lift-Based Bidding in Ad Selection
|
Real-time bidding (RTB) has become one of the largest online advertising markets in the world. Today the bid price per ad impression is typically decided by the expected value of how it can lead to a desired action event (e.g., registering an account or placing a purchase order) to the advertiser. However, this industry standard approach to decide the bid price does not consider the actual effect of the ad shown to the user, which should be measured based on the performance lift among users who have been or have not been exposed to a certain treatment of ads. In this paper, we propose a new bidding strategy and prove that if the bid price is decided based on the performance lift rather than absolute performance value, advertisers can actually gain more action events. We describe the modeling methodology to predict the performance lift and demonstrate the actual performance gain through blind A/B test with real ad campaigns in an industry-leading Demand-Side Platform (DSP). We also discuss the relationship between attribution models and bidding strategies. We prove that, to move the DSPs to bid based on performance lift, they should be rewarded according to the relative performance lift they contribute.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 45,217
|
2006.07995
|
BatVision with GCC-PHAT Features for Better Sound to Vision Predictions
|
Inspired by sophisticated echolocation abilities found in nature, we train a generative adversarial network to predict plausible depth maps and grayscale layouts from sound. To achieve this, our sound-to-vision model processes binaural echo-returns from chirping sounds. We build upon previous work with BatVision, which consists of a sound-to-vision model and a self-collected dataset gathered using our mobile robot and low-cost hardware. We improve on the previous model by introducing several changes that lead to better depth and grayscale estimation and increased perceptual quality. Rather than using raw binaural waveforms as input, we generate generalized cross-correlation (GCC) features and use these as input instead. In addition, we change the model generator and base it on residual learning and use spectral normalization in the discriminator. We compare and present both quantitative and qualitative improvements over our previous BatVision model.
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 182,033
|
2403.02253
|
KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for
Enhancing Reference-Based Phishing Detection
|
Phishing attacks have inflicted substantial losses on individuals and businesses alike, necessitating the development of robust and efficient automated phishing detection approaches. Reference-based phishing detectors (RBPDs), which compare the logos on a target webpage to a known set of logos, have emerged as the state-of-the-art approach. However, a major limitation of existing RBPDs is that they rely on a manually constructed brand knowledge base, making it infeasible to scale to a large number of brands, which results in false negative errors due to the insufficient brand coverage of the knowledge base. To address this issue, we propose an automated knowledge collection pipeline, using which we collect a large-scale multimodal brand knowledge base, KnowPhish, containing 20k brands with rich information about each brand. KnowPhish can be used to boost the performance of existing RBPDs in a plug-and-play manner. A second limitation of existing RBPDs is that they solely rely on the image modality, ignoring useful textual information present in the webpage HTML. To utilize this textual information, we propose a Large Language Model (LLM)-based approach to extract brand information of webpages from text. Our resulting multimodal phishing detection approach, KnowPhish Detector (KPD), can detect phishing webpages with or without logos. We evaluate KnowPhish and KPD on a manually validated dataset, and a field study under Singapore's local context, showing substantial improvements in effectiveness and efficiency compared to state-of-the-art baselines.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 434,745
|
2203.07856
|
Improved Multi-label Classification under Temporal Concept Drift:
Rethinking Group-Robust Algorithms in a Label-Wise Setting
|
In document classification for, e.g., legal and biomedical text, we often deal with hundreds of classes, including very infrequent ones, as well as temporal concept drift caused by the influence of real world events, e.g., policy changes, conflicts, or pandemics. Class imbalance and drift can sometimes be mitigated by resampling the training data to simulate (or compensate for) a known target distribution, but what if the target distribution is determined by unknown future events? Instead of simply resampling uniformly to hedge our bets, we focus on the underlying optimization algorithms used to train such document classifiers and evaluate several group-robust optimization algorithms, initially proposed to mitigate group-level disparities. Reframing group-robust algorithms as adaptation algorithms under concept drift, we find that Invariant Risk Minimization and Spectral Decoupling outperform sampling-based approaches to class imbalance and concept drift, and lead to much better performance on minority classes. The effect is more pronounced the larger the label set.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 285,587
|
2204.13176
|
Divisible Codes for Quantum Computation
|
Divisible codes are defined by the property that codeword weights share a common divisor greater than one. They are used to design signals for communications and sensing, and this paper explores how they can be used to protect quantum information as it is transformed by logical gates. Given a CSS code $\mathcal{C}$, we derive conditions that are both necessary and sufficient for a transversal diagonal physical operator $U_Z$ to preserve $\mathcal{C}$ and induce a given logical operator $U_L$. The group of $Z$-stabilizers in a CSS code $\mathcal{C}$ is determined by the dual of a classical $[n, k_1]$ binary code $\mathcal{C}_1$, and the group of $X$-stabilizers is determined by a classical $[n, k_2]$ binary code $\mathcal{C}_2$ that is contained in $\mathcal{C}_1$. The requirement that a diagonal physical operator $U_Z$ fixes a CSS code $\mathcal{C}$ leads to constraints on the congruence of weights in cosets of $\mathcal{C}_2$. These constraints are a perfect fit to divisible codes, and represent an opportunity to take advantage of the extensive literature on classical codes with two or three weights. We construct new families of CSS codes using cosets of the first-order Reed-Muller code defined by quadratic forms. We provide a simple alternative to the standard method of deriving the coset weight distributions (based on Dickson normal form) that may be of independent interest. Finally, we develop an approach to circumventing the Eastin-Knill Theorem, which states that no QECC can implement a universal set of logical gates through transversal gates alone. The essential idea is to design stabilizer codes in layers, with $N_1$ inner qubits and $N_2$ outer qubits, and to assemble a universal set of fault-tolerant gates on the inner qubits.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 293,724
|
1703.01310
|
Count-Based Exploration with Neural Density Models
|
Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma's Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.'s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma's Revenge.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 69,338
|
2109.10279
|
Ranking Feature-Block Importance in Artificial Multiblock Neural
Networks
|
In artificial neural networks, understanding the contributions of input features on the prediction fosters model explainability and delivers relevant information about the dataset. While typical setups for feature importance ranking assess input features individually, in this study, we go one step further and rank the importance of groups of features, denoted as feature-blocks. A feature-block can contain features of a specific type or features derived from a particular source, which are presented to the neural network in separate input branches (multiblock ANNs). This work presents three methods pursuing distinct strategies to rank features in multiblock ANNs by their importance: (1) a composite strategy building on individual feature importance rankings, (2) a knock-in, and (3) a knock-out strategy. While the composite strategy builds on state-of-the-art feature importance rankings, knock-in and knock-out strategies evaluate the block as a whole via a mutual information criterion. Our experiments consist of a simulation study validating all three approaches, followed by a case study on two distinct real-world datasets to compare the strategies. We conclude that each strategy has its merits for specific application scenarios.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 256,568
|
2304.11834
|
Robust Tickets Can Transfer Better: Drawing More Transferable
Subnetworks in Transfer Learning
|
Transfer learning leverages feature representations of deep neural networks (DNNs) pretrained on source tasks with rich data to empower effective finetuning on downstream tasks. However, the pretrained models are often prohibitively large for delivering generalizable representations, which limits their deployment on edge devices with constrained resources. To close this gap, we propose a new transfer learning pipeline, which leverages our finding that robust tickets can transfer better, i.e., subnetworks drawn with properly induced adversarial robustness can win better transferability over vanilla lottery ticket subnetworks. Extensive experiments and ablation studies validate that our proposed transfer learning pipeline can achieve enhanced accuracy-sparsity trade-offs across both diverse downstream tasks and sparsity patterns, further enriching the lottery ticket hypothesis.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 359,990
|
1911.09548
|
A parallel space-time multigrid method for the eddy-current equation
|
We expand the applicability and capabilities of an existing space-time parallel method based on a block Jacobi smoother. First, we formulate a more detailed criterion for spatial coarsening, which enables the method to deal with unstructured meshes and varying material parameters. Further, we investigate the application to the eddy-current equation, where the non-trivial kernel of the curl operator causes severe problems. This is remedied with a new nodal auxiliary space correction. We proceed to identify convergence rates by local Fourier analysis and numerical experiments. Finally, we present a numerical experiment which demonstrates the method's excellent scaling properties.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 154,553
|
1710.06512
|
Pose-based Deep Gait Recognition
|
Human gait or walking manner is a biometric feature that allows identification of a person when other biometric features such as the face or iris are not visible. In this paper, we present a new pose-based convolutional neural network model for gait recognition. Unlike many methods that consider the full-height silhouette of a moving person, we consider the motion of points in the areas around human joints. To extract motion information, we estimate the optical flow between consecutive frames. We propose a deep convolutional model that computes pose-based gait descriptors. We compare different network architectures and aggregation methods and experimentally assess various sets of body parts to determine which are the most important for gait recognition. In addition, we investigate the generalization ability of the developed algorithms by transferring them between datasets. The results of these experiments show that our approach outperforms state-of-the-art methods.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 82,784
|
2401.06925
|
Modeling Latent Selection with Structural Causal Models
|
Selection bias is ubiquitous in real-world data and can lead to misleading results if not dealt with properly. We introduce a conditioning operation on Structural Causal Models (SCMs) to model latent selection from a causal perspective. We show that the conditioning operation transforms an SCM with an explicit latent selection mechanism into an SCM without such a selection mechanism, which partially encodes the causal semantics of the selected subpopulation according to the original SCM. Furthermore, we show that this conditioning operation preserves the simplicity, acyclicity, and linearity of SCMs, and commutes with marginalization. Thanks to these properties, combined with marginalization and intervention, the conditioning operation offers a valuable tool for conducting causal reasoning tasks within causal models where latent details have been abstracted away. We demonstrate by example how classical results of causal inference can be generalized to include selection bias and how the conditioning operation helps with the modeling of real-world problems.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 421,347
|
1712.01097
|
Generalized Grounding Graphs: A Probabilistic Framework for
Understanding Grounded Commands
|
Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot's physical surroundings. Such interpretation is known variously as the symbol grounding problem, grounded semantics and grounded language acquisition. This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G^3), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands and mobile manipulation commands involving a variety of semi-autonomous robotic platforms, including a wheelchair, a micro-air vehicle, a forklift, and the Willow Garage PR2.
| false
| false
| false
| false
| false
| false
| false
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 86,046
|
2007.01510
|
MIRA: Leveraging Multi-Intention Co-click Information in Web-scale
Document Retrieval using Deep Neural Networks
|
We study the problem of deep recall models in industrial web search: given a user query, retrieve the hundreds of most relevant documents from billions of candidates. The common framework is to train two encoding models based on neural embeddings, which learn the distributed representations of queries and documents separately and match them in the latent semantic space. However, all the existing encoding models only leverage the information of the document itself, which is often not sufficient in practice when matching with query terms, especially for hard tail queries. In this work we aim to leverage additional information for each document from its co-click neighbours to help document retrieval. The challenges include how to effectively extract information and eliminate noise when incorporating co-click information into a deep model, while meeting the demands of billion-scale data for real-time online inference. To handle the noise in co-click relations, we first propose a web-scale Multi-Intention Co-click document Graph (MICG) which builds co-click connections between documents on the click-intention level rather than the document level. Then we present an encoding framework, MIRA, based on BERT and graph attention networks, which leverages a two-factor attention mechanism to aggregate neighbours. To meet online latency requirements, we only involve neighbour information on the document side, which avoids the time-consuming query-neighbour search in real-time serving. We conduct extensive offline experiments on both a public dataset and a private web-scale dataset from two major commercial search engines, demonstrating the effectiveness and scalability of the proposed method compared with several baselines. A further case study reveals that co-click relations mainly help improve web search quality in two respects: key concept enhancement and query term complementation.
| false
| false
| false
| false
| false
| true
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 185,457
|
1804.09288
|
A Closer Look at Weak Label Learning for Audio Events
|
Audio content analysis in terms of sound events is an important research problem for a variety of applications. Recently, the development of weak labeling approaches for audio or sound event detection (AED) and the availability of large-scale weakly labeled datasets have finally opened up the possibility of large-scale AED. However, a deeper understanding of how weak labels affect the learning of sound events is still missing from the literature. In this work, we first describe a CNN-based approach for weakly supervised training of audio events. The approach follows some basic design principles desirable in a learning method relying on weakly labeled audio. We then describe important characteristics which naturally arise in weakly supervised learning of sound events. We show how these aspects of weak labels affect the generalization of models. More specifically, we study how characteristics such as label density and label corruption affect weakly supervised training for audio events. We also study the feasibility of directly obtaining weakly labeled data from the web without any manual labeling and compare it with a dataset which has been manually labeled. The analysis and understanding of these factors should be taken into account in the development of future weak label learning methods. AudioSet, a large-scale weakly labeled dataset for sound events, is used in our experiments.
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 95,939
|
2405.06363
|
Projection by Convolution: Optimal Sample Complexity for Reinforcement
Learning in Continuous-Space MDPs
|
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators. Given access to a generative model, we achieve rate-optimal sample complexity by performing a simple, \emph{perturbed} version of least-squares value iteration with orthogonal trigonometric polynomials as features. Key to our solution is a novel projection technique based on ideas from harmonic analysis. Our $\widetilde{\mathcal{O}}(\varepsilon^{-2-d/(\nu+1)})$ sample complexity, where $d$ is the dimension of the state-action space and $\nu$ the order of smoothness, recovers the state-of-the-art result of discretization approaches for the special case of Lipschitz MDPs $(\nu=0)$. At the same time, for $\nu\to\infty$, it recovers and greatly generalizes the $\mathcal{O}(\varepsilon^{-2})$ rate of low-rank MDPs, which are more amenable to regression approaches. In this sense, our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 453,272
|
1202.3215
|
Data quality measurement on categorical data using genetic algorithm
|
Data quality on categorical attributes is a difficult problem that has not received as much attention as its numerical counterpart. Our basic idea is to employ association rules for data quality measurement. Strong rule generation is an important area of data mining. Association rule mining problems can be considered as multi-objective problems rather than single-objective ones. The main area of concentration was the rules generated by association rule mining using a genetic algorithm. The advantage of using a genetic algorithm to discover high-level prediction rules is that it performs a global search and copes better with attribute interaction than the greedy rule induction algorithms often used in data mining. The genetic algorithm based approach utilizes the linkage between association rules and feature selection. In this paper, we put forward a multi-objective genetic algorithm approach for data quality on categorical attributes. The results show that our approach performs well with respect to objectives like accuracy, completeness, comprehensibility and interestingness.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 14,336
|
2408.08005
|
Inversion-DeepONet: A Novel DeepONet-Based Network with Encoder-Decoder
for Full Waveform Inversion
|
Full waveform inversion (FWI) plays a crucial role in the field of geophysics. There has been a large body of research on applying deep learning (DL) methods to FWI. The success of DL-FWI relies significantly on the quantity and diversity of the datasets. Nevertheless, existing FWI datasets, like OpenFWI, where sources have fixed locations or identical frequencies, provide limited information and do not represent complex real-world scenes. For instance, low frequencies help in resolving larger-scale structures, while high frequencies allow for more detailed subsurface features. We consider that simultaneously using sources with different frequencies, instead of performing inversion with low-frequency data first and then gradually introducing higher-frequency data, is well founded and offers potential advantages. Hence, we develop three enhanced datasets based on OpenFWI where sources have varying locations, frequencies, or both. Moreover, we propose a novel deep operator network (DeepONet) architecture, Inversion-DeepONet, for FWI. We utilize a convolutional neural network (CNN) to extract features from seismic data in the branch net. Source parameters, such as locations and frequencies, are fed to the trunk net. Then another CNN is employed as the decoder of the DeepONet to reconstruct the velocity models more effectively. Through experiments, we confirm the superior accuracy and generalization ability of our network compared with existing data-driven FWI methods.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 480,815
|
2106.08590
|
Domain Consistency Regularization for Unsupervised Multi-source Domain
Adaptive Classification
|
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years. Compared with single-source unsupervised domain adaptation (SUDA), domain shift in MUDA exists not only between the source and target domains but also among multiple source domains. Most existing MUDA algorithms focus on extracting domain-invariant representations among all domains whereas the task-specific decision boundaries among classes are largely neglected. In this paper, we propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification (CRMA). CRMA aligns not only the distributions of each pair of source and target domains but also that of all domains. For each pair of source and target domains, we employ an intra-domain consistency to regularize a pair of domain-specific classifiers to achieve intra-domain alignment. In addition, we design an inter-domain consistency that targets joint inter-domain alignment among all domains. To address different similarities between multiple source domains and the target domain, we design an authorization strategy that assigns different authorities to domain-specific classifiers adaptively for optimal pseudo label prediction and self-training. Extensive experiments show that CRMA tackles unsupervised domain adaptation effectively under a multi-source setup and achieves superior adaptation consistently across multiple MUDA datasets.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 241,345
|
2204.14079
|
Fix the Noise: Disentangling Source Feature for Transfer Learning of
StyleGAN
|
Transfer learning of StyleGAN has recently shown great potential to solve diverse tasks, especially in domain translation. Previous methods utilized a source model by swapping or freezing weights during transfer learning; however, they have limitations in visual quality and in controlling source features. In other words, they require additional models that are computationally demanding and have restricted control steps that prevent a smooth transition. In this paper, we propose a new approach to overcome these limitations. Instead of swapping or freezing, we introduce a simple feature matching loss to improve generation quality. In addition, to control the degree of source features, we train a target model with the proposed strategy, FixNoise, to preserve the source features only in a disentangled subspace of the target feature space. Owing to the disentangled feature space, our method can smoothly control the degree of source features in a single model. Extensive experiments demonstrate that the proposed method can generate more consistent and realistic images than previous works.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 294,047
|