Dataset schema (one record per row): id (string, 9–16 chars) · title (string, 4–278 chars) · abstract (string, 3–4.08k chars) · 18 label columns (bool, 2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other · __index_level_0__ (int64, 0–541k).

Each record below gives the arXiv id, title, and abstract, followed by a Labels line listing the label columns that are true (all others are false) and the record's __index_level_0__ value.
2411.02911
Synergizing Hyper-accelerated Power Optimization and Wavelength-Dependent QoT-Aware Cross-Layer Design in Next-Generation Multi-Band EONs
The extension of elastic optical networks (EON) to multi-band transmission (MB-EON) shows promise in enhancing spectral efficiency, throughput, and long-term cost-effectiveness for telecom operators. However, designing MB-EON networks introduces complex challenges, notably the optimization of physical parameters like optical power and quality of transmission (QoT). Frequency-dependent characteristics of fiber, such as loss, dispersion, and nonlinear effects, alongside inter-channel stimulated Raman scattering, pose significant hurdles when extending beyond the L+C (LC) band to a continuous spectrum over 100 nm. In this study, we propose a span-by-span methodology for optimal power allocation, introducing two hyper-accelerated power optimization (HPO) strategies: flat launch power (FLP) and flat received power (FRP). These approaches significantly expedite network power optimization while preserving the stability of running services. Our comparative analysis of FLP and FRP models reveals that while FRP has a minimal effect on capacity (increasing less than 10 Tbps for an L+C+S (LCS) system over 100 km), it improves flatness and GSNR/OSNR metrics in the S-band by approximately 2/0 dB and 2.5/6 dB, respectively. A network-wide analysis across various topologies shows that the FRP technique enhances minimum GSNR, contributing to a throughput increase of 12% to 75%, depending on network scale, at a 1% bandwidth blocking rate. Lastly, our application of HPO in MB-EON for both local and global power optimization demonstrates that while both approaches offer comparable performance, global optimization is simpler and more cost-effective for large-scale networks.
Labels: cs.SY · __index_level_0__: 505,718
2110.06803
Learn to Ignore: Domain Adaptation for Multi-Site MRI Analysis
The limited availability of large image datasets, mainly due to data privacy and differences in acquisition protocols or hardware, is a significant issue in the development of accurate and generalizable machine learning methods in medicine. This is especially the case for Magnetic Resonance (MR) images, where different MR scanners introduce a bias that limits the performance of a machine learning model. We present a novel method that learns to ignore the scanner-related features present in MR images, by introducing specific additional constraints on the latent space. We focus on a real-world classification scenario, where only a small dataset provides images of all classes. Our method \textit{Learn to Ignore (L2I)} outperforms state-of-the-art domain adaptation methods on a multi-site MR dataset for a classification task between multiple sclerosis patients and healthy controls.
Labels: cs.CV · __index_level_0__: 260,747
1903.05369
Face Liveness Detection Based on Client Identity Using Siamese Network
Face liveness detection is an essential prerequisite for face recognition applications. Previous face liveness detection methods usually train a binary classifier to differentiate between a fake face and a real face before face recognition. The client identity information is not utilized in previous face liveness detection methods. However, in practical face recognition applications, face spoofing attacks are always aimed at a specific client, and the client identity information can provide useful clues for face liveness detection. In this paper, we propose a face liveness detection method based on the client identity using a Siamese network. We detect face liveness after face recognition instead of before face recognition, that is, we detect face liveness with the client identity information. We train a Siamese network with image pairs. Each image pair consists of two real face images or one real and one fake face image. The face images in each pair come from the same client. Given a test face image, it is first recognized by the face recognition system, and then the real face image of the identified client is retrieved to help the face liveness detection. Experimental results demonstrate the effectiveness of our method.
Labels: cs.CV · __index_level_0__: 124,153
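The pair-based training idea in the abstract above lends itself to a short sketch. Below is a minimal PyTorch illustration, assuming a toy CNN encoder, an absolute-difference pair head, and random tensors standing in for face images; it shows the pairing scheme only, not the paper's actual architecture or training setup.

```python
# Minimal sketch of pair-based liveness verification with a Siamese
# encoder (illustrative architecture; not the paper's exact model).
import torch
import torch.nn as nn

class SiameseLiveness(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        # Shared CNN encoder applied to both images of a pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
        # Head decides whether the pair is (real, real) or (real, fake).
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, enrolled, probe):
        # Same-client pair: an enrolled real image and a probe image.
        diff = torch.abs(self.encoder(enrolled) - self.encoder(probe))
        return self.head(diff)  # logit: high = live pair (illustrative)

model = SiameseLiveness()
loss_fn = nn.BCEWithLogitsLoss()
enrolled = torch.randn(8, 3, 112, 112)        # retrieved real client faces
probe = torch.randn(8, 3, 112, 112)           # test faces after recognition
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = both real, 0 = one fake
loss = loss_fn(model(enrolled, probe), labels)
loss.backward()
```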
2209.13724
Stochastic projection based approach for gradient free physics informed learning
We propose a stochastic projection-based gradient-free physics-informed neural network. The proposed approach, referred to as the stochastic projection based physics informed neural network (SP-PINN), blends upscaled stochastic projection theory with the recently proposed physics-informed neural network. This results in a framework that is robust and can solve problems involving complex solution domains and discontinuities. SP-PINN is a gradient-free approach which addresses the computational bottleneck associated with automatic differentiation in conventional PINNs. The efficacy of the proposed approach is illustrated by a number of examples involving regular domains, complex domains, complex responses, and phase-field-based fracture mechanics problems. Case studies varying the network architecture (activation function) and the number of collocation points are also presented.
Labels: cs.CE · __index_level_0__: 320,000
1204.2649
Multiuser Switched Diversity Scheduling Schemes
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched-diversity systems and compare it with the rate region of full-feedback multiuser diversity systems. We also propose a novel proportional fair multiuser switched-diversity scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full-feedback systems in Rayleigh fading conditions.
Labels: cs.IT · __index_level_0__: 15,429
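The threshold-based, ordered probing that the abstract above contrasts with full feedback can be simulated in a few lines. A minimal numpy sketch with an illustrative threshold and Rayleigh-faded (exponentially distributed) channel powers; the rate and feedback counts it prints are for intuition only, not the paper's analysis.

```python
# Sketch of threshold-based switched-diversity scheduling vs. full-feedback
# selection under Rayleigh fading (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots, thresh = 8, 100_000, 1.0   # threshold on channel power

snr = rng.exponential(scale=1.0, size=(n_slots, n_users))  # Rayleigh power

# Switched scheduling: probe users in a fixed order, pick the first above
# the threshold; if none qualifies, fall back to the last probed user.
above = snr >= thresh
first = np.argmax(above, axis=1)
none_ok = ~above.any(axis=1)
first[none_ok] = n_users - 1
switched_rate = np.log2(1 + snr[np.arange(n_slots), first]).mean()

# Full feedback: every user reports its CSI, scheduler picks the best.
full_rate = np.log2(1 + snr.max(axis=1)).mean()

feedback_msgs = np.where(none_ok, n_users, first + 1).mean()
print(f"switched: {switched_rate:.2f} bits/s/Hz, "
      f"{feedback_msgs:.2f} feedback msgs/slot")
print(f"full:     {full_rate:.2f} bits/s/Hz, {n_users} feedback msgs/slot")
```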
2501.15554
BoTier: Multi-Objective Bayesian Optimization with Tiered Composite Objectives
Scientific optimization problems are usually concerned with balancing multiple competing objectives, which come as preferences over both the outcomes of an experiment (e.g. maximize the reaction yield) and the corresponding input parameters (e.g. minimize the use of an expensive reagent). Typically, practical and economic considerations define a hierarchy over these objectives, which must be reflected in algorithms for sample-efficient experiment planning. Herein, we introduce BoTier, a composite objective that can flexibly represent a hierarchy of preferences over both experiment outcomes and input parameters. We provide systematic benchmarks on synthetic and real-life surfaces, demonstrating the robust applicability of BoTier across a number of use cases. Importantly, BoTier is implemented in an auto-differentiable fashion, enabling seamless integration with the BoTorch library, thereby facilitating adoption by the scientific community.
Labels: cs.LG · __index_level_0__: 527,606
0803.2925
Equivalence of Probabilistic Tournament and Polynomial Ranking Selection
Crucial to an Evolutionary Algorithm's performance is its selection scheme. We mathematically investigate the relation between polynomial rank and probabilistic tournament methods, which are (respectively) generalisations of the popular linear ranking and tournament selection schemes. We show that every probabilistic tournament is equivalent to a unique polynomial rank scheme. In fact, we derive explicit operators for translating between these two types of selection. Of particular importance is that most linear and most practical quadratic rank schemes are probabilistic tournaments.
Labels: cs.NE · __index_level_0__: 1,461
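For readers unfamiliar with the two selection families the abstract above relates, here is a minimal Python sketch of one common variant of each; the parameter names (p, sp) and the k=2 tournament variant are illustrative assumptions, not the paper's exact formulations.

```python
# Sketch of the two selection families the paper relates: probabilistic
# tournament and (linear) rank-based selection over a ranked population.
import random

def probabilistic_tournament(ranked, k=2, p=0.8):
    """Sample k contestants uniformly; the best of them wins with
    probability p, otherwise the worst wins (one common variant).
    ranked[0] is the best individual."""
    contestants = random.sample(range(len(ranked)), k)
    winner = min(contestants) if random.random() < p else max(contestants)
    return ranked[winner]

def linear_rank_selection(ranked, sp=1.5):
    """Linear ranking: rank i (0 = best) gets weight proportional to
    sp - (2*sp - 2) * i / (n - 1), with selection pressure 1 <= sp <= 2."""
    n = len(ranked)
    weights = [sp - (2 * sp - 2) * i / (n - 1) for i in range(n)]
    return random.choices(ranked, weights=weights, k=1)[0]

population = ["a", "b", "c", "d", "e"]   # pretend pre-ranked, best first
print(probabilistic_tournament(population))
print(linear_rank_selection(population))
```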
1911.10640
Algorithmic Bias in Recidivism Prediction: A Causal Perspective
ProPublica's analysis of recidivism predictions produced by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software tool has shown that the predictions were racially biased against African American defendants. We analyze the COMPAS data using a causal reformulation of the underlying algorithmic fairness problem. Specifically, we assess whether COMPAS exhibits racial bias against African American defendants using FACT, a recently introduced causality-grounded measure of algorithmic fairness. We use the Neyman-Rubin potential outcomes framework for causal inference from observational data to estimate FACT from the COMPAS data. Our analysis offers strong evidence that COMPAS exhibits racial bias against African American defendants. We further show that the FACT estimates from the COMPAS data are robust in the presence of unmeasured confounding.
Labels: cs.AI, cs.LG · __index_level_0__: 154,893
1904.10230
A Large RGB-D Dataset for Semi-supervised Monocular Depth Estimation
Current self-supervised methods for monocular depth estimation are largely based on deeply nested convolutional networks that leverage stereo image pairs or monocular sequences during a training phase. However, they often exhibit inaccurate results around occluded regions and depth boundaries. In this paper, we present a simple yet effective approach for monocular depth estimation using stereo image pairs. The study aims to propose a student-teacher strategy in which a shallow student network is trained with the auxiliary information obtained from a deeper and more accurate teacher network. Specifically, we first train the stereo teacher network by fully utilizing the binocular perception of 3-D geometry and then use the depth predictions of the teacher network to train the student network for monocular depth inference. This enables us to exploit all available depth data from massive unlabeled stereo pairs. We propose a strategy that involves the use of a data ensemble to merge the multiple depth predictions of the teacher network to improve the training samples by collecting non-trivial knowledge beyond a single prediction. To refine the inaccurate depth estimation that is used when training the student network, we further propose a stereo confidence-guided regression loss that handles unreliable pseudo depth values in occlusions, texture-less regions, and repetitive patterns. To complement the existing dataset comprising outdoor driving scenes, we built a novel large-scale dataset consisting of one million outdoor stereo images taken using hand-held stereo cameras. Finally, we demonstrate that the monocular depth estimation network provides feature representations that are suitable for high-level vision tasks. The experimental results for various outdoor scenarios demonstrate the effectiveness and flexibility of our approach, which outperforms state-of-the-art approaches.
Labels: cs.CV · __index_level_0__: 128,580
2105.14280
Hashing-Accelerated Graph Neural Networks for Link Prediction
Networks are ubiquitous in the real world. Link prediction, as one of the key problems for network-structured data, aims to predict whether there exists a link between two nodes. Traditional approaches are based on explicit similarity computation between compact node representations obtained by embedding each node into a low-dimensional space. In order to efficiently handle the intensive similarity computation in link prediction, the hashing technique has been successfully used to produce the node representation in the Hamming space. However, hashing-based link prediction algorithms face accuracy loss from randomized hashing techniques or inefficiency from learning-to-hash techniques in the embedding process. Currently, the Graph Neural Network (GNN) framework has been widely applied to graph-related tasks in an end-to-end manner, but it commonly requires substantial computational resources and memory due to massive parameter learning, which makes GNN-based algorithms impractical without the help of a powerful workhorse. In this paper, we propose a simple and effective model called #GNN, which balances the trade-off between accuracy and efficiency. #GNN is able to efficiently acquire node representations in the Hamming space for link prediction by exploiting the randomized hashing technique to implement message passing and capture high-order proximity in the GNN framework. Furthermore, we characterize the discriminative power of #GNN in probability. Extensive experimental results demonstrate that the proposed #GNN algorithm achieves accuracy comparable to the learning-based algorithms and outperforms the randomized algorithm, while running significantly faster than the learning-based algorithms. The proposed algorithm also shows excellent scalability on large-scale networks with limited resources.
Labels: cs.SI, cs.LG · __index_level_0__: 237,613
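The core trick described in the abstract above, implementing message passing with randomized hashing so that similar neighborhoods receive similar signatures, can be sketched compactly. A minimal illustration assuming affine min-hashes and plain neighbor-min aggregation; the real #GNN construction and its probabilistic guarantees are more involved.

```python
# Sketch of hashing-style message passing: each iteration updates a node's
# signature by taking elementwise minima with its neighbors' signatures,
# so nodes with similar neighborhoods end up with similar signatures.
import random

def minhash_signatures(adj, n_hashes=16, n_iters=2, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    # One randomized affine hash per signature slot (illustrative choice).
    P = 2_147_483_647
    hashes = [(rng.randrange(1, P), rng.randrange(P)) for _ in range(n_hashes)]
    sig = [[(a * v + b) % P for a, b in hashes] for v in range(n)]  # ids
    for _ in range(n_iters):
        new_sig = []
        for v in range(n):
            merged = list(sig[v])
            for u in adj[v]:                     # "message passing" step
                merged = [min(m, s) for m, s in zip(merged, sig[u])]
            new_sig.append(merged)
        sig = new_sig
    return sig

def hamming_similarity(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
sig = minhash_signatures(adj)
print(hamming_similarity(sig[0], sig[1]))   # higher = more likely linked
```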
2402.11877
Finite-Time Error Analysis of Online Model-Based Q-Learning with a Relaxed Sampling Model
Reinforcement learning has witnessed significant advancements, particularly with the emergence of model-based approaches. Among these, $Q$-learning has proven to be a powerful algorithm in model-free settings. However, the extension of $Q$-learning to a model-based framework remains relatively unexplored. In this paper, we delve into the sample complexity of $Q$-learning when integrated with a model-based approach. Through theoretical analyses and empirical evaluations, we seek to elucidate the conditions under which model-based $Q$-learning excels in terms of sample efficiency compared to its model-free counterpart.
Labels: cs.AI, cs.LG · __index_level_0__: 430,618
2410.07046
S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning
Recently, differentiable mask pruning methods optimize the continuous relaxation architecture (soft network) as the proxy of the pruned discrete network (hard network) for superior sub-architecture search. However, due to the agnostic impact of the discretization process, the hard network struggles to match the representational capacity of the soft network, a mismatch known as the discretization gap, which severely spoils the pruning performance. In this paper, we first investigate the discretization gap and propose a novel structural differentiable mask pruning framework named S2HPruner to bridge the discretization gap in a one-stage manner. In the training procedure, S2HPruner forwards both the soft network and its corresponding hard network, then distills the hard network under the supervision of the soft network. To optimize the mask and prevent performance degradation, we propose a decoupled bidirectional knowledge distillation. It blocks the weight updating from the hard to the soft network while maintaining the gradient corresponding to the mask. Compared with existing pruning arts, S2HPruner achieves surpassing pruning performance without fine-tuning on comprehensive benchmarks, including CIFAR-100, Tiny ImageNet, and ImageNet with a variety of network architectures. Besides, investigation and analysis experiments explain the effectiveness of S2HPruner. Codes will be released soon.
Labels: cs.CV · __index_level_0__: 496,443
2110.09380
Learning multiplane images from single views with self-supervision
Generating static novel views from an already captured image is a hard task in computer vision and graphics, in particular when the single input image has dynamic parts such as persons or moving objects. In this paper, we tackle this problem by proposing a new framework, called CycleMPI, that is capable of learning a multiplane image representation from single images through a cyclic training strategy for self-supervision. Our framework does not require stereo data for training, therefore it can be trained with massive visual data from the Internet, resulting in a better generalization capability even for very challenging cases. Although our method does not require stereo data for supervision, it reaches results on stereo datasets comparable to the state of the art in a zero-shot scenario. We evaluated our method on RealEstate10K and Mannequin Challenge datasets for view synthesis and presented qualitative results on Places II dataset.
Labels: cs.CV · __index_level_0__: 261,791
2409.13645
DP$^2$-FedSAM: Enhancing Differentially Private Federated Learning Through Personalized Sharpness-Aware Minimization
Federated learning (FL) is a distributed machine learning approach that allows multiple clients to collaboratively train a model without sharing their raw data. To prevent sensitive information from being inferred through the model updates shared in FL, differentially private federated learning (DPFL) has been proposed. DPFL ensures formal and rigorous privacy protection in FL by clipping and adding random noise to the shared model updates. However, the existing DPFL methods often result in severe model utility degradation, especially in settings with data heterogeneity. To enhance model utility, we propose a novel DPFL method named DP$^2$-FedSAM: Differentially Private and Personalized Federated Learning with Sharpness-Aware Minimization. DP$^2$-FedSAM leverages personalized partial model-sharing and sharpness-aware minimization optimizer to mitigate the adverse impact of noise addition and clipping, thereby significantly improving model utility without sacrificing privacy. From a theoretical perspective, we provide a rigorous theoretical analysis of the privacy and convergence guarantees of our proposed method. To evaluate the effectiveness of DP$^2$-FedSAM, we conduct extensive evaluations based on common benchmark datasets. Our results verify that our method improves the privacy-utility trade-off compared to the existing DPFL methods, particularly in heterogeneous data settings.
Labels: cs.LG, cs.CR, Other · __index_level_0__: 490,081
2201.06309
Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition
Emotion recognition is a challenging and actively-studied research area that plays a critical role in emotion-aware human-computer interaction systems. In a multimodal setting, temporal alignment between different modalities has not been well investigated yet. This paper presents a new model named the Gated Bidirectional Alignment Network (GBAN), which consists of an attention-based bidirectional alignment network over LSTM hidden states to explicitly capture the alignment relationship between speech and text, and a novel group gated fusion (GGF) layer to integrate the representations of different modalities. We empirically show that the attention-aligned representations outperform the last-hidden-states of LSTM significantly, and the proposed GBAN model outperforms existing state-of-the-art multimodal approaches on the IEMOCAP dataset.
Labels: cs.SD, cs.CL · __index_level_0__: 275,685
2303.10294
Forecasting COVID-19 Case Counts Based on 2020 Ontario Data
Objective: To develop machine learning models that can predict the number of COVID-19 cases per day given the last 14 days of environmental and mobility data. Approach: COVID-19 data from four counties around Toronto, Ontario, were used. Data were prepared into daily records containing the number of new COVID case counts, patient demographic data, outdoor weather variables, indoor environment factors, and human movement based on cell mobility and public health restrictions. This data was analyzed to determine the most important variables and their interactions. Predictive models were developed using CNN and LSTM deep neural network approaches. A 5-fold chronological cross-validation approach used these methods to develop predictive models using data from Mar 1 to Oct 14 2020, and test them on data covering Oct 15 to Dec 24 2020. Results: The best LSTM models forecasted tomorrow's daily COVID case counts with 90.7% accuracy, and the 7-day rolling average COVID case counts with 98.1% accuracy using independent test data. The best models to forecast the next 7 days of daily COVID case counts did so with 79.4% accuracy over all days. Models forecasting the 7-day rolling average case counts had a mean accuracy of 83.6% on the same test set. Conclusions: Our findings point to the importance of indoor humidity for the transmission of a virus such as COVID-19. During the coldest portions of the year, when humans spend greater amounts of time indoors or in vehicles, air quality drops within buildings, most significantly indoor relative humidity levels. Moderate to high indoor temperatures coupled with low IRH (below 20%) create conditions where viral transmission is more likely: water vapour ejected from an infected person's mouth can remain longer in the air because of evaporation, and dryness, particularly in a recipient's airway, promotes transmission.
Labels: cs.LG · __index_level_0__: 352,384
2410.01393
Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack
With the development and application of deep learning in signal detection tasks, the vulnerability of neural networks to adversarial attacks has also become a security threat to signal detection networks. This paper defines a signal adversarial examples generation model for signal detection networks from the perspective of adding perturbations to the signal. The model uses the inequality relationship of the L2-norm between the time domain and the time-frequency domain to constrain the energy of signal perturbations. Building upon this model, we propose a method for generating signal adversarial examples utilizing gradient-based attacks and the Short-Time Fourier Transform. The experimental results show that under the constraint of a signal perturbation energy ratio of less than 3%, our adversarial attack resulted in a 28.1% reduction in the mean Average Precision (mAP), a 24.7% reduction in recall, and a 30.4% reduction in precision of the signal detection network. Compared to random noise perturbation of equivalent intensity, our adversarial attack demonstrates a significant attack effect.
Labels: cs.CV, cs.CR · __index_level_0__: 493,748
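The 3% perturbation-energy budget mentioned in the abstract above amounts to a simple projection step. A minimal numpy sketch of that constraint alone, with the gradient-based attack and STFT machinery omitted; the signal and perturbation here are synthetic placeholders.

```python
# Sketch of the energy-ratio constraint when adding a perturbation to a
# signal: rescale delta so that ||delta||^2 / ||x||^2 <= budget.
import numpy as np

def project_energy_ratio(x, delta, budget=0.03):
    """Scale delta so its energy is at most `budget` times the signal energy."""
    max_e = budget * np.sum(x ** 2)
    e_pert = np.sum(delta ** 2)
    if e_pert > max_e:
        delta = delta * np.sqrt(max_e / e_pert)
    return delta

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1024))   # clean toy signal
delta = rng.normal(size=x.shape)                      # candidate perturbation
x_adv = x + project_energy_ratio(x, delta)
print(np.sum((x_adv - x) ** 2) / np.sum(x ** 2))      # <= 0.03
```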
2408.15488
Legilimens: Practical and Unified Content Moderation for Large Language Model Services
Given the societal impact of unsafe content generated by large language models (LLMs), ensuring that LLM services comply with safety standards is a crucial concern for LLM service providers. Common content moderation methods are limited by an effectiveness-and-efficiency dilemma, where simple models are fragile while sophisticated models consume excessive computational resources. In this paper, we reveal for the first time that effective and efficient content moderation can be achieved by extracting conceptual features from chat-oriented LLMs, despite their initial fine-tuning for conversation rather than content moderation. We propose a practical and unified content moderation framework for LLM services, named Legilimens, which features both effectiveness and efficiency. Our red-team model-based data augmentation enhances the robustness of Legilimens against state-of-the-art jailbreaking. Additionally, we develop a framework to theoretically analyze the cost-effectiveness of Legilimens compared to other methods. We have conducted extensive experiments on five host LLMs, seventeen datasets, and nine jailbreaking methods to verify the effectiveness, efficiency, and robustness of Legilimens against normal and adaptive adversaries. A comparison of Legilimens with both commercial and academic baselines demonstrates the superior performance of Legilimens. Furthermore, we confirm that Legilimens can be applied to few-shot scenarios and extended to multi-label classification tasks.
Labels: cs.CL · __index_level_0__: 483,951
1812.02497
Active Learning Methods based on Statistical Leverage Scores
In many real-world machine learning applications, unlabeled data are abundant whereas class labels are expensive and scarce. An active learner aims to obtain a model of high accuracy with as few labeled instances as possible by effectively selecting useful examples for labeling. We propose a new selection criterion that is based on statistical leverage scores and present two novel active learning methods based on this criterion: ALEVS for querying a single example at each iteration and DBALEVS for querying a batch of examples. To assess the representativeness of the examples in the pool, ALEVS and DBALEVS use the statistical leverage scores of the kernel matrices computed on the examples of each class. Additionally, DBALEVS selects a diverse set of examples that are highly representative but are dissimilar to already labeled examples by maximizing a submodular set function defined with the statistical leverage scores and the kernel matrix computed on the pool of examples. The submodularity property of the set scoring function lets us identify batches that are within a constant factor of the optimal batch in an efficient manner. Our experiments on diverse datasets show that querying based on leverage scores is a powerful strategy for active learning.
Labels: cs.LG · __index_level_0__: 115,770
2104.05575
GAttANet: Global attention agreement for convolutional neural networks
Transformer attention architectures, similar to those developed for natural language processing, have recently proved efficient also in vision, either in conjunction with or as a replacement for convolutional layers. Typically, visual attention is inserted in the network architecture as a (series of) feedforward self-attention module(s), with mutual key-query agreement as the main selection and routing operation. However efficient, this strategy is only vaguely compatible with the way that attention is implemented in biological brains: as a separate and unified network of attentional selection regions, receiving inputs from and exerting modulatory influence on the entire hierarchy of visual regions. Here, we report experiments with a simple such attention system that can improve the performance of standard convolutional networks, with relatively few additional parameters. Each spatial position in each layer of the network produces a key-query vector pair; all queries are then pooled into a global attention query. On the next iteration, the match between each key and the global attention query modulates the network's activations -- emphasizing or silencing the locations that agree or disagree (respectively) with the global attention system. We demonstrate the usefulness of this brain-inspired Global Attention Agreement network (GAttANet) for various convolutional backbones (from a simple 5-layer toy model to a standard ResNet50 architecture) and datasets (CIFAR10, CIFAR100, Imagenet-1k). Each time, our global attention system improves accuracy over the corresponding baseline.
Labels: cs.CV · __index_level_0__: 229,775
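The global attention agreement mechanism, as described in the abstract above, can be sketched in a few lines of numpy: every position emits a key and a query, all queries are pooled into one global query, and each position's activation is scaled by how well its key agrees with that global query. The shapes, mean-pooling, and sigmoid gating below are illustrative assumptions, not the paper's exact design.

```python
# Sketch of global-attention agreement: pool all queries into one global
# query, then scale each position's activation by how well its key
# matches that global query.
import numpy as np

rng = np.random.default_rng(0)
n_pos, d_feat, d_attn = 49, 64, 16          # e.g. 7x7 spatial positions

feats = rng.normal(size=(n_pos, d_feat))
W_k = rng.normal(size=(d_feat, d_attn)) / np.sqrt(d_feat)
W_q = rng.normal(size=(d_feat, d_attn)) / np.sqrt(d_feat)

keys = feats @ W_k                          # key per spatial position
queries = feats @ W_q                       # query per spatial position
global_q = queries.mean(axis=0)             # pooled global attention query

# Agreement between each key and the global query modulates activations:
# agreeing positions are emphasized, disagreeing ones are silenced.
agreement = keys @ global_q / np.sqrt(d_attn)
gate = 1.0 / (1.0 + np.exp(-agreement))     # squash to (0, 1)
modulated = feats * gate[:, None]
print(modulated.shape)                      # (49, 64)
```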
cs/0510020
Sur le statut référentiel des entités nommées (On the referential status of named entities)
We show in this paper that, on the one hand, named entities can be designated using different denominations and that, on the other hand, names denoting named entities are polysemous. The analysis cannot be limited to reference resolution but should take into account naming strategies, which are mainly based on two linguistic operations: synecdoche and metonymy. Lastly, we present a model that explicitly represents the different denominations in discourse, unifying the way linguistic knowledge and world knowledge are represented.
Labels: cs.AI, cs.IR · __index_level_0__: 539,002
2012.02023
Source location on multilayer networks
Nowadays it is not uncommon to have to deal with dissemination on multi-layered networks, and finding the source of such propagation can often be a crucial task. In this paper we tackle this exact problem with a maximum likelihood approach that we extend to be operational on multi-layered graphs. We test our method for source location estimation on synthetic networks and outline its potential strengths and limitations. We also observe some non-trivial and perhaps surprising phenomena: the more of the system one observes, the worse the results become, whereas increased problem complexity in the form of more layers can actually improve our performance.
Labels: cs.SI, cs.CY · __index_level_0__: 209,621
2206.01435
Dual-Port Dynamically Reconfigurable Battery with Semi-Controlled and Fully-Controlled Outputs
Modular multilevel converters (MMC) and cascaded H-bridge (CHB) converters are an established concept in ultra-high voltage systems. In combination with batteries, these circuits allow dynamically changing the series or parallel configuration of subportions of the battery as so-called modular battery integrated converters or reconfigurable batteries, and are being discussed for grid-storage and electromobility applications. A large body of research focuses on such circuits for supplying a single load, such as a motor for electric drives. Modularity, failure tolerance, less dependence on the weakest element of a battery pack, higher controllability, and better efficiency are the main incentives behind this pursuit. However, most studies neglect the auxiliary loads which require isolation from the high-voltage battery. This paper proposes a simple topology and controller that can fork off a second (galvanically isolated) output of a reconfigurable dc battery. The proposed system provides a nonisolated semicontrolled port for the dc link to maintain the operating point of the main inverter(s) close to optimal, while fully controlling an isolated output for the auxiliaries per the safety regulations. The proposed system does not require additional active switches for the auxiliary port and can operate with a wide range of voltages. Simulation and experiments verify the developed analysis.
Labels: cs.SY · __index_level_0__: 300,476
2212.14258
HIER: Metric Learning Beyond Class Labels via Hierarchical Regularization
Supervision for metric learning has long been given in the form of equivalence between human-labeled classes. Although this type of supervision has been a basis of metric learning for decades, we argue that it hinders further advances in the field. In this regard, we propose a new regularization method, dubbed HIER, to discover the latent semantic hierarchy of training data, and to deploy the hierarchy to provide richer and more fine-grained supervision than inter-class separability induced by common metric learning losses. HIER achieves this goal with no annotation for the semantic hierarchy but by learning hierarchical proxies in hyperbolic spaces. The hierarchical proxies are learnable parameters, and each of them is trained to serve as an ancestor of a group of data or other proxies to approximate the semantic hierarchy among them. HIER deals with the proxies along with data in hyperbolic space since the geometric properties of the space are well-suited to represent their hierarchical structure. The efficacy of HIER is evaluated on four standard benchmarks, where it consistently improved the performance of conventional methods when integrated with them, and consequently achieved the best records, surpassing even the existing hyperbolic metric learning technique, in almost all settings.
Labels: cs.AI, cs.CV · __index_level_0__: 338,556
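HIER's reliance on hyperbolic space rests on the standard Poincaré-ball distance, which grows quickly near the boundary and therefore suits tree-like structure. A minimal numpy sketch of that distance follows; the proxy-learning loss itself is not reproduced here, and the example points are illustrative.

```python
# The hyperbolic geometry HIER relies on: geodesic distance in the
# Poincare ball. Distances blow up near the boundary, which suits
# hierarchical (tree-like) embeddings. Standard formula only.
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between points inside the unit Poincare ball."""
    sq = np.sum((x - y) ** 2)
    nx = np.sum(x ** 2)
    ny = np.sum(y ** 2)
    arg = 1 + 2 * sq / ((1 - nx) * (1 - ny) + eps)
    return np.arccosh(arg)

root = np.zeros(2)                       # an "ancestor" proxy near origin
child = np.array([0.85, 0.0])            # an embedding near the boundary
sibling = np.array([0.0, 0.85])
print(poincare_distance(root, child))    # moderate
print(poincare_distance(child, sibling)) # large: the path passes inward
```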
2305.12070
Instrumental Variable Learning for Chest X-ray Classification
The chest X-ray (CXR) is commonly employed to diagnose thoracic illnesses, but the challenge of achieving accurate automatic diagnosis through this method persists due to the complex relationships between pathologies. In recent years, various deep learning-based approaches have been suggested to tackle this problem, but confounding factors such as image resolution or noise often damage model performance. In this paper, we focus on the chest X-ray classification task and propose an interpretable instrumental variable (IV) learning framework to eliminate spurious associations and obtain accurate causal representations. Specifically, we first construct a structural causal model (SCM) for our task and learn the confounders and the preliminary representations of the IV; we then leverage electronic health records (EHR) as auxiliary information and fuse the above features with our transformer-based semantic fusion module, so that the IV carries medical semantics. Meanwhile, the reliability of the IV is further guaranteed via constraints on the mutual information between related causal variables. Finally, our approach's performance is demonstrated using the MIMIC-CXR, NIH ChestX-ray 14, and CheXpert datasets, and we achieve competitive results.
Labels: cs.CV · __index_level_0__: 365,819
2305.15781
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results. Therefore, the reflection on the rationality of designing knowledge distillation (KD) approaches for limited-capacity architectures solely based on small-scale datasets is now deemed imperative. In this paper, we identify the \emph{small data pitfall} that presents in previous KD methods, which results in the underestimation of the power of vanilla KD framework on large-scale datasets such as ImageNet-1K. Specifically, we show that employing stronger data augmentation techniques and using larger datasets can directly decrease the gap between vanilla KD and other meticulously designed KD variants. This highlights the necessity of designing and evaluating KD approaches in the context of practical scenarios, casting off the limitations of small-scale datasets. Our investigation of the vanilla KD and its variants in more complex schemes, including stronger training strategies and different model capacities, demonstrates that vanilla KD is elegantly simple but astonishingly effective in large-scale scenarios. Without bells and whistles, we obtain state-of-the-art ResNet-50, ViT-S, and ConvNeXtV2-T models for ImageNet, which achieve 83.1\%, 84.3\%, and 85.0\% top-1 accuracy, respectively. PyTorch code and checkpoints can be found at https://github.com/Hao840/vanillaKD.
Labels: cs.CV · __index_level_0__: 367,782
2402.03469
Rethinking the Role of Proxy Rewards in Language Model Alignment
Learning from human feedback via proxy reward modeling has been studied to align Large Language Models (LLMs) with human values. However, achieving reliable training through such a proxy reward model (RM) is not a trivial problem, and its behavior has remained a black box. In this paper, we study the role of proxy rewards in LLM alignment via `reverse reward engineering', composing interpretable features into a white-box reward function. We aim to replicate the ground truth (gold) reward signal by achieving a monotonic relationship between the proxy and gold reward signals after training the model using the proxy reward in reinforcement learning (RL). Our findings indicate that successfully emulating the gold reward requires generating responses that are relevant and sufficiently long for open-ended questions, while also ensuring response consistency for closed-ended questions. Furthermore, the resulting models optimized with our devised white-box reward show competitive performance with strong open-source RMs on alignment benchmarks. We highlight its potential usage as a simple but strong reward baseline for LLM alignment, requiring neither an explicit human feedback dataset nor RM training. Our code is available at https://github.com/naver-ai/rethinking-proxy-reward.
Labels: cs.AI, cs.LG, cs.CL · __index_level_0__: 427,024
1812.11293
DeGroot-Friedkin Map in Opinion Dynamics is Mirror Descent
We provide a variational interpretation of the DeGroot-Friedkin map in opinion dynamics. Specifically, we show that the nonlinear dynamics of the DeGroot-Friedkin map can be viewed as mirror descent on the standard simplex with the associated Bregman divergence being equal to the generalized Kullback-Leibler divergence, i.e., an entropic mirror descent. Our results reveal that the DeGroot-Friedkin map drives an individual's social power to be close to her social influence while minimizing the so-called "extropy" -- the entropy of the complementary opinion.
Labels: cs.LG, cs.SY · __index_level_0__: 117,532
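For reference, the entropic mirror descent scheme the abstract above identifies has the standard form below: the Bregman divergence generated by negative entropy on the simplex is the (generalized) KL divergence, which gives a multiplicative-weights closed form. This is the generic update, not the specific potential associated with the DeGroot-Friedkin map.

```latex
% Entropic mirror descent on the probability simplex \Delta^{n-1}.
x_{t+1} \;=\; \arg\min_{x \in \Delta^{n-1}}
  \Big\{ \eta \,\langle \nabla f(x_t),\, x \rangle
       + D_{\mathrm{KL}}\!\left(x \,\middle\|\, x_t\right) \Big\},
\qquad
x_{t+1,i} \;=\; \frac{x_{t,i}\, e^{-\eta\, [\nabla f(x_t)]_i}}
                     {\sum_{j} x_{t,j}\, e^{-\eta\, [\nabla f(x_t)]_j}} .
```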
1603.09247
Statistical Quadrature Evolution by Inference for Continuous-Variable Quantum Key Distribution
We define the statistical quadrature evolution (QE) method for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD protocol uses Gaussian subcarrier quantum continuous variables (CVs) for information transmission. The QE scheme utilizes the theory of mathematical statistics and statistical information processing. The QE model is based on the Gaussian quadrature inference (GQI) framework to provide a minimal error estimate of the CV state quadratures. The QE block evaluates a unique and stable estimation of the non-observable continuous input from the measurement results and through the statistical inference method yielded from the GQI framework. The QE method minimizes the overall expected error by an estimator function and provides a viable, easily implementable, and computationally efficient way to maximize the extractable information from the observed data. The QE framework can be established in an arbitrary CVQKD protocol and measurement setting and is implementable by standard low-complexity functions, which is particularly convenient for experimental CVQKD.
Labels: cs.IT · __index_level_0__: 53,896
1206.6728
Epidemic thresholds of the Susceptible-Infected-Susceptible model on networks: A comparison of numerical and theoretical results
Recent work has shown that different theoretical approaches to the dynamics of the Susceptible-Infected-Susceptible (SIS) model for epidemics lead to qualitatively different estimates for the position of the epidemic threshold in networks. Here we present large-scale numerical simulations of the SIS dynamics on various types of networks, allowing the precise determination of the effective threshold for systems of finite size N. We compare quantitatively the numerical thresholds with theoretical predictions of the heterogeneous mean-field theory and of the quenched mean-field theory. We show that the latter is in general more accurate, scaling with N with the correct exponent, but often failing to capture the correct prefactor.
Labels: cs.SI · __index_level_0__: 17,037
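The two theoretical predictions compared in the abstract above have standard closed forms: the heterogeneous mean-field (HMF) threshold is the degree-moment ratio, and the quenched mean-field (QMF) threshold is the inverse of the largest adjacency eigenvalue. A minimal numpy sketch on a random graph (graph model and size are illustrative):

```python
# HMF threshold: <k> / <k^2>;  QMF threshold: 1 / largest eigenvalue of A.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.02
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops

k = A.sum(axis=1)                             # degree sequence
lambda_hmf = k.mean() / (k ** 2).mean()       # heterogeneous mean-field
lambda_qmf = 1.0 / np.linalg.eigvalsh(A)[-1]  # quenched mean-field

print(f"HMF threshold: {lambda_hmf:.4f}")
print(f"QMF threshold: {lambda_qmf:.4f}")
```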
0709.3034
Query Evaluation in P2P Systems of Taxonomy-based Sources: Algorithms, Complexity, and Optimizations
In this study, we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To the end of laying the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. More expressive forms of taxonomies are also investigated, which however lead to intractability. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.
Labels: cs.DB, Other · __index_level_0__: 673
1704.05566
Simultaneous Policy Learning and Latent State Inference for Imitating Driver Behavior
In this work, we propose a method for learning driver models that account for variables that cannot be observed directly. When trained on a synthetic dataset, our models are able to learn encodings for vehicle trajectories that distinguish between four distinct classes of driver behavior. Such encodings are learned without any knowledge of the number of driver classes or any objective that directly requires the models to learn encodings for each class. We show that driving policies trained with knowledge of latent variables are more effective than baseline methods at imitating the driver behavior that they are trained to replicate. Furthermore, we demonstrate that the actions chosen by our policy are heavily influenced by the latent variable settings that are provided to them.
Labels: cs.AI, cs.LG · __index_level_0__: 72,031
2206.04728
Towards Target Sequential Rules
In many real-world applications, sequential rule mining (SRM) can provide prediction and recommendation functions for a variety of services. It is an important pattern mining technique for discovering all valuable rules, namely sequential rules with high frequency and high confidence. Although several SRM algorithms have been proposed to solve various practical problems, there are no studies on target sequential rules. Targeted sequential rule mining aims at mining the interesting sequential rules that users focus on, thus avoiding the generation of other invalid and unnecessary rules. This approach can further improve the efficiency of users in analyzing rules and reduce the consumption of data resources. In this paper, we provide the relevant definitions of target sequential rules and formulate the problem of targeted sequential rule mining. Furthermore, we propose an efficient algorithm, called targeted sequential rule mining (TaSRM). Several pruning strategies and an optimization are introduced to improve the efficiency of TaSRM. Finally, a large number of experiments are conducted on different benchmarks, and we analyze the results in terms of running time, memory consumption, and scalability, as well as query cases with different query rules. It is shown that the novel algorithm TaSRM and its variants achieve better experimental performance compared to the existing baseline algorithm.
Labels: cs.AI, cs.DB · __index_level_0__: 301,739
2208.04579
Adaptive Zeroth-Order Optimisation of Nonconvex Composite Objectives
In this paper, we propose and analyze algorithms for zeroth-order optimization of non-convex composite objectives, focusing on reducing the complexity dependence on dimensionality. This is achieved by exploiting the low-dimensional structure of the decision set using the stochastic mirror descent method with an entropy-like function, which performs gradient descent in the space equipped with the maximum norm. To improve the gradient estimation, we replace the classic Gaussian smoothing method with a sampling method based on the Rademacher distribution and show that the mini-batch method copes with the non-Euclidean geometry. To avoid tuning hyperparameters, we analyze adaptive stepsizes for the general stochastic mirror descent and show that the adaptive version of the proposed algorithm converges without requiring prior knowledge about the problem.
Labels: cs.LG · __index_level_0__: 312,159
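The Rademacher-based gradient estimator described in the abstract above is easy to sketch. A minimal numpy version with a plain descent step on a toy objective; the paper's actual algorithm uses mirror descent in max-norm geometry with adaptive stepsizes, which is omitted here, and all constants are illustrative.

```python
# Two-point zeroth-order gradient estimator with Rademacher directions
# and mini-batching (illustrative constants; plain descent step).
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-4, batch=8):
    """Average of two-point finite-difference estimates along random
    Rademacher (+/-1) directions."""
    g = np.zeros_like(x)
    for _ in range(batch):
        u = rng.choice([-1.0, 1.0], size=x.shape)   # Rademacher direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / batch

f = lambda x: np.sum(x ** 2)                        # toy smooth objective
x = rng.normal(size=10)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)                   # gradient-free step
print(f(x))                                         # close to 0
```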
2003.10388
Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
Today, text classification models are widely used. However, these classifiers are found to be easily fooled by adversarial examples. Fortunately, standard attacking methods generate adversarial texts in a pair-wise way, that is, an adversarial text can only be created from a real-world text by replacing a few words. In many applications, these texts are limited in number, therefore their corresponding adversarial examples are often not diverse enough and sometimes hard to read, thus they can be easily detected by humans and cannot create chaos at a large scale. In this paper, we propose an end-to-end solution to efficiently generate adversarial texts from scratch using generative models, which are not restricted to perturbing the given texts. We call it unrestricted adversarial text generation. Specifically, we train a conditional variational autoencoder (VAE) with an additional adversarial loss to guide the generation of adversarial examples. Moreover, to improve the validity of adversarial texts, we utilize discriminators and the training framework of generative adversarial networks (GANs) to make adversarial texts consistent with real data. Experimental results on sentiment analysis demonstrate the scalability and efficiency of our method. It can attack text classification models with a higher success rate than existing methods, and provide acceptable quality for humans in the meantime.
Labels: cs.LG, cs.CL · __index_level_0__: 169,321
2004.00451
Spatio-temporal Tubelet Feature Aggregation and Object Linking in Videos
This paper addresses the problem of how to exploit the spatio-temporal information available in videos to improve object detection precision. We propose a two-stage object detector called FANet based on short-term spatio-temporal feature aggregation to give a first detection set, and long-term object linking to refine these detections. Firstly, we generate a set of short tubelet proposals containing the object in $N$ consecutive frames. Then, we aggregate RoI pooled deep features through the tubelet using a temporal pooling operator that summarizes the information with a fixed size output independent of the number of input frames. On top of that, we define a double head implementation that we feed with spatio-temporal aggregated information for spatio-temporal object classification, and with spatial information extracted from the current frame for object localization and spatial classification. Furthermore, we also specialize each head branch architecture to better perform in each task taking into account the input data. Finally, a long-term linking method builds long tubes using the previously calculated short tubelets to overcome detection errors. We have evaluated our model on the widely used ImageNet VID dataset, achieving an 80.9% mAP, which is the new state-of-the-art result for single models. Also, in the challenging small object detection dataset USC-GRAD-STDdb, our proposal outperforms the single frame baseline by 5.4% mAP.
Labels: cs.CV · __index_level_0__: 170,641
0907.1925
Modeling self-organizing traffic lights with elementary cellular automata
There have been several highway traffic models proposed based on cellular automata. The simplest one is elementary cellular automaton rule 184. We extend this model to city traffic with cellular automata coupled at intersections using only rules 184, 252, and 136. The simplicity of the model offers a clear understanding of the main properties of city traffic and its phase transitions. We use the proposed model to compare two methods for coordinating traffic lights: a green-wave method that tries to optimize phases according to expected flows and a self-organizing method that adapts to the current traffic conditions. The self-organizing method delivers considerable improvements over the green-wave method. For low densities, the self-organizing method promotes the formation and coordination of platoons that flow freely in four directions, i.e. with a maximum velocity and no stops. For medium densities, the method allows a constant usage of the intersections, exploiting their maximum flux capacity. For high densities, the method prevents gridlocks and promotes the formation and coordination of "free-spaces" that flow in the opposite direction of traffic.
Labels: cs.AI · __index_level_0__: 4,081
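Rule 184, the base model the abstract above extends to city traffic, takes only a few lines. A minimal numpy sketch on a periodic road (the density and road length are illustrative):

```python
# Elementary cellular automaton rule 184, the minimal highway-traffic
# model: a car (True) advances one cell iff the cell ahead is empty.
import numpy as np

def step_rule184(cells):
    """One synchronous update on a periodic road of boolean cells."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    # Occupied next step iff: a car is here and blocked ahead, or the
    # car behind moves in because this cell is empty.
    return (cells & right) | (left & ~cells)

rng = np.random.default_rng(0)
road = rng.random(40) < 0.4        # boolean road at density ~0.4
for _ in range(10):
    print("".join("#" if c else "." for c in road))
    road = step_rule184(road)
```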
2201.12338
Machine Learning Based Relative Orbit Transfer for Swarm Spacecraft Motion Planning
In this paper we describe a machine learning based framework for spacecraft swarm trajectory planning. In particular, we focus on coordinating the motions of multiple spacecraft in formation flying through passive relative orbit (PRO) transfers. Accounting for spacecraft dynamics while avoiding collisions between the agents makes spacecraft swarm trajectory planning difficult. Centralized approaches can be used to solve this problem, but are computationally demanding and scale poorly with the number of agents in the swarm. As a result, centralized algorithms are ill-suited for real-time trajectory planning on board small spacecraft (e.g. CubeSats) comprising the swarm. In our approach a neural network is used to approximate solutions of a centralized method. The necessary training data is generated using a centralized convex optimization framework through which several instances of the n=10 spacecraft swarm trajectory planning problem are solved. We are interested in answering the following questions, which will give insight on the potential utility of deep learning-based approaches to the multi-spacecraft motion planning problem: 1) Can neural networks produce feasible trajectories that satisfy safety constraints (e.g. collision avoidance) and are low in fuel cost? 2) Can a neural network trained using data for n spacecraft be used to solve problems for spacecraft swarms of differing size?
Labels: cs.RO, cs.MA · __index_level_0__: 277,607
2306.07004
Occlusion-aware Risk Assessment and Driving Strategy for Autonomous Vehicles Using Simplified Reachability Quantification
One of the unresolved challenges for autonomous vehicles is safe navigation among occluded pedestrians and vehicles. Previous approaches included generating phantom vehicles and assessing their risk, but they often made the ego vehicle overly conservative or could not conduct a real-time risk assessment in heavily occluded situations. We propose an efficient occlusion-aware risk assessment method using simplified reachability quantification that quantifies the reachability of phantom agents with a simple distribution model on phantom agents' state. Furthermore, we propose a driving strategy for safe and efficient navigation in occluded areas that sets the speed limit of an autonomous vehicle using the risk of phantom agents. Simulations were conducted to evaluate the performance of the proposed method in various occlusion scenarios involving other vehicles and obstacles. Compared with the baseline case of no occlusion-aware risk assessment, the proposed method increased the traversal time of an intersection by 1.48 times but decreased the average collision rate and discomfort score by up to 6.14 times and 5.03 times, respectively. The proposed method has shown the state-of-the-art level of time efficiency with constant time complexity and computational time less than 5 ms.
Labels: cs.RO · __index_level_0__: 372,839
2302.09235
Generalization and Stability of Interpolating Neural Networks with Minimal Width
We investigate the generalization and optimization properties of shallow neural-network classifiers trained by gradient descent in the interpolating regime. Specifically, in a realizable scenario where model weights can achieve arbitrarily small training error $\epsilon$ and their distance from initialization is $g(\epsilon)$, we demonstrate that gradient descent with $n$ training data achieves training error $O(g(1/T)^2 /T)$ and generalization error $O(g(1/T)^2 /n)$ at iteration $T$, provided there are at least $m=\Omega(g(1/T)^4)$ hidden neurons. We then show that our realizable setting encompasses a special case where data are separable by the model's neural tangent kernel. For this and logistic-loss minimization, we prove the training loss decays at a rate of $\tilde O(1/ T)$ given polylogarithmic number of neurons $m=\Omega(\log^4 (T))$. Moreover, with $m=\Omega(\log^{4} (n))$ neurons and $T\approx n$ iterations, we bound the test loss by $\tilde{O}(1/n)$. Our results differ from existing generalization outcomes using the algorithmic-stability framework, which necessitate polynomial width and yield suboptimal generalization rates. Central to our analysis is the use of a new self-bounded weak-convexity property, which leads to a generalized local quasi-convexity property for sufficiently parameterized neural-network classifiers. Eventually, despite the objective's non-convexity, this leads to convergence and generalization-gap bounds that resemble those found in the convex setting of linear logistic regression.
Labels: cs.LG · __index_level_0__: 346,333
2208.06146
Feature-Based Time-Series Analysis in R using the theft Package
Time series are measured and analyzed across the sciences. One method of quantifying the structure of time series is by calculating a set of summary statistics or `features', and then representing a time series in terms of its properties as a feature vector. The resulting feature space is interpretable and informative, and enables conventional statistical learning approaches, including clustering, regression, and classification, to be applied to time-series datasets. Many open-source software packages for computing sets of time-series features exist across multiple programming languages, including catch22 (22 features: Matlab, R, Python, Julia), feasts (42 features: R), tsfeatures (63 features: R), Kats (40 features: Python), tsfresh (779 features: Python), and TSFEL (390 features: Python). However, there are several issues: (i) a singular access point to these packages is not currently available; (ii) to access all feature sets, users must be fluent in multiple languages; and (iii) these feature-extraction packages lack extensive accompanying methodological pipelines for performing feature-based time-series analysis, such as applications to time-series classification. Here we introduce a solution to these issues in an R software package called theft: Tools for Handling Extraction of Features from Time series. theft is a unified and extendable framework for computing features from the six open-source time-series feature sets listed above. It also includes a suite of functions for processing and interpreting the performance of extracted features, including extensive data-visualization templates, low-dimensional projections, and time-series classification operations. With an increasing volume and complexity of time-series datasets in the sciences and industry, theft provides a standardized framework for comprehensively quantifying and interpreting informative structure in time series.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
312,619
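theft itself is an R package, but the feature-vector paradigm it implements is language-agnostic. Below is a minimal Python sketch of feature-based time-series classification in that spirit; the hand-rolled features and the toy two-class data are illustrative stand-ins for the curated feature sets (catch22, feasts, tsfresh, etc.) that the package wraps.

```python
# Conceptual sketch of feature-based time-series analysis: represent each
# series by an interpretable feature vector, then apply a standard
# classifier. The features here are illustrative, not any published set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def features(ts):
    """Map one time series to a small, interpretable feature vector."""
    diffs = np.diff(ts)
    return np.array([
        ts.mean(),                                           # location
        ts.std(),                                            # scale
        np.corrcoef(ts[:-1], ts[1:])[0, 1],                  # lag-1 autocorr.
        (np.sign(diffs[:-1]) != np.sign(diffs[1:])).mean(),  # turning rate
    ])

rng = np.random.default_rng(0)
# Two toy classes: white noise vs. a strongly autocorrelated random walk.
noise = rng.normal(size=(100, 200))
walk = np.cumsum(rng.normal(size=(100, 200)), axis=1) * 0.1
X = np.array([features(ts) for ts in np.vstack([noise, walk])])
y = np.array([0] * 100 + [1] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```

The appeal of the representation, as the abstract notes, is that the learned model's coefficients refer to named, interpretable properties of the series rather than raw samples.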
2308.12649
APART: Diverse Skill Discovery using All Pairs with Ascending Reward and DropouT
We study diverse skill discovery in reward-free environments, aiming to discover all possible skills in simple grid-world environments where prior methods have struggled to succeed. This problem is formulated as mutual training of skills using an intrinsic reward and a discriminator trained to predict a skill given its trajectory. Our initial solution replaces the standard one-vs-all (softmax) discriminator with a one-vs-one (all pairs) discriminator and combines it with a novel intrinsic reward function and a dropout regularization technique. The combined approach is named APART: Diverse Skill Discovery using All Pairs with Ascending Reward and Dropout. We demonstrate that APART discovers all the possible skills in grid worlds with remarkably fewer samples than previous works. Motivated by the empirical success of APART, we further investigate an even simpler algorithm that achieves the maximum number of skills by altering VIC, rescaling its intrinsic reward, and tuning the temperature of its softmax discriminator. We believe our findings shed light on the crucial factors underlying the success of skill discovery algorithms in reinforcement learning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
387,627
2402.16714
Quantum linear algebra is all you need for Transformer architectures
Generative machine learning methods such as large-language models are revolutionizing the creation of text and images. While these models are powerful, they also consume a large amount of computational resources. The transformer is a key component in large language models that aims to generate a suitable completion of a given partial sequence. In this work, we investigate transformer architectures under the lens of fault-tolerant quantum computing. The input model is one where trained weight matrices are given as block encodings and we construct the query, key, and value matrices for the transformer. We show how to prepare a block encoding of the self-attention matrix, with a new subroutine for the row-wise application of the softmax function. In addition, we combine quantum subroutines to construct important building blocks in the transformer, the residual connection and layer normalization, and the feed-forward neural network. Our subroutines prepare an amplitude encoding of the transformer output, which can be measured to obtain a prediction. Based on common open-source large-language models, we provide insights into the behavior of important parameters determining the run time of the quantum algorithm. We discuss the potential and challenges for obtaining a quantum advantage.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
432,666
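For reference, the classical computation that the quantum subroutines described above block-encode is self-attention with a row-wise softmax, followed by a residual connection and layer normalization. The numpy sketch below shows only that classical baseline, with illustrative dimensions and random weights; it says nothing about the block-encoding construction itself.

```python
# Classical reference computation: self-attention with row-wise softmax,
# plus residual connection and layer normalization. Sizes are illustrative.
import numpy as np

def softmax_rows(M):
    e = np.exp(M - M.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def layer_norm(X, eps=1e-5):
    mu = X.mean(axis=-1, keepdims=True)
    var = X.var(axis=-1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
seq, d = 8, 16                       # sequence length, model dimension
X = rng.normal(size=(seq, d))        # embeddings of the partial sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax_rows(Q @ K.T / np.sqrt(d))   # the self-attention matrix
out = layer_norm(X + A @ V)              # residual connection + layer norm
print(out.shape)                         # (8, 16)
```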
2106.00162
HERALD: An Annotation Efficient Method to Detect User Disengagement in Social Conversations
Open-domain dialog systems have a user-centric goal: to provide humans with an engaging conversation experience. User engagement is one of the most important metrics for evaluating open-domain dialog systems, and could also be used as real-time feedback to benefit dialog policy learning. Existing work on detecting user disengagement typically requires hand-labeling many dialog samples. We propose HERALD, an efficient annotation framework that reframes the training data annotation process as a denoising problem. Specifically, instead of manually labeling training samples, we first use a set of labeling heuristics to label training samples automatically. We then denoise the weakly labeled data using the Shapley algorithm. Finally, we use the denoised data to train a user engagement detector. Our experiments show that HERALD improves annotation efficiency significantly and achieves 86% user disengagement detection accuracy in two dialog corpora.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
238,010
2106.06707
Graph Neural Networks with Local Graph Parameters
Various recent proposals increase the distinguishing power of Graph Neural Networks (GNNs) by propagating features between $k$-tuples of vertices. The distinguishing power of these "higher-order" GNNs is known to be bounded by the $k$-dimensional Weisfeiler-Leman (WL) test, yet their $\mathcal O(n^k)$ memory requirements limit their applicability. Other proposals infuse GNNs with local higher-order graph structural information from the start, thereby inheriting the desirable $\mathcal O(n)$ memory requirement from GNNs at the cost of a one-time, possibly non-linear, preprocessing step. We propose local graph parameter enabled GNNs as a framework for studying the latter kind of approaches and precisely characterize their distinguishing power, in terms of a variant of the WL test, and in terms of the graph structural properties that they can take into account. Local graph parameters can be added to any GNN architecture, and are cheap to compute. In terms of expressive power, our proposal lies between GNNs and their higher-order counterparts. Further, we propose several techniques to aid in choosing the right local graph parameters. Our results connect GNNs with deep results in finite model theory and finite variable logics. Our experimental evaluation shows that adding local graph parameters often has a positive effect across a variety of GNNs, datasets and graph learning tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
240,585
2311.09999
TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection
Surface anomaly detection is a vital component in manufacturing inspection. Current discriminative methods follow a two-stage architecture composed of a reconstructive network followed by a discriminative network that relies on the reconstruction output. Currently used reconstructive networks often produce poor reconstructions that either still contain anomalies or lack details in anomaly-free regions. Discriminative methods are robust to some reconstructive network failures, suggesting that the discriminative network learns a strong normal appearance signal that the reconstructive networks miss. We reformulate the two-stage architecture into a single-stage iterative process that allows the exchange of information between the reconstruction and localization. We propose a novel transparency-based diffusion process where the transparency of anomalous regions is progressively increased, restoring their normal appearance accurately while maintaining the appearance of anomaly-free regions using localization cues of previous steps. We implement the proposed process as TRANSparency DifFUSION (TransFusion), a novel discriminative anomaly detection method that achieves state-of-the-art performance on both the VisA and the MVTec AD datasets, with an image-level AUROC of 98.5% and 99.2%, respectively. Code: https://github.com/MaticFuc/ECCV_TransFusion
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
408,382
2305.05976
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in the text. What do LLMs know about negative knowledge? This work examines the ability of LLMs to utilize negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the belief conflict of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language modeling pre-training cause this conflict.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
363,361
1802.03594
Online Learning for Effort Reduction in Interactive Neural Machine Translation
Neural machine translation systems require large amounts of training data and resources. Even with this, the quality of the translations may be insufficient for some users or domains. In such cases, the output of the system must be revised by a human agent. This can be done in a post-editing stage or following an interactive machine translation protocol. We explore the incremental update of neural machine translation systems during the post-editing or interactive translation processes. Such modifications aim to incorporate the new knowledge, from the edited sentences, into the translation system. Updates to the model are performed on-the-fly, as sentences are corrected, via online learning techniques. In addition, we implement a novel interactive, adaptive system, able to react to single-character interactions. This system greatly reduces the human effort required for obtaining high-quality translations. In order to stress-test our proposals, we conduct exhaustive experiments varying the amount and type of data available for training. Results show that online learning effectively achieves the objective of reducing the human effort required during the post-editing or the interactive machine translation stages. Moreover, these adaptive systems also perform well in scenarios with scarce resources. We show that a neural machine translation system can be rapidly adapted to a specific domain, exclusively by means of online learning techniques.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
90,014
1202.0568
Acoustic Communication for Medical Nanorobots
Communication among microscopic robots (nanorobots) can coordinate their activities for biomedical tasks. The feasibility of in vivo ultrasonic communication is evaluated for micron-size robots broadcasting into various types of tissues. Frequencies between 10MHz and 300MHz give the best tradeoff between efficient acoustic generation and attenuation for communication over distances of about 100 microns. Based on these results, we find power available from ambient oxygen and glucose in the bloodstream can readily support communication rates of about 10,000 bits/second between micron-sized robots. We discuss techniques, such as directional acoustic beams, that can increase this rate. The acoustic pressure fields enabling this communication are unlikely to damage nearby tissue, and short bursts at considerably higher power could be of therapeutic use.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
14,103
2204.05132
A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices. However, sufficient training data is difficult to obtain and inherently limited. With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before; however, these methods require a large amount of data for training and thus require augmentation of the limited data available. Here, we use spiking neural networks (SNNs) as data generators. Touted as the next-generation neural network, the SNN is considered one of the algorithms oriented toward general artificial intelligence because it borrows its information processing from biological neurons. We use the SNN to generate neural spike information that is bio-interpretable and conforms to the intrinsic patterns in the original neural data. Experiments show that the model can directly synthesize new spike trains, which in turn improves the generalization ability of the BCI decoder. Both the input and output of the spiking neural model are spike information, making this a brain-inspired intelligence approach that can be better integrated with BCIs in the future.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
290,921
2303.14568
Measuring Classification Decision Certainty and Doubt
Quantitative characterizations and estimations of uncertainty are of fundamental importance in optimization and decision-making processes. Herein, we propose intuitive scores, which we call certainty and doubt, that can be used in both a Bayesian and frequentist framework to assess and compare the quality and uncertainty of predictions in (multi-)classification decision machine learning problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
354,150
2106.15930
The Performance Impact of Newton Iterations per Solver Call in Partitioned Fluid-Structure Interaction
The cost of a partitioned fluid-structure interaction scheme is typically assessed by the number of coupling iterations required per time step, while ignoring the Newton loops within the nonlinear sub-solvers. In this work, we discuss why these single-field iterations deserve more attention when evaluating the coupling's efficiency and how to find the optimal number of Newton steps per coupling iteration.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
243,917
2208.07247
Using Artificial Intelligence and IoT for Constructing a Smart Trash Bin
The research reported in this paper transforms a normal trash bin into a smarter one by applying computer vision technology. With the support of sensors and actuator devices, the trash bin can automatically classify garbage. In particular, a camera on the trash bin takes pictures of trash, then the central processing unit analyzes them and makes decisions regarding which bin to drop trash into. Our trash bin system achieves an accuracy of 90%. In addition, our model is connected to the Internet to update the bin status for further management. A mobile application is developed for managing the bin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
312,989
2305.18599
Improving Generalization for Multimodal Fake News Detection
The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for fake news detection. However, state-of-the-art approaches are usually trained on datasets of smaller size or with a limited set of specific topics. As a consequence, these models lack generalization capabilities and are not applicable to real-world data. In this paper, we propose three models that adopt and fine-tune state-of-the-art multimodal transformers for multimodal fake news detection. We conduct an in-depth analysis by manipulating the input data, aiming to explore model performance in realistic use cases on social media. Our study across multiple models demonstrates that these systems suffer significant performance drops against manipulated data. To reduce the bias and improve model generalization, we suggest training data augmentation to conduct more meaningful experiments for fake news detection on social media. The proposed data augmentation techniques enable models to generalize better and yield improved state-of-the-art results.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
true
369,153
2009.02286
Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack
Rounding the confidence score is considered a trivial yet simple and effective countermeasure for stopping gradient-descent-based image reconstruction attacks. However, its capability in the face of more sophisticated reconstruction attacks is an uninvestigated research area. In this paper, we prove that face reconstruction attacks based on composite faces can reveal the inefficiency of the rounding policy as a countermeasure. We assume that the attacker takes advantage of face composite parts, which help the attacker access the most important features of the face or decompose it into independent segments. Afterwards, the decomposed segments are exploited as search parameters to create a search path to reconstruct the optimal face. Face composition parts enable the attacker to violate the privacy of face recognition models even with a blind search. However, we assume that the attacker may take advantage of random search to reconstruct the target face faster. The algorithm starts with a random composition of face parts as the initial face, and the confidence score is considered the fitness value. Our experiments show that, since the rounding policy as a countermeasure cannot stop the random search process, current face recognition systems are extremely vulnerable to such sophisticated attacks. To address this problem, we successfully test Face Detection Score Filtering (FDSF) as a countermeasure to protect the privacy of training data against the proposed attack.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
194,510
2301.00126
Broad Learning System with Takagi-Sugeno Fuzzy Subsystem for Tobacco Origin Identification based on Near Infrared Spectroscopy
Tobacco origin identification is critically important in the tobacco industry. Modeling analysis for sensor data with near infrared spectroscopy has become a popular method for rapid detection of internal features. However, for sensor data analysis using traditional artificial neural network or deep network models, the training process is extremely time-consuming. In this paper, a novel broad learning system with a Takagi-Sugeno (TS) fuzzy subsystem is proposed for rapid identification of tobacco origin. Incremental learning is employed in the proposed method, which obtains the weight matrix of the network after a very small amount of computation, resulting in a much shorter training time for the model, with only about 3 seconds for the extra training step. The experimental results show that the TS fuzzy subsystem can extract features from the near infrared data and effectively improve the recognition performance. The proposed method achieves the highest prediction accuracy (95.59%) in comparison to traditional classification algorithms, the artificial neural network, and the deep convolutional neural network, and has a great advantage in training time, requiring only about 128 seconds.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
338,805
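The speed claim above rests on a standard property of broad learning systems: once the feature layer is formed, the output weight matrix comes from a single regularized least-squares solve rather than iterative backpropagation. Below is a minimal numpy sketch of that step, assuming a random tanh feature map as a stand-in for the TS-fuzzy and enhancement nodes; all dimensions are illustrative.

```python
# Sketch of broad-learning-style training: build a feature matrix A once,
# then obtain output weights W in closed form via ridge least squares.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, c = 500, 20, 200, 4          # samples, inputs, feature nodes, classes
X = rng.normal(size=(n, d))
labels = X[:, :c].argmax(axis=1)      # synthetic class labels
Y = np.eye(c)[labels]                 # one-hot targets

# Random feature expansion standing in for the fuzzy/enhancement nodes.
A = np.tanh(X @ rng.normal(size=(d, m)))

lam = 1e-2                            # ridge regularization strength
W = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ Y)  # one linear solve
pred = (A @ W).argmax(axis=1)
print("train accuracy:", (pred == labels).mean())
```

Because adding feature nodes only appends columns to A, the same solve can be updated incrementally, which is where the seconds-scale "extra step training" time comes from.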
2405.18414
Don't Forget to Connect! Improving RAG with Graph-based Reranking
Retrieval Augmented Generation (RAG) has greatly improved the performance of Large Language Model (LLM) responses by grounding generation with context from existing documents. These systems work well when documents are clearly relevant to a question context. But what about when a document has partial information, or less obvious connections to the context? And how should we reason about connections between documents? In this work, we seek to answer these two core questions about RAG generation. We introduce G-RAG, a reranker based on graph neural networks (GNNs) between the retriever and reader in RAG. Our method combines both connections between documents and semantic information (via Abstract Meaning Representation graphs) to provide a context-informed ranker for RAG. G-RAG outperforms state-of-the-art approaches while having a smaller computational footprint. Additionally, we assess the performance of PaLM 2 as a reranker and find it to significantly underperform G-RAG. This result emphasizes the importance of reranking for RAG even when using Large Language Models.
false
false
false
true
true
false
true
false
true
false
false
false
false
false
false
false
false
false
458,416
1808.01262
The Text-Based Adventure AI Competition
In 2016, 2017, and 2018 at the IEEE Conference on Computational Intelligence in Games, the authors of this paper ran a competition for agents that can play classic text-based adventure games. This competition fills a gap in existing game AI competitions that have typically focussed on traditional card/board games or modern video games with graphical interfaces. By providing a platform for evaluating agents in text-based adventures, the competition provides a novel benchmark for game AI with unique challenges for natural language understanding and generation. This paper summarises the three competitions run in 2016, 2017, and 2018 (including details of open source implementations of both the competition framework and our competitors) and presents the results of an improved evaluation of these competitors across 20 games.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
104,538
1607.05351
Towards Analytics Aware Ontology Based Access to Static and Streaming Data (Extended Version)
Real-time analytics that requires integration and aggregation of heterogeneous and distributed streaming and static data is a typical task in many industrial scenarios, such as diagnostics of turbines in Siemens. The OBDA approach has great potential to facilitate such tasks; however, it has a number of limitations in dealing with analytics that restrict its use in important industrial applications. Based on our experience with Siemens, we argue that in order to overcome those limitations OBDA should be extended and become analytics, source, and cost aware. In this work we propose such an extension. In particular, we propose an ontology, mapping, and query language for OBDA, where aggregate and other analytical functions are first-class citizens. Moreover, we develop query optimisation techniques that allow analytical tasks to be processed efficiently over static and streaming data. We implement our approach in a system and evaluate it with Siemens turbine data.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
58,740
2203.16274
Characterizing YouTube and BitChute Content and Mobilizers During U.S. Election Fraud Discussions on Twitter
In this study, we characterize the cross-platform mobilization of YouTube and BitChute videos on Twitter during the 2020 U.S. Election fraud discussions. Specifically, we extend the VoterFraud2020 dataset to describe the prevalence of content supplied by both platforms, the mobilizers of that content, the suppliers of that content, and the content itself. We find that while BitChute videos promoting election fraud claims were linked to and engaged with in the Twitter discussion, they played a relatively small role compared to YouTube videos promoting fraud claims. This core finding points to the continued need for proactive, consistent, and collaborative content moderation solutions rather than the reactive and inconsistent solutions currently being used. Additionally, we find that cross-platform disinformation spread from video platforms was not prominently from bot accounts or political elites, but rather average Twitter users. This finding supports past work arguing that research on disinformation should move beyond a focus on bots and trolls to a focus on participatory disinformation spread.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
288,725
1406.2375
Parsing Semantic Parts of Cars Using Graphical Models and Segment Appearance Consistency
This paper addresses the problem of semantic part parsing (segmentation) of cars, i.e., assigning every pixel within the car to one of the parts (e.g., body, window, lights, license plates and wheels). We formulate this as a landmark identification problem, where a set of landmarks specifies the boundaries of the parts. A novel mixture of graphical models is proposed, which dynamically couples the landmarks to a hierarchy of segments. When modeling pairwise relation between landmarks, this coupling enables our model to exploit the local image contents in addition to spatial deformation, an aspect that most existing graphical models ignore. In particular, our model enforces appearance consistency between segments within the same part. Parsing the car, including finding the optimal coupling between landmarks and segments in the hierarchy, is performed by dynamic programming. We evaluate our method on a subset of PASCAL VOC 2010 car images and on the car subset of 3D Object Category dataset (CAR3D). We show good results and, in particular, quantify the effectiveness of using the segment appearance consistency in terms of accuracy of part localization and segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
33,737
2402.08604
Sampling Space-Saving Set Sketches
Large, distributed data streams are now ubiquitous. High-accuracy sketches with low memory overhead have become the de facto method for analyzing this data. For instance, if we wish to group data by some label and report the largest counts using fixed memory, we need to turn to mergeable heavy hitter sketches that can provide highly accurate approximate counts. Similarly, if we wish to keep track of the number of distinct items in a single set spread across several streams using fixed memory, we can turn to mergeable count distinct sketches that can provide highly accurate set cardinalities. If we were to try to keep track of the cardinality of multiple sets and report only on the largest ones, maintaining individual count distinct sketches for each set can grow unwieldy, especially if the number of sets is not known in advance. We consider the natural combination of the heavy hitters problem with the count distinct problem, the heavy distinct hitters problem: given a stream of $(\ell, x)$ pairs, find all the labels $\ell$ that are paired with a large number of distinct items $x$ using only constant memory. No previous work on heavy distinct hitters has managed to be of practical use in the large, distributed data stream setting. We propose a new algorithm, the Sampling Space-Saving Set Sketch, which combines sketching and sampling techniques and has all the desired properties for size, speed, accuracy, mergeability, and invertibility. We compare our algorithm to several existing solutions to the heavy distinct hitters problem, and provide experimental results across several data sets showing the superiority of the new sketch.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
429,159
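As background for the record above, here is the classical Space-Saving heavy-hitters algorithm that the proposed sketch generalizes. The paper's Sampling Space-Saving Set Sketch additionally samples and tracks distinct items per label; this sketch covers only the classical counting core, so it is a conceptual reference, not the new data structure.

```python
# Background sketch: classical Space-Saving heavy hitters. Keeps at most k
# counters; when a new item arrives and the table is full, the current
# minimum counter is evicted and its (over)count is inherited.
def space_saving(stream, k):
    counts = {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < k:
            counts[item] = 1
        else:
            victim = min(counts, key=counts.get)   # evict current minimum
            counts[item] = counts.pop(victim) + 1  # inherit its count + 1
    return counts

stream = list("abracadabra" * 100) + list("xyz")
print(space_saving(stream, k=4))   # approximate top-4, counts overestimate
```

Each reported count overestimates the true frequency by at most the evicted minimum, which is what makes the summary accurate for genuinely heavy items under fixed memory.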
2112.06868
Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias
Variational Autoencoders (VAEs) are one of the most commonly used generative models, particularly for image data. A prominent difficulty in training VAEs is data that is supported on a lower-dimensional manifold. Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture that in standard VAE training the generator will converge to a solution with 0 variance which is correctly supported on the ground truth manifold. They gave partial support for that conjecture by showing that some optima of the VAE loss do satisfy this property, but did not analyze the training dynamics. In this paper, we show that for linear encoders/decoders the conjecture is true: VAE training does recover a generator with support equal to the ground truth manifold, and it does so due to an implicit bias of gradient descent rather than merely the VAE loss itself. In the nonlinear case, we show that VAE training frequently learns a higher-dimensional manifold which is a superset of the ground truth manifold.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
271,316
2410.14181
Exploring the Role of Network Centrality in Player Selection: A Case Study of Pakistan Super League
Cricket, a popular bat-and-ball game in South Asia, is played between two 11-player teams. The Pakistan Super League (PSL) is a commercial T20 domestic league comprised of six franchise-owned teams, where player selection is competitive. In this study, an existing role-based ranking structure is assessed that evaluates player performance in the context of team belongingness to generate optimal Pakistan cricket teams for international tournaments. The underlying assumption is that since cricket is fundamentally a team sport, the performance of players compared to their peers plays a crucial role in their selection. To accomplish this, a network is generated using ball-by-ball data from previous PSL matches (2016-2022), and social network analysis (SNA) techniques such as centrality and clustering coefficient measures, are employed to quantify the level of belongingness among Pakistani cricket players within the PSL network. Characteristic network models, such as the Erd\"os-R\'enyi, Watts-Strogatz, and Barab\'asi-Albert models are utilized to gain insights into the small-world properties of the network. By ranking players using centrality and clustering coefficient metrics, four teams are formulated, and these teams are subsequently compared to the official squad selected by the Pakistan Cricket Board (PCB) for the recent ICC Men's T20 World Cup in 2022. This evaluation sheds light on the allegations of nepotism and favoritism in team formations that have been attributed to the PCB over the years. Based on our findings, out of the 18 players in the World Cup squad, 11 were included in the teams we formed. While most of the 7 players who were not included in our teams were still selected for the ICC Men's T20 World Cup 2022, they ranked highly in our rankings, suggesting their potential and competence.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
499,916
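A minimal sketch of the SNA ranking step described above, using networkx: build a player-interaction graph and rank players by a combination of degree centrality and clustering coefficient. The toy edge list and the unweighted sum of the two scores are illustrative assumptions, not the paper's actual ball-by-ball network or weighting.

```python
# Sketch: rank players in an interaction network by centrality and
# clustering coefficient. Edges here are an illustrative toy graph.
import networkx as nx

edges = [("Babar", "Rizwan"), ("Babar", "Shaheen"), ("Rizwan", "Shaheen"),
         ("Shaheen", "Haris"), ("Haris", "Rizwan"), ("Babar", "Haris"),
         ("Shadab", "Haris")]
G = nx.Graph(edges)

centrality = nx.degree_centrality(G)   # normalized degree per player
clustering = nx.clustering(G)          # local clustering coefficient

# Combine the two scores with equal weight (an illustrative choice).
ranking = sorted(G.nodes, key=lambda p: centrality[p] + clustering[p],
                 reverse=True)
print(ranking)
```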
1910.02136
Risks of Using Non-verified Open Data: A case study on using Machine Learning techniques for predicting Pregnancy Outcomes in India
Artificial intelligence (AI) has evolved considerably in the last few years. While applications of AI are now becoming more common in fields like retail and marketing, the application of AI to solving problems related to developing countries is still an emerging topic. In particular, AI applications in resource-poor settings remain relatively nascent. There is huge scope for AI to be used in such settings. For example, researchers have started exploring AI applications to reduce poverty and deliver a broad range of critical public services. However, despite many promising use cases, there are many dataset-related challenges that one has to overcome in such projects. These challenges often take the form of missing data, incorrectly collected data and improperly labeled variables, among other factors. As a result, we can often end up using data that is not representative of the problem we are trying to solve. In this case study, we explore the challenges of using such an open dataset from India to predict an important health outcome. We highlight how the use of AI without proper understanding of reporting metrics can lead to erroneous conclusions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
148,147
2305.01457
Memory of recurrent networks: Do we compute it right?
Numerical evaluations of the memory capacity (MC) of recurrent neural networks reported in the literature often contradict well-established theoretical bounds. In this paper, we study the case of linear echo state networks, for which the total memory capacity has been proven to be equal to the rank of the corresponding Kalman controllability matrix. We shed light on various reasons for the inaccurate numerical estimations of the memory, and we show that these issues, often overlooked in the recent literature, are of an exclusively numerical nature. More explicitly, we prove that when the Krylov structure of the linear MC is ignored, a gap between the theoretical MC and its empirical counterpart is introduced. As a solution, we develop robust numerical approaches by exploiting a result of MC neutrality with respect to the input mask matrix. Simulations show that the memory curves that are recovered using the proposed methods fully agree with the theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
361,678
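The theoretical quantity referenced above is straightforward to compute directly: for a linear echo state network x_{t+1} = A x_t + C u_t, the total memory capacity equals the rank of the Kalman controllability (Krylov) matrix [C, AC, ..., A^{N-1}C]. A short numpy sketch with an illustrative random reservoir:

```python
# Sketch: theoretical memory capacity of a linear echo state network as the
# rank of its Kalman controllability matrix. Reservoir is illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                        # reservoir size
A = rng.normal(size=(N, N))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max() # enforce echo-state condition
C = rng.normal(size=(N, 1))                   # input mask

K = np.hstack([np.linalg.matrix_power(A, j) @ C for j in range(N)])
print("theoretical memory capacity:", np.linalg.matrix_rank(K))
```

The paper's point is that naive empirical estimates of this quantity disagree with the rank above for purely numerical reasons, because the Krylov structure of K is ill-conditioned.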
2312.09969
Nearest Neighbor Sampling for Covariate Shift Adaptation
Many existing covariate shift adaptation methods estimate sample weights given to loss values to mitigate the gap between the source and the target distribution. However, estimating the optimal weights typically involves computationally expensive matrix inversion and hyper-parameter tuning. In this paper, we propose a new covariate shift adaptation method which avoids estimating the weights. The basic idea is to directly work on unlabeled target data, labeled according to the $k$-nearest neighbors in the source dataset. Our analysis reveals that setting $k = 1$ is an optimal choice. This property removes the necessity of tuning the only hyper-parameter $k$ and leads to a running time quasi-linear in the sample size. Our results include sharp rates of convergence for our estimator, with a tight control of the mean square error and explicit constants. In particular, the variance of our estimators has the same rate of convergence as for standard parametric estimation despite their non-parametric nature. The proposed estimator shares similarities with some matching-based treatment effect estimators used, e.g., in biostatistics, econometrics, and epidemiology. Our experiments show that it achieves drastic reduction in the running time with remarkable accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,951
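The adaptation step proposed above is simple to sketch: transfer to each unlabeled target point the label of its single nearest neighbor (k = 1) in the labeled source set, with no weight estimation at all. A minimal Python illustration on synthetic shifted data:

```python
# Sketch: 1-nearest-neighbor labeling of target data from a labeled source
# set, the weight-free adaptation step described above. Data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 5))              # labeled source features
ys = (Xs[:, 0] > 0).astype(int)             # source labels
Xt = rng.normal(loc=0.5, size=(200, 5))     # covariate-shifted target

nn = NearestNeighbors(n_neighbors=1).fit(Xs)
_, idx = nn.kneighbors(Xt)
yt_hat = ys[idx[:, 0]]                      # transferred pseudo-labels
print("pseudo-label positive rate:", yt_hat.mean())
```

With k fixed to 1, there is no hyper-parameter left to tune and the lookup runs in time quasi-linear in the sample size, which is the practical appeal the abstract highlights.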
1806.01209
Stability Analysis for Fast Settling Switched DPLL
In current generation digital phase locked loop (DPLL) architectures, techniques like adaptive loop bandwidth with loop order switching and switched phase-detection are employed to achieve better lock time and jitter performance. This work derives stability conditions for such DPLL architectures using Multiple Lyapunov Functions (MLFs) for switched systems. The loop-parameters chosen on the basis of these stability conditions ensure that the chattering phenomenon does not occur during switching between different subsystems. A 5GHz fractional-N DPLL designed with these loop-parameter values is fabricated in CMOS65nm-LL technology. The measured settling time of the implemented DPLL is within 1 us. The efficiency of the switching rule and stability conditions used for this DPLL is validated by the fast settling response, which is the best lock time reported until now for fractional-N DPLLs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
99,502
1907.12952
Pyramid: Machine Learning Framework to Estimate the Optimal Timing and Resource Usage of a High-Level Synthesis Design
The emergence of High-Level Synthesis (HLS) tools shifted the paradigm of hardware design by making the process of mapping high-level programming languages to hardware design such as C to VHDL/Verilog feasible. HLS tools offer a plethora of techniques to optimize designs for both area and performance, but resource usage and timing reports of HLS tools mostly deviate from the post-implementation results. In addition, to evaluate a hardware design performance, it is critical to determine the maximum achievable clock frequency. Obtaining such information using static timing analysis provided by CAD tools is difficult, due to the multitude of tool options. Moreover, a binary search to find the maximum frequency is tedious, time-consuming, and often does not obtain the optimal result. To address these challenges, we propose a framework, called Pyramid, that uses machine learning to accurately estimate the optimal performance and resource utilization of an HLS design. For this purpose, we first create a database of C-to-FPGA results from a diverse set of benchmarks. To find the achievable maximum clock frequency, we use Minerva, which is an automated hardware optimization tool. Minerva determines the close-to-optimal settings of tools, using static timing analysis and a heuristic algorithm, and targets either optimal throughput or throughput-to-area. Pyramid uses the database to train an ensemble machine learning model to map the HLS-reported features to the results of Minerva. To this end, Pyramid re-calibrates the results of HLS to bridge the accuracy gap and enable developers to estimate the throughput or throughput-to-area of hardware design with more than 95% accuracy and alleviates the need to perform actual implementation for estimation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
140,259
2304.06783
A Distributionally Robust Approach to Regret Optimal Control using the Wasserstein Distance
This paper proposes a distributionally robust approach to regret optimal control of discrete-time linear dynamical systems with quadratic costs subject to a stochastic additive disturbance on the state process. The underlying probability distribution of the disturbance process is unknown, but assumed to lie in a given ball of distributions defined in terms of the type-2 Wasserstein distance. In this framework, strictly causal linear disturbance feedback controllers are designed to minimize the worst-case expected regret. The regret incurred by a controller is defined as the difference between the cost it incurs in response to a realization of the disturbance process and the cost incurred by the optimal noncausal controller which has perfect knowledge of the disturbance process realization at the outset. Building on a well-established duality theory for optimal transport problems, we derive a reformulation of the minimax regret optimal control problem as a tractable semidefinite program. Using the equivalent dual reformulation, we characterize a worst-case distribution achieving the worst-case expected regret in relation to the distribution at the center of the Wasserstein ball. We compare the minimax regret optimal control design method with the distributionally robust optimal control approach using an illustrative example and numerical experiments.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
358,098
2403.05573
Beyond Predictive Algorithms in Child Welfare
Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities and that the algorithms may benefit from incorporating contextually rich case narratives, i.e. - casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives and applied computational text analysis on casenotes to highlight topics uncovered in the casenotes. Our study finds that common risk metrics used to assess families and build CWS predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information such as narratives to study public sociotechnical systems.
true
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
436,065
2010.03288
Double Targeted Universal Adversarial Perturbations
Despite their impressive performance, deep neural networks (DNNs) are widely known to be vulnerable to adversarial attacks, which makes it challenging for them to be deployed in security-sensitive applications, such as autonomous driving. Image-dependent perturbations can fool a network for one specific image, while universal adversarial perturbations are capable of fooling a network for samples from all classes without selection. We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations. This universal perturbation attacks one targeted source class, driving it to a sink class, while having a limited adversarial effect on other non-targeted source classes so as to avoid raising suspicion. Targeting the source and sink class simultaneously, we term it a double targeted attack (DTA). This provides an attacker with the freedom to perform precise attacks on a DNN model while raising little suspicion. We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
199,357
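A hedged sketch of the general recipe behind a double targeted UAP: optimize one bounded perturbation that pushes source-class inputs toward the sink class while penalizing label changes on non-targeted inputs. The model, the data-loader names, the equal loss weighting, and the Adam-based optimization below are all illustrative assumptions, not the paper's exact DTA algorithm.

```python
# Sketch: learn a single shared perturbation (UAP) that sends source-class
# images to a chosen sink class while leaving other classes mostly intact.
# `model`, `src_loader`, and `other_loader` are assumed/hypothetical inputs:
# loaders yield (images, labels) batches of shape (B, 3, 32, 32).
import torch
import torch.nn.functional as F

def train_dt_uap(model, src_loader, other_loader, sink_class,
                 eps=10 / 255, steps=500, lr=0.01):
    delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # shared UAP
    opt = torch.optim.Adam([delta], lr=lr)
    for step, ((xs, _), (xo, yo)) in enumerate(zip(src_loader, other_loader)):
        if step >= steps:
            break
        sink = torch.full((xs.size(0),), sink_class, dtype=torch.long)
        loss = (F.cross_entropy(model(xs + delta), sink)    # attack term
                + F.cross_entropy(model(xo + delta), yo))   # stealth term
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation small
    return delta.detach()
```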
1011.0208
Network Diversity and Economic Development: a Comment
Network diversity yields context-dependent benefits that are not yet fully understood. I elaborate on a recently introduced distinction between tie strength diversity and information source diversity, and explain when, how, and why they matter. The key issue is whether there are benefits to specialization.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
8,084
2205.10268
B-cos Networks: Alignment is All We Need for Interpretability
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training. For this, we propose to replace the linear transforms in DNNs by our B-cos transform. As we show, a sequence (network) of such transforms induces a single linear transform that faithfully summarises the full model computations. Moreover, the B-cos transform introduces alignment pressure on the weights during optimisation. As a result, those induced linear transforms become highly interpretable and align with task-relevant features. Importantly, the B-cos transform is designed to be compatible with existing architectures and we show that it can easily be integrated into common models such as VGGs, ResNets, InceptionNets, and DenseNets, whilst maintaining similar performance on ImageNet. The resulting explanations are of high visual quality and perform well under quantitative metrics for interpretability. Code available at https://www.github.com/moboehle/B-cos.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
297,635
2301.12030
TiLT: A Time-Centric Approach for Stream Query Optimization and Parallelization
Stream processing engines (SPEs) are widely used for large scale streaming analytics over unbounded time-ordered data streams. Modern day streaming analytics applications exhibit diverse compute characteristics and demand strict latency and throughput requirements. Over the years, there has been significant attention in building hardware-efficient stream processing engines (SPEs) that support several query optimization, parallelization, and execution strategies to meet the performance requirements of large scale streaming analytics applications. However, in this work, we observe that these strategies often fail to generalize well on many real-world streaming analytics applications due to several inherent design limitations of current SPEs. We further argue that these limitations stem from the shortcomings of the fundamental design choices and the query representation model followed in modern SPEs. To address these challenges, we first propose TiLT, a novel intermediate representation (IR) that offers a highly expressive temporal query language amenable to effective query optimization and parallelization strategies. We subsequently build a compiler backend for TiLT that applies such optimizations on streaming queries and generates hardware-efficient code to achieve high performance on multi-core stream query executions. We demonstrate that TiLT achieves up to 326x (20.49x on average) higher throughput compared to state-of-the-art SPEs (e.g., Trill) across eight real-world streaming analytics applications. TiLT source code is available at https://github.com/ampersand-projects/tilt.git.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
342,357
1802.04434
signSGD: Compressed Optimisation for Non-Convex Problems
Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. signSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative $\ell_1/\ell_2$ geometry of gradients, noise and curvature informs whether signSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of signSGD is able to match the accuracy and convergence speed of Adam on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD .
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
90,220
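The distributed scheme described above is compact enough to sketch end to end: each worker transmits one bit per coordinate (the sign of its stochastic gradient), and the server broadcasts back the sign of the majority vote. The quadratic objective below is an illustrative stand-in for a neural-network loss.

```python
# Sketch: distributed signSGD with majority vote on a toy quadratic
# objective f(x) = ||x - x*||^2 / 2 with additive gradient noise.
import numpy as np

rng = np.random.default_rng(0)
d, workers, lr = 50, 7, 0.05
x_star = rng.normal(size=d)        # the minimizer
x = np.zeros(d)

for _ in range(200):
    # Workers: noisy gradient, transmit only its sign (1 bit per coordinate).
    signs = np.stack([np.sign((x - x_star) + rng.normal(scale=0.5, size=d))
                      for _ in range(workers)])
    vote = np.sign(signs.sum(axis=0))   # server: majority vote, 1 bit back
    x -= lr * vote

print("distance to optimum:", np.linalg.norm(x - x_star))
```

With an odd number of workers the vote never ties, and the iterate converges to a neighborhood of the optimum whose radius is set by the learning rate, consistent with sign-based methods' behavior on well-conditioned problems.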
1703.02072
Performance Analysis for Time-of-Arrival Estimation with Oversampled Low-Complexity 1-bit A/D Conversion
Analog-to-digital (A/D) conversion plays a crucial role when it comes to the design of energy-efficient and fast signal processing systems. As its complexity grows exponentially with the number of output bits, significant savings are possible when resorting to a minimum resolution of a single bit. However, then the nonlinear effect which is introduced by the A/D converter results in a pronounced performance loss, in particular for the case when the receiver is operated outside the low signal-to-noise ratio (SNR) regime. By trading the A/D resolution for a moderately faster sampling rate, we show that for time-of-arrival (TOA) estimation under any SNR level it is possible to obtain a low-complexity $1$-bit receive system which features a smaller performance degradation than the classical low SNR hard-limiting loss of $2/\pi$ ($-1.96$ dB). Key to this result is the employment of a lower bound for the Fisher information matrix which enables us to approximate the estimation performance for coarsely quantized receivers with correlated noise models in a pessimistic way.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
69,485
2111.00884
Enhanced Language Representation with Label Knowledge for Span Extraction
Span extraction, aiming to extract text spans (such as words or phrases) from plain texts, is a fundamental process in Information Extraction. Recent works introduce the label knowledge to enhance the text representation by formalizing the span extraction task into a question answering problem (QA Formalization), which achieves state-of-the-art performance. However, QA Formalization does not fully exploit the label knowledge and suffers from low efficiency in training/inference. To address those problems, we introduce a new paradigm to integrate label knowledge and further propose a novel model to explicitly and efficiently integrate label knowledge into text representations. Specifically, it encodes texts and label annotations independently and then integrates label knowledge into text representation with an elaborately designed semantics fusion module. We conduct extensive experiments on three typical span extraction tasks: flat NER, nested NER, and event detection. The empirical results show that 1) our method achieves state-of-the-art performance on four benchmarks, and 2) reduces training time and inference time by 76% and 77% on average, respectively, compared with the QA Formalization paradigm. Our code and data are available at https://github.com/Akeepers/LEAR.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
264,375
2101.06802
Measure-conditional Discriminator with Stationary Optimum for GANs and Statistical Distance Surrogates
We propose a simple but effective modification of the discriminators, namely measure-conditional discriminators, as a plug-and-play module for different GANs. By taking the generated distributions as part of input so that the target optimum for the discriminator is stationary, the proposed discriminator is more robust than the vanilla one. A variant of the measure-conditional discriminator can also handle multiple target distributions, or act as a surrogate model of statistical distances such as KL divergence with applications to transfer learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
215,838
2206.14350
Convolutional Neural Network Based Partial Face Detection
Due to the massive expansion of artificial intelligence, machine learning technology is being used in various areas of our day-to-day life. In the world, there are a lot of scenarios where a simple crime can be prevented before it even happens, or the person responsible for it can be found. A face is one distinctive feature that we have and that can be differentiated easily among many other species. And not just between different species: it also plays a significant role in distinguishing someone from the same species as us, humans. Regarding this critical feature, a single problem occurs most often nowadays: when the camera is pointed, it cannot detect a person's face, and the result is a poor image. On the other hand, where there was a robbery and a security camera was installed, the robber's identity is almost indistinguishable due to the low-quality camera. Yet an excellent algorithm that correctly detects a face reduces the demands on hardware, and focusing on that area costs comparatively little. Facial recognition, widget control, and the like can be done by detecting the face correctly. This study aims to create and enhance a machine learning model that correctly recognizes faces. A total of 627 data samples have been collected from different Bangladeshi people's faces at four angles. In this work, five machine learning approaches (CNN, Haar Cascade, Cascaded CNN, Deep CNN, and MTCNN) are implemented to obtain the best accuracy on our dataset. After creating and running the models, the Multi-Task Convolutional Neural Network (MTCNN) achieved the best model accuracy of 96.2% on the training data, outperforming the other machine learning models.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
305,255
1611.00791
Predicting Domain Generation Algorithms with Long Short-Term Memory Networks
Various families of malware use domain generation algorithms (DGAs) to generate a large number of pseudo-random domain names to connect to a command and control (C&C) server. In order to block DGA C&C traffic, security organizations must first discover the algorithm by reverse engineering malware samples, then generating a list of domains for a given seed. The domains are then either preregistered or published in a DNS blacklist. This process is not only tedious, but can be readily circumvented by malware authors using a large number of seeds in algorithms with multivariate recurrence properties (e.g., banjori) or by using a dynamic list of seeds (e.g., bedep). Another technique to stop malware from using DGAs is to intercept DNS queries on a network and predict whether domains are DGA generated. Such a technique will alert network administrators to the presence of malware on their networks. In addition, if the predictor can also accurately predict the family of DGAs, then network administrators can also be alerted to the type of malware that is on their networks. This paper presents a DGA classifier that leverages long short-term memory (LSTM) networks to predict DGAs and their respective families without the need for a priori feature extraction. Results are significantly better than state-of-the-art techniques, providing 0.9993 area under the receiver operating characteristic curve for binary classification and a micro-averaged F1 score of 0.9906. In other terms, the LSTM technique can provide a 90% detection rate with a 1:10000 false positive (FP) rate---a twenty times FP improvement over comparable methods. Experiments in this paper are run on open datasets and code snippets are provided to reproduce the results.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
63,278
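A minimal PyTorch sketch in the spirit of the classifier described above: embed raw domain characters, run an LSTM, and score DGA vs. benign from the final hidden state, with no hand-crafted features. The vocabulary, layer sizes, and example domains are illustrative, and the model is shown untrained.

```python
# Sketch: character-level LSTM domain classifier (DGA vs. benign).
import torch
import torch.nn as nn

class DGAClassifier(nn.Module):
    def __init__(self, vocab_size=40, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, max_len) char ids
        _, (h, _) = self.lstm(self.embed(x))
        return self.head(h[-1]).squeeze(-1)    # logit: > 0 means DGA

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-._"
def encode(domain, max_len=40):
    # Unknown characters map to 0, which doubles as the padding id here.
    ids = [CHARS.find(c) + 1 for c in domain.lower()[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

model = DGAClassifier()
batch = torch.stack([encode("google.com"), encode("xkq7vwm3zply.net")])
print(torch.sigmoid(model(batch)))   # untrained scores, for shape checking
```

Trained with binary cross-entropy on labeled domains, the same architecture extends to multi-class output for predicting the DGA family, as the abstract describes.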
2206.06256
On the impact of dataset size and class imbalance in evaluating machine-learning-based windows malware detection techniques
The purpose of this project was to collect and analyse data about the comparability and real-life applicability of published results focusing on Microsoft Windows malware, more specifically the impact of dataset size and testing dataset imbalance on measured detector performance. Some researchers use smaller datasets, and if dataset size has a significant impact on performance, that makes comparison of the published results difficult. Researchers also tend to use balanced datasets and accuracy as a metric for testing. The former is not a true representation of reality, where benign samples significantly outnumber malware, and the latter approach is known to be problematic for imbalanced problems. The project identified two key objectives: to understand if dataset size correlates with measured detector performance to an extent that prevents meaningful comparison of published results, and to understand if good performance reported in published research can be expected to carry over to a real-world deployment scenario. The research's results suggested that dataset size does correlate with measured detector performance to an extent that prevents meaningful comparison of published results, and that, without understanding the nature of the training-set-size versus accuracy curve for published results, conclusions about which approach is "better" should not be drawn solely on the basis of accuracy scores. Results also suggested that high accuracy scores don't necessarily translate to high real-world performance.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
302,310
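The evaluation concern above can be made concrete in a few lines: the same detector, trained on a balanced set, looks very different when scored with accuracy on a balanced test set versus precision/recall on a realistic, malware-scarce one. The synthetic data and feature distributions below are illustrative assumptions, not any real malware corpus.

```python
# Sketch: how test-set imbalance changes the story told by accuracy
# versus precision/recall for the same trained detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
def sample(n_benign, n_malware):
    Xb = rng.normal(0.0, 1.0, size=(n_benign, 10))   # benign features
    Xm = rng.normal(0.8, 1.0, size=(n_malware, 10))  # malware features
    X = np.vstack([Xb, Xm])
    y = np.r_[np.zeros(n_benign), np.ones(n_malware)]
    return X, y

Xtr, ytr = sample(5000, 5000)                        # balanced training set
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

for name, (nb, nm) in [("balanced", (5000, 5000)), ("realistic", (9900, 100))]:
    Xte, yte = sample(nb, nm)
    p = clf.predict(Xte)
    print(name, "accuracy=%.3f precision=%.3f recall=%.3f" % (
        accuracy_score(yte, p), precision_score(yte, p), recall_score(yte, p)))
```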
2210.09138
An Open-source Benchmark of Deep Learning Models for Audio-visual Apparent and Self-reported Personality Recognition
Personality determines a wide variety of human daily and working behaviours, and is crucial for understanding human internal and external states. In recent years, a large number of automatic personality computing approaches have been developed to predict either the apparent personality or self-reported personality of the subject based on non-verbal audio-visual behaviours. However, the majority of them suffer from complex and dataset-specific pre-processing steps and model training tricks. In the absence of a standardized benchmark with consistent experimental settings, it is not only impossible to fairly compare the real performance of these personality computing models, but they are also difficult to reproduce. In this paper, we present the first reproducible audio-visual benchmarking framework to provide a fair and consistent evaluation of eight existing personality computing models (e.g., audio, visual and audio-visual) and seven standard deep learning models on both self-reported and apparent personality recognition tasks. Building upon a set of benchmarked models, we also investigate the impact of two previously used long-term modelling strategies for summarising short-term/frame-level predictions on personality computing results. The results show that: (i) apparent personality traits, inferred from facial behaviours by most benchmarked deep learning models, show more reliability than self-reported ones; (ii) visual models frequently achieved superior performance to audio models on personality recognition; (iii) non-verbal behaviours contribute differently to predicting different personality traits; and (iv) our reproduced personality computing models generally achieved worse performance than their originally reported results. Our benchmark is publicly available at \url{https://github.com/liaorongfan/DeepPersonality}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,431
2303.10812
Coordination Control of Free-Flyer Manipulators
This paper presents a method for guiding a robot manipulator to capture and bring a tumbling satellite to a state of rest. The proposed approach includes developing a coordination control for the combined system of the space robot and the target satellite, where the satellite acts as the manipulator payload. This control ensures that the robot tracks the optimal path while regulating the attitude of the chase vehicle to a desired value. Two optimal trajectories are then designed for the pre- and post-capture phases. In the pre-capturing phase, the manipulator manoeuvres are optimized by minimizing a cost function that includes the time of travel and the weighted norms of the end-effector velocity and acceleration, subject to the constraint that the robot end-effector and a grapple fixture on the satellite arrive at the rendezvous point with the same velocity. In the post-grasping phase, the manipulator dumps the initial velocity of the tumbling satellite in minimum time while ensuring that the magnitude of the torque applied to the satellite remains below a safe value. Overall, this method offers a promising solution for effectively capturing and bringing tumbling satellites to a state of rest.
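One plausible formalisation of the pre-capture objective described above is given below; the symbols ($x_e$ for the end-effector, $x_g$ for the grapple fixture, weights $w_v$, $w_a$) are illustrative assumptions, not notation taken from the paper.

```latex
% Travel time plus weighted norms of end-effector velocity/acceleration
% (a sketch; symbols are assumptions, not the paper's):
\[
  J = \int_{0}^{t_f} \Big( 1 + w_v \,\lVert \dot{x}_e(t) \rVert^2
                             + w_a \,\lVert \ddot{x}_e(t) \rVert^2 \Big)\, dt ,
\]
% subject to the end-effector meeting the grapple fixture at the
% rendezvous time with matching position and velocity:
\[
  x_e(t_f) = x_g(t_f), \qquad \dot{x}_e(t_f) = \dot{x}_g(t_f).
\]
```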
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
352,590
1603.04467
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.
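For readers unfamiliar with the library, here is a minimal illustration of expressing and running a computation with TensorFlow. Note the 2016 paper describes the original graph-based interface; the sketch below uses the modern Keras API, but the portability claim (the same code runs on CPU, GPU or TPU backends) is the same.

```python
import tensorflow as tf

# Define a small feed-forward classifier with the Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random stand-in data; the same code runs unchanged on
# heterogeneous devices.
x = tf.random.uniform((32, 784))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)
```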
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
53,244
2401.13078
Open-Source, Cost-Aware Kinematically Feasible Planning for Mobile and Surface Robotics
This paper introduces the Smac Planner, an openly available search-based planning framework with multiple algorithm implementations including 2D-A*, Hybrid-A*, and State Lattice planners. This work is motivated by the lack of performant and available feasible planners for mobile and surface robotics research. This paper contains three main contributions. First, it briefly describes a minimal open-source software framework where search-based planners may be easily added. Further, this paper characterizes new variations on the feasible planners - dubbed Cost-Aware - specific to mobile roboticists' needs. This fills the gap of missing kinematically feasible implementations suitable for academic, extension, and deployed use. Finally, we provide baseline benchmarking against other standard planning frameworks. Smac Planner has further significance by becoming the standard open-source planning system within ROS 2's Nav2 framework which powers thousands of robots in research and industry.
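To ground the 2D-A* variant the framework mentions, here is a simplified grid A* sketch. This is an illustration only, not the Smac Planner implementation; it ignores the Cost-Aware and kinematic-feasibility extensions.

```python
import heapq

def astar(grid, start, goal):
    """Simplified 2D-A*: grid[r][c] == 1 means blocked; 4-connected moves."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```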
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
423,599
2502.09238
OpenBench: A New Benchmark and Baseline for Semantic Navigation in Smart Logistics
The increasing demand for efficient last-mile delivery in smart logistics underscores the role of autonomous robots in enhancing operational efficiency and reducing costs. Traditional navigation methods, which depend on high-precision maps, are resource-intensive, while learning-based approaches often struggle with generalization in real-world scenarios. To address these challenges, this work proposes the Openstreetmap-enhanced oPen-air sEmantic Navigation (OPEN) system that combines foundation models with classic algorithms for scalable outdoor navigation. The system uses off-the-shelf OpenStreetMap (OSM) for flexible map representation, thereby eliminating the need for extensive pre-mapping efforts. It also employs Large Language Models (LLMs) to comprehend delivery instructions and Vision-Language Models (VLMs) for global localization, map updates, and house number recognition. To compensate for the limitations of existing benchmarks, which are inadequate for assessing last-mile delivery, this work introduces a new benchmark specifically designed for outdoor navigation in residential areas, reflecting the real-world challenges faced by autonomous delivery systems. Extensive experiments in simulated and real-world environments demonstrate the proposed system's efficacy in enhancing navigation efficiency and reliability. To facilitate further research, our code and benchmark are publicly available.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
533,372
2002.12486
Distributionally Robust Chance Constrained Programming with Generative Adversarial Networks (GANs)
This paper presents a novel deep learning based data-driven optimization method. A generative adversarial network (GAN) based data-driven distributionally robust chance constrained programming framework is proposed. GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way, without a priori approximation or assumption. Since GAN utilizes deep neural networks, complicated data distributions and modes can be learned, and it can model uncertainty efficiently and accurately. Distributionally robust chance constrained programming takes into consideration ambiguous probability distributions of uncertain parameters. To tackle the computational challenges, the sample average approximation method is adopted, and the required data samples are generated by GAN in an end-to-end way through the differentiable networks. The proposed framework is then applied to supply chain optimization under demand uncertainty. The applicability of the proposed approach is illustrated through a county-level case study of a spatially explicit biofuel supply chain in Illinois.
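A minimal sketch of the sample-average-approximation (SAA) step for a chance constraint follows. Everything here is illustrative: the "generator" is a stand-in lognormal sampler rather than a trained GAN, and the capacity/epsilon values are invented.

```python
import numpy as np

# SAA of a chance constraint P[demand <= capacity] >= 1 - eps, with demand
# samples drawn from a (stand-in) generative model.
rng = np.random.default_rng(0)

def sample_demand(n):                    # stand-in for GAN generator output
    return rng.lognormal(mean=3.0, sigma=0.4, size=n)

def chance_constraint_satisfied(capacity, eps=0.05, n_samples=10_000):
    demand = sample_demand(n_samples)
    violation_rate = np.mean(demand > capacity)   # empirical P[violation]
    return violation_rate <= eps

for cap in (20, 30, 40):
    print(cap, chance_constraint_satisfied(cap))
```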
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
166,048
2408.14366
MIMO Precoding for Rydberg Atomic Receivers
Leveraging the strong atom-light interaction, a Rydberg atomic receiver can measure radio waves with extreme sensitivity. Existing research primarily focuses on improving the signal detection capability of atomic receivers, while traditional signal processing schemes at the transmitter side have remained unchanged. As a result, these schemes fail to maximize the throughput of atomic receivers, given that the coupling between atomic dipole moment and radio-wave magnitude results in a nonlinear transmission model in contrast to the traditional linear one. To address this issue, our work proposes to design customized precoding techniques for atomic multiple-input-multiple-output (MIMO) systems to achieve the channel capacity. A strong-reference approximation is initially proposed to linearize the nonlinear transition model of atomic receivers. This facilitates the derivation of atomic-MIMO channel capacity as $\min(N_r/2, N_t)\log({\rm SNR})$ at high signal-to-noise ratios (SNRs) for $N_r$ receive atomic antennas and $N_t$ classic transmit antennas. Then, a new digital precoding technique, termed In-phase-and-Quadrature (IQ)-aware precoding, is presented, which features independent processing of I/Q data streams using four real-valued matrices. The design is shown to be capacity-achieving for the atomic MIMO system. In addition, for the case of large-scale MIMO systems, we extend the preceding fully-digital precoding design to the popular hybrid precoding architecture, which cascades a classical analog precoder with a low-dimensional version of the proposed IQ-aware digital precoder. By alternatively optimizing the digital and analog parts, the hybrid design is able to approach the performance of the optimal IQ-aware fully digital precoding. Simulation results validate the superiority of proposed IQ-aware precoding methods over existing techniques in the context of atomic MIMO communication.
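A tiny worked example of the reported high-SNR capacity scaling follows; the formula is taken from the abstract, while the antenna counts and SNR are illustrative choices.

```python
import math

# High-SNR capacity scaling from the abstract: C ~ min(N_r/2, N_t) * log(SNR).
def atomic_mimo_dof(n_r, n_t):
    return min(n_r / 2, n_t)

snr = 10 ** (30 / 10)                       # 30 dB
for n_r, n_t in [(8, 2), (8, 4), (8, 8)]:
    c = atomic_mimo_dof(n_r, n_t) * math.log2(snr)
    print(f"N_r={n_r}, N_t={n_t}: ~{c:.1f} bit/s/Hz")
```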
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
483,508
2410.18491
ChineseSafe: A Chinese Benchmark for Evaluating Safety in Large Language Models
With the rapid development of large language models (LLMs), understanding the capabilities of LLMs in identifying unsafe content has become increasingly important. While previous works have introduced several benchmarks to evaluate the safety risk of LLMs, the community still has a limited understanding of current LLMs' capability to recognize illegal and unsafe content in Chinese contexts. In this work, we present a Chinese safety benchmark (ChineseSafe) to facilitate research on the content safety of large language models. To align with the regulations for Chinese Internet content moderation, our ChineseSafe contains 205,034 examples across 4 classes and 10 sub-classes of safety issues. For Chinese contexts, we add several special types of illegal content: political sensitivity, pornography, and variant/homophonic words. Moreover, we employ two methods to evaluate the legal risks of popular LLMs, including open-sourced models and APIs. The results reveal that many LLMs exhibit vulnerability to certain types of safety issues, leading to legal risks in China. Our work provides a guideline for developers and researchers to facilitate the safety of LLMs. Our results are also available at https://huggingface.co/spaces/SUSTech/ChineseSafe-Benchmark.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
501,910
1909.12732
Alleviating Privacy Attacks via Causal Learning
Machine learning models, especially deep neural networks, have been shown to be susceptible to privacy attacks such as membership inference, where an adversary can detect whether a data point was used for training a black-box model. Such privacy risks are exacerbated when a model's predictions are used on an unseen data distribution. To alleviate privacy attacks, we demonstrate the benefit of predictive models that are based on the causal relationships between input features and the outcome. We first show that models learnt using causal structure generalize better to unseen data, especially on data from distributions different from the training distribution. Based on this generalization property, we establish a theoretical link between causality and privacy: compared to associational models, causal models provide stronger differential privacy guarantees and are more robust to membership inference attacks. Experiments on simulated Bayesian networks and the colored-MNIST dataset show that associational models exhibit up to 80% attack accuracy under different test distributions and sample sizes, whereas causal models exhibit attack accuracy close to a random guess.
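For concreteness, here is a sketch of the standard loss-thresholding membership-inference baseline (a common attack formulation, not necessarily the one used in the paper); the loss distributions are invented stand-ins.

```python
import numpy as np

# The adversary guesses "member" when the model's loss on a point is low,
# exploiting that training points tend to have lower loss.
rng = np.random.default_rng(1)

member_losses = rng.gamma(shape=1.0, scale=0.2, size=1000)     # hypothetical
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)  # hypothetical

losses = np.concatenate([member_losses, nonmember_losses])
truth = np.concatenate([np.ones(1000), np.zeros(1000)]).astype(bool)

guess = losses < np.median(losses)          # low loss -> guess "member"
print(f"attack accuracy: {np.mean(guess == truth):.2f}")  # 0.5 = random guess
```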
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
147,195
1602.02070
Compressive PCA for Low-Rank Matrices on Graphs
We introduce a novel framework for an approximate recovery of data matrices which are low-rank on graphs, from sampled measurements. The rows and columns of such matrices belong to the span of the first few eigenvectors of the graphs constructed between their rows and columns. We leverage this property to recover the non-linear low-rank structures efficiently from sampled data measurements, with a low cost (linear in n). First, a Restricted Isometry Property (RIP) condition is introduced for efficient uniform sampling of the rows and columns of such matrices based on the cumulative coherence of graph eigenvectors. Secondly, a state-of-the-art fast low-rank recovery method is suggested for the sampled data. Finally, several efficient, parallel and parameter-free decoders are presented along with their theoretical analysis for decoding the low-rank and cluster indicators for the full data matrix. Thus, we overcome the computational limitations of the standard linear low-rank recovery methods for big datasets. Our method can also be seen as a major step towards efficient recovery of non-linear low-rank structures. For a matrix of size $n \times p$, on a single core machine, our method gains a speed up of $p^2/k$ over Robust Principal Component Analysis (RPCA), where $k \ll p$ is the subspace dimension. Numerically, we can recover a low-rank matrix of size $10304 \times 1000$, 100 times faster than Robust PCA.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
51,788
1904.07687
Advanced Customer Activity Prediction based on Deep Hierarchic Encoder-Decoders
Product recommender systems and customer profiling techniques have always been a priority in online retail. Recent advances in machine learning research, together with the wide availability of massively parallel numerical computing, have enabled various approaches to and directions for advancing recommender systems. It is worth mentioning that in recent years many traditional "offline" retail businesses have been gearing more and more towards employing inferential and even predictive analytics, both for stock-related problems such as predictive replenishment and to enrich the customer interaction experience. One of the most important areas of recommender systems research and development is that of Deep Learning based models which employ representational learning to model consumer behavioral patterns. The current state of the art in Deep Learning based recommender systems uses multiple approaches, ranging from already classical methods such as those based on learning product representation vectors, to recurrent analysis of customer transactional time series, and up to generative models based on adversarial training. Each of these methods has multiple advantages and inherent weaknesses, such as the inability to understand the actual user journey, or the ability to propose only a single product recommendation or top-k product recommendations without predicting the actual next-best-offer. In our work we present a new and innovative architectural approach that applies a state-of-the-art hierarchical multi-module encoder-decoder architecture to solve several issues of current state-of-the-art recommender systems. Our approach also produces by-products such as product need-based segmentation and customer behavioral segmentation - all in an end-to-end trainable manner. Finally, we present a couple of methods, based on the proposed architecture, that address known retail & distribution pain-points.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
127,859
1806.03934
When and where do feed-forward neural networks learn localist representations?
According to parallel distributed processing (PDP) theory in psychology, neural networks (NN) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine if this assumption is correct. However, recent results from psychology, neuroscience and computer science have shown the occasional existence of local codes emerging in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known qualities. We find that the number of local codes that emerge from an NN follows a well-defined distribution across the number of hidden layer neurons, with a peak determined by the size of input data, number of examples presented and the sparsity of input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that the localist encoding may offer a resilience to noisy networks. These data suggest that localist coding can emerge from feed-forward PDP networks, and point to some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight how local codes should not be dismissed out of hand.
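One simple operationalisation of "analysing single units for local codes" is sketched below; the dominance-margin criterion is an illustrative assumption, not the paper's exact definition, and the activations are random stand-ins.

```python
import numpy as np

# A "local code" here: a hidden unit whose mean activation for one class
# clearly dominates all other classes (illustrative criterion).
rng = np.random.default_rng(0)
acts = rng.random((5000, 64))              # hypothetical activations
labels = rng.integers(0, 10, size=5000)    # class label per sample

def is_local_code(unit_acts, labels, n_classes=10, margin=0.2):
    class_means = np.array([unit_acts[labels == c].mean() for c in range(n_classes)])
    top2 = np.sort(class_means)[-2:]
    return (top2[1] - top2[0]) > margin    # one class clearly dominates

n_local = sum(is_local_code(acts[:, u], labels) for u in range(acts.shape[1]))
print(f"{n_local} of 64 units look like local codes")  # ~0 for random data
```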
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
true
100,119
2206.11776
Graph Neural Networks for Temperature-Dependent Activity Coefficient Prediction of Solutes in Ionic Liquids
Ionic liquids (ILs) are important solvents for sustainable processes and predicting activity coefficients (ACs) of solutes in ILs is needed. Recently, matrix completion methods (MCMs), transformers, and graph neural networks (GNNs) have shown high accuracy in predicting ACs of binary mixtures, superior to well-established models, e.g., COSMO-RS and UNIFAC. GNNs are particularly promising here as they learn a molecular graph-to-property relationship without pretraining, typically required for transformers, and are, unlike MCMs, applicable to molecules not included in training. For ILs, however, GNN applications are currently missing. Herein, we present a GNN to predict temperature-dependent infinite dilution ACs of solutes in ILs. We train the GNN on a database including more than 40,000 AC values and compare it to a state-of-the-art MCM. The GNN and MCM achieve similar high prediction performance, with the GNN additionally enabling high-quality predictions for ACs of solutions that contain ILs and solutes not considered during training.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
304,364
2310.04662
HalluciDet: Hallucinating RGB Modality for Person Detection Through Privileged Information
A powerful way to adapt a visual recognition model to a new domain is through image translation. However, common image translation approaches only focus on generating data from the same distribution as the target domain. Given a cross-modal application, such as pedestrian detection from aerial images, with a considerable shift in data distribution between infrared (IR) to visible (RGB) images, a translation focused on generation might lead to poor performance as the loss focuses on irrelevant details for the task. In this paper, we propose HalluciDet, an IR-RGB image translation model for object detection. Instead of focusing on reconstructing the original image on the IR modality, it seeks to reduce the detection loss of an RGB detector, and therefore avoids the need to access RGB data. This model produces a new image representation that enhances objects of interest in the scene and greatly improves detection performance. We empirically compare our approach against state-of-the-art methods for image translation and for fine-tuning on IR, and show that our HalluciDet improves detection accuracy in most cases by exploiting the privileged information encoded in a pre-trained RGB detector. Code: https://github.com/heitorrapela/HalluciDet
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
397,750
2211.15851
CSI-PPPNet: A One-Sided One-for-All Deep Learning Framework for Massive MIMO CSI Feedback
To reduce multiuser interference and maximize the spectrum efficiency in orthogonal frequency division duplexing massive multiple-input multiple-output (MIMO) systems, the downlink channel state information (CSI) estimated at the user equipment (UE) is required at the base station (BS). This paper presents a novel method for massive MIMO CSI feedback via a one-sided one-for-all deep learning framework. The CSI is compressed via linear projections at the UE, and is recovered at the BS using deep learning (DL) with plug-and-play priors (PPP). Instead of using handcrafted regularizers for the wireless channel responses, the proposed approach, namely CSI-PPPNet, exploits a DL based denoiser in place of the proximal operator of the prior in an alternating optimization scheme. In this way, a DL model trained once for denoising can be repurposed for CSI recovery tasks with arbitrary compression ratio. The one-sided one-for-all framework reduces model storage space, relieves the burden of joint model training and model delivery, and could be applied at UEs with limited device memories and computation power. Extensive experiments over the open indoor and urban macro scenarios show the effectiveness and advantages of the proposed method.
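A generic plug-and-play recovery loop is sketched below to make the PPP idea concrete. This is not the CSI-PPPNet architecture: the denoiser is a simple soft-thresholding stand-in (a trained DL denoiser in the paper), and the sparse channel and dimensions are invented.

```python
import numpy as np

# Alternate a gradient step on the data fidelity ||y - A h||^2 with a
# denoiser standing in for the prior's proximal operator.
rng = np.random.default_rng(0)
n, m = 256, 64                                  # channel dim, compressed dim
A = rng.standard_normal((m, n)) / np.sqrt(m)    # linear projection at the UE
h_true = np.zeros(n)
h_true[rng.choice(n, 8, replace=False)] = 1.0   # sparse stand-in channel
y = A @ h_true                                  # compressed feedback

def denoise(x, tau):                            # stand-in denoiser
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe gradient step size
h = np.zeros(n)
for _ in range(300):
    h = denoise(h + step * A.T @ (y - A @ h), tau=0.1 * step)

print(f"relative error: {np.linalg.norm(h - h_true) / np.linalg.norm(h_true):.3f}")
```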
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
333,410
2208.05225
How Effective is Byte Pair Encoding for Out-Of-Vocabulary Words in Neural Machine Translation?
Neural Machine Translation (NMT) is an open vocabulary problem. As a result, dealing with words not occurring during training (a.k.a. out-of-vocabulary (OOV) words) has long been a fundamental challenge for NMT systems. The predominant method to tackle this problem is Byte Pair Encoding (BPE), which splits words, including OOV words, into sub-word segments. BPE has achieved impressive results for a wide range of translation tasks in terms of automatic evaluation metrics. While it is often assumed that by using BPE, NMT systems are capable of handling OOV words, the effectiveness of BPE in translating OOV words has not been explicitly measured. In this paper, we study to what extent BPE is successful in translating OOV words at the word level. We analyze the translation quality of OOV words based on word type, number of segments, cross-attention weights, and the frequency of segment n-grams in the training data. Our experiments show that while careful BPE settings seem to be fairly useful in translating OOV words across datasets, a considerable percentage of OOV words are translated incorrectly. Furthermore, we highlight the slightly higher effectiveness of BPE in translating OOV words for special cases, such as named-entities and when the languages involved are linguistically close to each other.
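Since the abstract turns on how BPE segments OOV words, here is a minimal sketch of the mechanism: learned merges are applied to a word's characters in priority order, so a word never seen in training still decomposes into known sub-word units. The merge table below is a toy assumption; real tables are learned from corpus statistics.

```python
merges = [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]  # toy merge table

def bpe_segment(word, merges):
    symbols = list(word)
    for a, b in merges:                      # merges ranked by learned priority
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]   # apply the merge in place
            else:
                i += 1
    return symbols

print(bpe_segment("lower", merges))   # in-vocabulary: ['lower']
print(bpe_segment("lowest", merges))  # OOV word: ['low', 'e', 's', 't']
```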
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
312,348
2002.05477
Approximability of Monotone Submodular Function Maximization under Cardinality and Matroid Constraints in the Streaming Model
Maximizing a monotone submodular function under various constraints is a classical and intensively studied problem. However, in the single-pass streaming model, where the elements arrive one by one and an algorithm can store only a small fraction of input elements, there are still significant gaps in our knowledge, even though several approximation algorithms have been proposed in the literature. In this work, we present the first lower bound on the approximation ratios for cardinality and matroid constraints that beat $1-\frac{1}{e}$ in the single-pass streaming model. Let $n$ be the number of elements in the stream. Then, we prove that any (randomized) streaming algorithm for a cardinality constraint with approximation ratio $\frac{2}{2+\sqrt{2}}+\varepsilon$ requires $\Omega\left(\frac{n}{K^2}\right)$ space for any $\varepsilon>0$, where $K$ is the size limit of the output set. We also prove that any (randomized) streaming algorithm for a (partition) matroid constraint with approximation ratio $\frac{K}{2K-1}+\varepsilon$ requires $\Omega\left(\frac{n}{K}\right)$ space for any $\varepsilon>0$, where $K$ is the rank of the given matroid. In addition, we give streaming algorithms when we only have a weak oracle with which we can only evaluate function values on feasible sets. Specifically, we show weak-oracle streaming algorithms for cardinality and matroid constraints with approximation ratios $\frac{K}{2K-1}$ and $\frac{1}{2}$, respectively, whose space complexity is exponential in $K$ but is independent of $n$. The former one exactly matches the known inapproximability result for a cardinality constraint in the weak oracle model. The latter one almost matches our lower bound of $\frac{K}{2K-1}$ for a matroid constraint, which almost settles the approximation ratio for a matroid constraint that can be obtained by a streaming algorithm whose space complexity is independent of $n$.
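To make the single-pass streaming model concrete, below is a simplified threshold-based selection rule in the spirit of the classic Sieve-Streaming algorithm (Badanidiyuru et al.) for a cardinality constraint, assuming the optimal value OPT is known. This illustrates the model only; it is not the paper's algorithm or its weak-oracle variants.

```python
universe_sets = {
    "e1": {"a", "b"}, "e2": {"b", "c"}, "e3": {"d"}, "e4": {"a", "b", "c", "d"},
}

def f(S):  # monotone submodular toy function: coverage of the union
    return len(set().union(*(universe_sets[e] for e in S))) if S else 0

def sieve_stream(stream, f, K, opt):
    S = []
    for e in stream:                       # each element is seen exactly once
        if len(S) < K:
            gain = f(S + [e]) - f(S)       # marginal gain of e
            if gain >= (opt / 2 - f(S)) / (K - len(S)):
                S.append(e)
    return S

S = sieve_stream(["e1", "e2", "e3", "e4"], f, K=2, opt=4)
print(S, f(S))   # with known OPT, this rule guarantees f(S) >= OPT/2
```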
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
163,906
1912.00544
Multi-Scale Self-Attention for Text Classification
In this paper, we introduce prior knowledge of multi-scale structure into self-attention modules. We propose a Multi-Scale Transformer which uses multi-scale multi-head self-attention to capture features from different scales. Based on a linguistic perspective and an analysis of the pre-trained Transformer (BERT) on a huge corpus, we further design a strategy to control the scale distribution for each layer. Results on three different kinds of tasks (21 datasets) show that our Multi-Scale Transformer outperforms the standard Transformer consistently and significantly on small and moderate-size datasets.
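One common way to realise "multi-scale" self-attention is to restrict each head to a local window of its own size; the sketch below shows that construction (an illustrative realisation, not necessarily the paper's exact formulation).

```python
import numpy as np

def windowed_attention(q, k, v, window):
    """Scaled dot-product attention masked to a local window per token."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window   # outside this scale
    scores[mask] = -1e9                                   # effectively -inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # row-wise softmax
    return w @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))                         # 16 tokens, d = 32
heads = [windowed_attention(x, x, x, w) for w in (1, 3, 8, 16)]  # four scales
out = np.concatenate(heads, axis=-1)                      # concat heads
print(out.shape)                                          # (16, 128)
```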
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
155,807