Column schema:
id: string (lengths 9 to 16)
title: string (lengths 4 to 278)
abstract: string (lengths 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (range 0 to 541k)
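The column schema above can be sketched as a plain record with one boolean flag per category (the labels are multi-label: several flags can be true for one row). The following is a minimal, illustrative Python snippet, not part of the dataset itself; the helper `active_labels` and the row literal are hypothetical, using only field names from the schema and the id/flag of the first row below.

```python
# Each record: an arXiv id, a title, an abstract, one boolean flag per
# cs.* category (multi-label), and an integer index. Category names are
# listed in the same order as the schema above.
CATEGORIES = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(row: dict) -> list[str]:
    """Return the category names whose boolean flag is set for a row."""
    return [c for c in CATEGORIES if row.get(c, False)]

# Example shaped like the first row below (2102.00798), whose only
# true flag is cs.CV:
row = {"id": "2102.00798", "cs.CV": True}
print(active_labels(row))  # -> ['cs.CV']
```

The `row.get(c, False)` default lets a sparse row literal omit the false flags, which matches how the flags read in the dump.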
2102.00798
Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction
The recent development of Deep Neural Networks (DNN) has significantly increased the realism of AI-synthesized faces, with the most notable examples being DeepFakes. DeepFake technology can synthesize the face of a target subject from the face of another subject while retaining the same facial attributes. With the rapid growth of social media portals (Facebook, Instagram, etc.), these realistic fake faces spread quickly through the Internet, causing a broad negative impact on society. In this paper, we describe Landmark Breaker, the first dedicated method to disrupt facial landmark extraction, and apply it to obstructing the generation of DeepFake videos. Our motivation is that disrupting facial landmark extraction affects the alignment of the input face and thereby degrades DeepFake quality. Our method is achieved using adversarial perturbations. Compared to detection methods that only work after DeepFake generation, Landmark Breaker goes one step further to prevent DeepFake generation. The experiments are conducted on three state-of-the-art facial landmark extractors using the recent Celeb-DF dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
217,912
2408.12752
High-distance codes with transversal Clifford and T-gates
The non-local interactions in several quantum devices allow for the realization of more compact quantum encodings while retaining the same degree of protection against noise. Anticipating that short to medium-length codes will soon be realizable, it is important to construct stabilizer codes that, for a given code distance, admit fault-tolerant implementations of logical gates with the fewest number of physical qubits. We extract high-distance doubly even codes from the quantum quadratic-residue code family that admit a transversal implementation of the single-qubit Clifford group and block transversal implementation of the full Clifford group. Applying a doubling procedure [arXiv:1509.03239] to such codes yields a family of high-distance weak triply even codes which admit a transversal implementation of the logical $\texttt{T}$-gate. Relaxing the triply even property, we also obtain a family of triorthogonal codes which requires an even lower overhead at the cost of additional Clifford gates to achieve the same logical operation. To our knowledge, our doubly even and triorthogonal families are the shortest qubit stabilizer codes of the same distance that can realize their respective gates.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
482,863
2301.03635
Evolutionary Carrier Selection for Shared Truck Delivery Services
With multiple carriers in a logistics market, customers can choose the best carrier to deliver their products and packages. In this paper, we present a novel approach that uses a stochastic evolutionary game to analyze the decision-making of customers using the less-than-truckload (LTL) delivery service. We propose inter-related optimization and game models that allow us to analyze vehicle routing optimization for the LTL carriers and carrier selection for the customers, respectively. The stochastic evolutionary game model incorporates a small perturbation of customers' decision-making which exists due to irrationality. The solution of the stochastic evolutionary game, in terms of stochastically stable states, is characterized using a Markov chain model. The numerical results show the impact of carriers' and customers' parameters on the stable states.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
339,851
2302.02650
Tree-Based Learning on Amperometric Time Series Data Demonstrates High Accuracy for Classification
Elucidating exocytosis processes provides insights into cellular neurotransmission mechanisms and may have potential in neurodegenerative disease research. Amperometry is an established electrochemical method for the detection of neurotransmitters released from and stored inside cells. An important aspect of the amperometry method is the sub-millisecond temporal resolution of the current recordings, which leads to several hundreds of gigabytes of high-quality data. In this study, we present a universal method for the classification of diverse amperometric datasets using data-driven approaches from computational science. We demonstrate a very high prediction accuracy (greater than or equal to 95%). This includes a fully implemented end-to-end systematic machine learning workflow for amperometric time series datasets consisting of pre-processing, feature extraction, model identification, training and testing, followed by feature importance evaluation. We tested the method on heterogeneous amperometric time series datasets generated using different experimental approaches, chemical stimulations, electrode types, and varying recording times. We identified an overarching set of common features across these datasets which enables accurate predictions. Further, we showed that the information relevant for the classification of amperometric traces is neither contained in the spiky segments alone, nor can it be retrieved from just the temporal structure of the spikes. In fact, the transients between spikes and the trace baselines carry essential information for a successful classification, strongly demonstrating that an effective feature representation of amperometric time series requires the full time series. To our knowledge, this is one of the first studies to propose a scheme for machine learning, and in particular supervised learning, on full amperometry time series data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
344,076
2211.13672
A Self-Attention Ansatz for Ab-initio Quantum Chemistry
We present a novel neural network architecture using self-attention, the Wavefunction Transformer (Psiformer), which can be used as an approximation (or Ansatz) for solving the many-electron Schr\"odinger equation, the fundamental equation for quantum chemistry and material science. This equation can be solved from first principles, requiring no external training data. In recent years, deep neural networks like the FermiNet and PauliNet have been used to significantly improve the accuracy of these first-principle calculations, but they lack an attention-like mechanism for gating interactions between electrons. Here we show that the Psiformer can be used as a drop-in replacement for these other neural networks, often dramatically improving the accuracy of the calculations. On larger molecules especially, the ground state energy can be improved by dozens of kcal/mol, a qualitative leap over previous methods. This demonstrates that self-attention networks can learn complex quantum mechanical correlations between electrons, and are a promising route to reaching unprecedented accuracy in chemical calculations on larger systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
332,548
1805.12243
Novel Video Prediction for Large-scale Scene using Optical Flow
Making predictions of future frames is a critical challenge in autonomous driving research. Most existing methods for video prediction attempt to generate future frames in simple and fixed scenes. In this paper, we propose a novel and effective optical-flow-conditioned method for the task of video prediction, with an application to complex urban scenes. In contrast with previous work, the prediction model requires only video sequences and optical flow sequences for training and testing. Our method uses the rich spatial-temporal features in video sequences and takes advantage of the motion information extracted from optical flow maps between neighboring images as well as previous images. Empirical evaluations on the KITTI dataset and the Cityscapes dataset demonstrate the effectiveness of our method.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
99,122
2112.13021
Noninvasive Fetal Electrocardiography: Models, Technologies and Algorithms
The fetal electrocardiogram (fECG) was first recorded from the maternal abdominal surface in the early 1900s. During the past fifty years, the most advanced electronics technologies and signal processing algorithms have been used to convert noninvasive fetal electrocardiography into a reliable technology for fetal cardiac monitoring. In this chapter, the major signal processing techniques that have been developed for the modeling, extraction, and analysis of the fECG from noninvasive maternal abdominal recordings are reviewed and compared with one another in detail. The major topics of the chapter include: 1) the electrophysiology of the fECG from the signal processing viewpoint, 2) the mathematical model of the maternal volume conduction media and the waveform models of the fECG acquired from body surface leads, 3) the signal acquisition requirements, 4) model-based techniques for fECG noise and interference cancellation, including adaptive filters and semi-blind source separation techniques, and 5) recent algorithmic advances for fetal motion tracking and online fECG extraction from a small number of channels.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
273,120
1602.04418
Identifiability Assumptions and Algorithm for Directed Graphical Models with Feedback
Directed graphical models provide a useful framework for modeling causal or directional relationships for multivariate data. Prior work has largely focused on identifiability and search algorithms for directed acyclic graphical (DAG) models. In many applications, feedback naturally arises and directed graphical models that permit cycles occur. In this paper we address the issue of identifiability for general directed cyclic graphical (DCG) models satisfying the Markov assumption. In particular, in addition to the faithfulness assumption which has already been introduced for cyclic models, we introduce two new identifiability assumptions, one based on selecting the model with the fewest edges and the other based on selecting the DCG model that entails the maximum number of d-separation rules. We provide theoretical results comparing these assumptions which show that: (1) selecting models with the largest number of d-separation rules is strictly weaker than the faithfulness assumption; (2) unlike for DAG models, selecting models with the fewest edges does not necessarily result in a milder assumption than the faithfulness assumption. We also provide connections between our two new principles and minimality assumptions. We use our identifiability assumptions to develop search algorithms for small-scale DCG models. Our simulation study supports our theoretical results, showing that the algorithms based on our two new principles generally outperform algorithms based on the faithfulness assumption in terms of selecting the true skeleton for DCG models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
52,127
2010.05312
Covid-19 vaccination strategies with limited resources -- a model based on social network graphs
We develop a model of infection spread that takes into account the existence of a vulnerable group as well as the variability of the social relations of individuals. We develop a compartmentalized power-law model, with power-law connections between the vulnerable and the general population, considering these connections as well as the connections among the vulnerable as parameters that we vary in our tests. We use the model to study a number of vaccination strategies under two hypotheses: first, we assume a limited availability of vaccine but an infinite vaccination capacity, so that all the available doses can be administered in a short time (negligible with respect to the evolution of the epidemic). Then we assume a limited vaccination capacity, so that the doses are administered in a time non-negligible with respect to the evolution of the epidemic. We develop optimal strategies for the various social parameters, where a strategy consists of (1) the fraction of vaccine that is administered to the vulnerable population and (2) the criterion that is used to administer it to the general population. In the case of a limited vaccination capacity, the fraction (1) is a function of time, and we study how to optimize it to obtain a maximal reduction in the number of victims.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
200,065
2310.17041
On Surgical Fine-tuning for Language Encoders
Fine-tuning all the layers of a pre-trained neural language encoder (either using all the parameters or using parameter-efficient methods) is often the de-facto way of adapting it to a new task. We show evidence that for different downstream language tasks, fine-tuning only a subset of layers is sufficient to obtain performance that is close to and often better than fine-tuning all the layers in the language encoder. We propose an efficient metric based on the diagonal of the Fisher information matrix (FIM score), to select the candidate layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE tasks and across distinct language encoders, that this metric can effectively select layers leading to a strong downstream performance. Our work highlights that task-specific information corresponding to a given downstream task is often localized within a few layers, and tuning only those is sufficient for strong performance. Additionally, we demonstrate the robustness of the FIM score to rank layers in a manner that remains constant during the optimization process.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
402,968
2407.00706
Sum-of-norms regularized Nonnegative Matrix Factorization
When applying nonnegative matrix factorization (NMF), the rank parameter is generally unknown. This rank, called the nonnegative rank, is usually estimated heuristically, since computing its exact value is NP-hard. In this work, we propose an approximation method to estimate the rank while solving NMF on the fly. We use sum-of-norms (SON), a group-lasso structure that encourages pairwise similarity, to reduce the rank of a factor matrix whose rank is overestimated at the beginning. On various datasets, SON-NMF is able to reveal the correct nonnegative rank of the data without any prior knowledge or tuning. SON-NMF is a nonconvex, nonsmooth, non-separable, non-proximable problem, and solving it is nontrivial. First, as rank estimation in NMF is NP-hard, the proposed approach does not enjoy a lower computational complexity; using a graph-theoretic argument, we prove that the complexity of SON-NMF is almost irreducible. Second, the per-iteration cost of any algorithm solving SON-NMF can be high, which motivated us to propose a first-order BCD algorithm that approximately solves SON-NMF with a low per-iteration cost via the proximal average operator. Lastly, we propose a simple greedy method for post-processing. SON-NMF exhibits favourable features for applications. Besides the ability to automatically estimate the rank from data, SON-NMF can deal with rank-deficient data matrices and can detect weak components with small energy. Furthermore, in the application of hyperspectral imaging, SON-NMF handles the issue of spectral variability naturally.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,977
2311.14777
From Text to Image: Exploring GPT-4Vision's Potential in Advanced Radiological Analysis across Subspecialties
The study evaluates and compares GPT-4 and GPT-4Vision for radiological tasks, suggesting GPT-4Vision may recognize radiological features from images, thereby enhancing its diagnostic potential over text-based descriptions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
410,259
1905.04083
ES-CTC: A Deep Neuroevolution Model for Cooperative Intelligent Freeway Traffic Control
Cooperative intelligent freeway traffic control is an important application in intelligent transportation systems, and is expected to improve the mobility of freeway networks. In this paper, we propose a deep neuroevolution model, called ES-CTC, to achieve a cooperative control scheme of ramp metering, differential variable speed limit, and lane change control agents for improving freeway traffic. In this model, graph convolutional networks are used to learn meaningful spatial patterns from traffic sensors, and a knowledge-sharing layer is designed for communication between different agents. The proposed neural network structure allows different agents to share knowledge with each other and execute actions asynchronously. To address the delayed-reward and action-asynchronism issues, an evolutionary strategy is utilized to train the agents under stochastic traffic demands. The experimental results on a simulated freeway section indicate that ES-CTC is a viable approach and outperforms several existing methods.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
130,364
2304.04884
Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds
Deep normal estimators have made great strides on synthetic benchmarks. Unfortunately, their performance drops dramatically on real scan data, since they are supervised only on synthetic datasets. Point-wise annotation of ground-truth normals is prone to inefficiency and inaccuracy, which makes it impossible to build perfect real datasets for supervised deep learning. To overcome this challenge, we propose a multi-sample consensus paradigm for unsupervised normal estimation. The paradigm consists of multi-candidate sampling, candidate rejection, and mode determination; the latter two are driven by neighbor-point consensus and candidate consensus, respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net, are proposed. MSUNE minimizes a candidate consensus loss in mode determination. As a robust optimization method, it outperforms cutting-edge supervised deep learning methods on real data, at the cost of a longer runtime for sampling enough candidate normals for each query point. MSUNE-Net, the first unsupervised deep normal estimator as far as we know, advances the multi-sample consensus significantly further. It transfers the three online stages of MSUNE to offline training, so its inference is 100 times faster. Beyond that, more accurate inference is achieved, since the candidates of query points from similar patches implicitly form a sufficiently large candidate set in MSUNE-Net. Comprehensive experiments demonstrate that the two proposed unsupervised methods are noticeably superior to some supervised deep normal estimators on the most common synthetic dataset. More importantly, they show better generalization ability and outperform all SOTA conventional and deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1].
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
357,395
2308.13474
OCTAL: Graph Representation Learning for LTL Model Checking
Model checking is widely applied in verifying the correctness of complex and concurrent systems against a specification. Pure symbolic approaches, while popular, suffer from the state-space explosion problem due to the required cross-product operations, which make them prohibitively expensive for large-scale systems and/or specifications. In this paper, we propose to use graph representation learning (GRL) for solving linear temporal logic (LTL) model checking, where the system and the specification are expressed by a B{\"u}chi automaton and an LTL formula, respectively. A novel GRL-based framework, OCTAL, is designed to learn the representation of the graph-structured system and specification, which reduces the model checking problem to binary classification. Empirical experiments on two model checking scenarios show that OCTAL achieves promising accuracy, with up to $11\times$ overall speedup against canonical SOTA model checkers and $31\times$ for satisfiability checking alone.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
387,932
2409.17547
Triple Point Masking
Existing 3D mask learning methods encounter performance bottlenecks under limited data, and our objective is to overcome this limitation. In this paper, we introduce a triple point masking scheme, named TPM, which serves as a scalable framework for pre-training of masked autoencoders to achieve multi-mask learning for 3D point clouds. Specifically, we augment the baselines with two additional mask choices (i.e., medium mask and low mask) as our core insight is that the recovery process of an object can manifest in diverse ways. Previous high-masking schemes focus on capturing the global representation but lack the fine-grained recovery capability, so that the generated pre-trained weights tend to play a limited role in the fine-tuning process. With the support of the proposed TPM, available methods can exhibit more flexible and accurate completion capabilities, enabling the potential autoencoder in the pre-training stage to consider multiple representations of a single 3D object. In addition, an SVM-guided weight selection module is proposed to fill the encoder parameters for downstream networks with the optimal weight during the fine-tuning stage, maximizing linear accuracy and facilitating the acquisition of intricate representations for new objects. Extensive experiments show that the four baselines equipped with the proposed TPM achieve comprehensive performance improvements on various downstream tasks. Our code and models are available at https://github.com/liujia99/TPM.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
491,860
1806.00148
Interpreting Deep Learning: The Machine Learning Rorschach Test?
Theoretical understanding of deep learning is one of the most important tasks facing the statistics and machine learning communities. While deep neural networks (DNNs) originated as engineering methods and models of biological networks in neuroscience and psychology, they have quickly become a centerpiece of the machine learning toolbox. Unfortunately, DNN adoption, powered by recent successes combined with the open-source nature of the machine learning community, has outpaced our theoretical understanding. We cannot reliably identify when and why DNNs will make mistakes. While in some applications, like text translation, these mistakes may be comical and provide fun fodder for research talks, a single error can be very costly in tasks like medical imaging. As we utilize DNNs in increasingly sensitive applications, a better understanding of their properties is thus imperative. Recent advances in DNN theory are numerous and include many different sources of intuition, such as learning theory, sparse signal analysis, physics, chemistry, and psychology. An interesting pattern begins to emerge in the breadth of possible interpretations. The seemingly limitless approaches are mostly constrained by the lens with which the mathematical operations are viewed. Ultimately, the interpretation of DNNs appears to mimic a type of Rorschach test: a psychological test wherein subjects interpret a series of seemingly ambiguous ink-blots. Validation for DNN theory requires a convergence of the literature. We must distinguish between universal results that are invariant to the analysis perspective and those that are specific to a particular network configuration. Simultaneously, we must deal with the fact that many standard statistical tools for quantifying generalization or empirically assessing important network features are difficult to apply to DNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
99,244
2401.01053
Cheetah: Natural Language Generation for 517 African Languages
Low-resource African languages pose unique challenges for natural language processing (NLP) tasks, including natural language generation (NLG). In this paper, we develop Cheetah, a massively multilingual NLG language model for African languages. Cheetah supports 517 African languages and language varieties, allowing us to address the scarcity of NLG resources and provide a solution to foster linguistic diversity. We demonstrate the effectiveness of Cheetah through comprehensive evaluations across six generation downstream tasks. In five of the six tasks, Cheetah significantly outperforms other models, showcasing its remarkable performance for generating coherent and contextually appropriate text in a wide range of African languages. We additionally conduct a detailed human evaluation to delve deeper into the linguistic capabilities of Cheetah. The introduction of Cheetah has far-reaching benefits for linguistic diversity. By leveraging pretrained models and adapting them to specific languages, our approach facilitates the development of practical NLG applications for African communities. The findings of this study contribute to advancing NLP research in low-resource settings, enabling greater accessibility and inclusion for African languages in a rapidly expanding digital landscape. We publicly release our models for research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
419,204
2501.04729
Stability Exchange near Folds: Analysis of an end-loaded Elastica with a Lever Arm
Numerous problems in the physical sciences can be expressed as parameter-dependent variational problems. Whether members of the associated family of equilibria are physically realizable can be determined by examining their stability; hence, it is crucial to determine the stability and track its transitions. Generally, the stability characteristics of the equilibria change near folds in the parameter space. The direction of the stability change can be encoded through a particular projection of the solutions. In this article, we identify such projections for variational problems characterized by fixed-free ends, a class of problems frequently found in mechanics. Using the developed theory, we study an Elastica subject to an end load applied through a rigid lever arm. The examples reveal several instances of snap-back instability in these systems. These findings may aid in enhancing the design of soft robot arms and other innovative switching mechanisms.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
523,321
2401.11531
Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training
Cloud deep learning platforms provide cost-effective deep neural network (DNN) training for customers who lack computation resources. However, cloud systems are often untrustworthy and vulnerable to attackers, leading to growing concerns about model privacy. Recently, researchers have sought to protect data privacy in deep learning by leveraging CPU trusted execution environments (TEEs), which minimize the use of cryptography, but existing works fail to simultaneously utilize the computational resources of GPUs to assist in training while preventing model leakage. This paper presents Tempo, the first cloud-based deep learning system that cooperates with a TEE and distributed GPUs for efficient DNN training with model confidentiality preserved. To tackle the challenge of preserving privacy while offloading linear algebraic operations from the TEE to GPUs for efficient batch computation, we introduce a customized permutation-based obfuscation algorithm to blind both inputs and model parameters. An optimization mechanism that reduces encryption operations is proposed for faster weight updates during backpropagation to speed up training. We implement Tempo and evaluate it with both training and inference for two prevalent DNNs. Empirical results indicate that Tempo outperforms baselines and offers sufficient privacy protection.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
423,041
2310.13218
Deep Reinforcement Learning-Enabled Adaptive Forecasting-Aided State Estimation in Distribution Systems with Multi-Source Multi-Rate Data
Distribution system state estimation (DSSE) is paramount for effective state monitoring and control. However, stochastic outputs of renewables and asynchronous streaming of multi-rate measurements in practical systems largely degrade the estimation performance. This paper proposes a deep reinforcement learning (DRL)-enabled adaptive DSSE algorithm in unbalanced distribution systems, which tackles hybrid measurements with different time scales efficiently. We construct a three-step forecasting-aided state estimation framework, including DRL-based parameter identification, prediction, and state estimation, with multi-rate measurements incorporating limited synchrophasor data. Furthermore, a DRL-based adaptive parameter identification mechanism is embedded in the prediction step. As a novel attempt at utilizing DRL to enable DSSE adaptive to varying operating conditions, this method improves the prediction performance and further facilitates accurate state estimation. Case studies in two unbalanced feeders indicate that our method captures state variation with multi-source multi-rate data efficiently, outperforming the traditional methods.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
401,339
2502.01908
Unlocking Efficient Large Inference Models: One-Bit Unrolling Tips the Scales
Recent advancements in Large Language Model (LLM) compression, such as BitNet and BitNet b1.58, have marked significant strides in reducing the computational demands of LLMs through innovative one-bit quantization techniques. We extend this frontier by looking at Large Inference Models (LIMs) that have become indispensable across various applications. However, their scale and complexity often come at a significant computational cost. We introduce a novel approach that leverages one-bit algorithm unrolling, effectively integrating information from the physical world in the model architecture. Our method achieves a bit-per-link rate significantly lower than the 1.58 bits reported in prior work, thanks to the natural sparsity that emerges in our network architectures. We numerically demonstrate that the proposed one-bit algorithm unrolling scheme can improve both training and test outcomes by effortlessly increasing the number of layers while substantially compressing the network. Additionally, we provide theoretical results on the generalization gap, convergence rate, stability, and sensitivity of our proposed one-bit algorithm unrolling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
530,084
2404.15074
Outage Probability Analysis of Wireless Paths with Faulty Reconfigurable Intelligent Surfaces
We consider a next generation wireless network incorporating a base station and a set of typically low-cost and faulty Reconfigurable Intelligent Surfaces (RISs). The base station needs to select the path, including the RIS, that provides the maximum signal-to-noise ratio (SNR) to the user. We study the effect of the number of elements, distance, and RIS hardware failure on the path outage probability, and, based on the known signal propagation model at high frequencies, derive a closed-form expression for the said probability of outage. Numerical results show the path outage likelihood as a function of the probability of hardware failure of RIS elements, the number of elements, and the distance between mobile users and the RIS.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
448,934
2406.17542
CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization
Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks. But their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique for addressing this challenge, enabling the compression of large models with minimal impact on performance. The recent GPTQ algorithm, a post-training quantization (PTQ) method, has proven highly effective for compressing LLMs, sparking a wave of research that leverages GPTQ as a core component. Recognizing the pivotal role of GPTQ in the PTQ landscape, we introduce CDQuant, a simple and scalable alternative to GPTQ with improved performance. CDQuant uses greedy coordinate descent to minimize the layer-wise reconstruction loss to achieve high-quality quantized weights. Our algorithm is easy to implement and scales efficiently to models with hundreds of billions of parameters. We perform extensive evaluation on the Gemma and PaLM2 model families, and demonstrate that CDQuant consistently outperforms GPTQ in 2-4 bit weight quantization. Moreover, CDQuant improves the performance of state-of-the-art PTQ techniques such as QuIP and FrameQuant when used as a replacement for their GPTQ component, resulting in further gains in quality.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
467,616
2305.08883
Watermarking Text Generated by Black-Box Language Models
LLMs now exhibit human-like skills in various fields, leading to worries about misuse. Thus, detecting generated text is crucial. However, passive detection methods are stuck in domain specificity and limited adversarial robustness. To achieve reliable detection, a watermark-based method was proposed for white-box LLMs, allowing them to embed watermarks during text generation. The method involves randomly dividing the model vocabulary to obtain a special list and adjusting the probability distribution to promote the selection of words in the list. A detection algorithm aware of the list can identify the watermarked text. However, this method is not applicable in many real-world scenarios where only black-box language models are available. For instance, third parties that develop API-based vertical applications cannot watermark text themselves because API providers only supply generated text and withhold probability distributions to shield their commercial interests. To allow third parties to autonomously inject watermarks into generated text, we develop a watermarking framework for black-box language model usage scenarios. Specifically, we first define a binary encoding function to compute a random binary encoding corresponding to a word. The encodings computed for non-watermarked text conform to a Bernoulli distribution, wherein the probability of a word representing bit-1 is approximately 0.5. To inject a watermark, we alter the distribution by selectively replacing words representing bit-0 with context-based synonyms that represent bit-1. A statistical test is then used to identify the watermark. Experiments demonstrate the effectiveness of our method on both Chinese and English datasets. Furthermore, results under re-translation, polishing, word deletion, and synonym substitution attacks reveal that it is arduous to remove the watermark without compromising the original semantics.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
364,446
2105.04222
Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State Tracking
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot description enhanced generative approach for zero-shot cross-domain DST. Specifically, our model first encodes dialogue context and slots with a pre-trained self-attentive encoder, and generates slot values in an auto-regressive manner. In addition, we incorporate Slot Type Informed Descriptions that capture the shared information across slots to facilitate cross-domain knowledge transfer. Experimental results on the MultiWOZ dataset show that our proposed method significantly improves existing state-of-the-art results in the zero-shot cross-domain setting.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
234,433
1401.1753
A Solution of Degree Constrained Spanning Tree Using Hybrid GA
In real life, there is always an urge to reach our goal with minimum effort, i.e., along a minimum-constrained path. The path may be the shortest route in practical life, in either a physical or an electronic medium. The scenario is to represent the ambiance as a graph and to find a spanning tree with custom design criteria. Here, we have chosen a minimum degree spanning tree, which can be generated in real time with minimum turnaround time. The problem is NP-complete in nature [1, 2]. The solution approach, in general, is approximate. We have used a heuristic approach, namely a hybrid genetic algorithm (GA), with motivated criteria of encoded data structures of the graph. We compare the experimental result with the existing approximate algorithm, and the result is so encouraging that we are interested in using it in our future applications.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
29,680
2110.03232
Design of an Intelligent Vision Algorithm for Recognition and Classification of Apples in an Orchard Scene
Apple is a remarkable fresh fruit that contains a high degree of nutritional and medicinal value. Hand harvesting of apples by seasonal farmworkers increases physical damage on the surface of these fruits, which causes a great loss in marketing quality. The main objective of this study is to design a robust vision algorithm for robotic apple harvesters. The proposed algorithm is able to recognize and classify 4 classes of objects found in an orchard scene, including apples, leaves, trunk and branches, and sky, into two classes: apples and non-apples. 100 digital images of Red Delicious apples and 100 digital images of Golden Delicious apples were selected among 1000 captured images of apples from 18 apple gardens in West Azerbaijan, Iran. An image processing algorithm is proposed for segmentation and extraction of the image classes based on the color characteristics of the mentioned classes. Invariant moments were chosen as the features extracted from the segmented classes, e.g. apples. Multilayer Feedforward Neural Networks (MFNNs) were used as an artificial intelligence tool for the recognition and classification of image classes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
259,418
2106.00576
Exposing Previously Undetectable Faults in Deep Neural Networks
Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g. image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. But this limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test inputs that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
238,178
1906.09762
Closed-Form Delay-Optimal Computation Offloading in Mobile Edge Computing Systems
Mobile edge computing (MEC) has recently emerged as a promising technology to release the tension between computation-intensive applications and resource-limited mobile terminals (MTs). In this paper, we study the delay-optimal computation offloading in computation-constrained MEC systems. We consider the computation task queue at the MEC server due to its constrained computation capability. In this case, the task queue at the MT and that at the MEC server are strongly coupled in a cascade manner, which creates complex interdependencies and brings new technical challenges. We model the computation offloading problem as an infinite horizon average cost Markov decision process (MDP), and approximate it to a virtual continuous time system (VCTS) with reflections. Different from most of the existing works, we develop the dynamic instantaneous rate estimation for deriving the closed-form approximate priority functions in different scenarios. Based on the approximate priority functions, we propose a closed-form multi-level water-filling computation offloading solution to characterize the influence of not only the local queue state information (LQSI) but also the remote queue state information (RQSI). An extension is provided from single MT single MEC server scenarios to multiple MTs multiple MEC servers scenarios, and several insights are derived. Finally, the simulation results show that the proposed scheme outperforms the conventional schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
136,260
2408.04708
MulliVC: Multi-lingual Voice Conversion With Cycle Consistency
Voice conversion aims to modify the source speaker's voice to resemble the target speaker while preserving the original speech content. Despite notable recent advancements in voice conversion, multi-lingual voice conversion (including both monolingual and cross-lingual scenarios) has yet to be extensively studied. It faces two main challenges: 1) the considerable variability in prosody and articulation habits across languages; and 2) the rarity of paired multi-lingual datasets from the same speaker. In this paper, we propose MulliVC, a novel voice conversion system that only converts timbre and keeps the original content and source language prosody without multi-lingual paired data. Specifically, each training step of MulliVC contains three substeps: in step one, the model is trained with monolingual speech data; then, steps two and three take inspiration from back translation, constructing a cyclical process to disentangle the timbre and other information (content, prosody, and other language-related information) in the absence of multi-lingual data from the same speaker. Both objective and subjective results indicate that MulliVC significantly surpasses other methods in both monolingual and cross-lingual contexts, demonstrating the system's efficacy and the viability of the three-step approach with cycle consistency. Audio samples can be found on our demo page (mullivc.github.io).
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
479,498
2307.01069
Shi-NeSS: Detecting Good and Stable Keypoints with a Neural Stability Score
Learning a feature point detector presents a challenge both due to the ambiguity of the definition of a keypoint and, correspondingly, the need for specially prepared ground-truth labels for such points. In our work, we address both of these issues by utilizing a combination of a hand-crafted Shi detector and a neural network. We build on the principled and localized keypoints provided by the Shi detector and perform their selection using the keypoint stability score regressed by the neural network - the Neural Stability Score (NeSS). Therefore, our method is named Shi-NeSS, since it combines the Shi detector and the properties of the keypoint stability score, and it only requires sets of images for training, without dataset pre-labeling or the need for reconstructed correspondence labels. We evaluate Shi-NeSS on HPatches, ScanNet, MegaDepth and IMC-PT, demonstrating state-of-the-art performance and good generalization on downstream tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
377,224
1910.11106
Label-Conditioned Next-Frame Video Generation with Neural Flows
Recent state-of-the-art video generation systems employ Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) to produce novel videos. However, VAE models typically produce blurry outputs when faced with sub-optimal conditioning of the input, and GANs are known to be unstable for large output sizes. In addition, the output videos of these models are difficult to evaluate, partly because the GAN loss function is not an accurate measure of convergence. In this work, we propose using a state-of-the-art neural flow generator called Glow to generate videos conditioned on a textual label, one frame at a time. Neural flow models are more stable than standard GANs, as they only optimize a single cross entropy loss function, which is monotonic and avoids the circular convergence issues of the GAN minimax objective. In addition, we also show how to condition Glow on external context, while still preserving the invertible nature of each "flow" layer. Finally, we evaluate the proposed Glow model by calculating cross entropy on a held-out validation set of videos, in order to compare multiple versions of the proposed model via an ablation study. We show generated videos and discuss future improvements.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
150,685
1910.08216
A language processing algorithm for predicting tactical solutions to an operational planning problem under uncertainty
This paper is devoted to the prediction of solutions to a stochastic discrete optimization problem. Through an application, we illustrate how we can use a state-of-the-art neural machine translation (NMT) algorithm to predict the solutions by defining appropriate vocabularies, syntaxes and constraints. We attend to applications where the predictions need to be computed in very short computing time -- in the order of milliseconds or less. The results show that with minimal adaptations to the model architecture and hyperparameter tuning, the NMT algorithm can produce accurate solutions within the computing time budget. While these predictions are slightly less accurate than approximate stochastic programming solutions (sample average approximation), they can be computed faster and with less variability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
149,805
2111.10078
Defeating Catastrophic Forgetting via Enhanced Orthogonal Weights Modification
The ability of neural networks (NNs) to learn and remember multiple tasks sequentially faces tough challenges on the road to general artificial intelligence, due to their catastrophic forgetting (CF) issue. Fortunately, the latest OWM (Orthogonal Weights Modification) and several other continual learning (CL) methods suggest some promising ways to overcome the CF issue. However, none of the existing CL methods explores the following three crucial questions for effectively overcoming CF: what knowledge contributes to the effective weight modification of the NN during its sequential task learning? When the data distribution of a new learning task changes with respect to the previously learned tasks, should a uniform or task-specific weight modification strategy be adopted? What is the upper bound on the number of tasks a given CL method can learn sequentially? To address these questions, we first reveal that the weight gradient of a new learning task is determined by both the input space of the new task and the weight space of the previously learned tasks. Based on this observation and the recursive least squares method, we propose a new efficient and effective continual learning method, EOWM, via enhanced OWM. We also theoretically derive the upper bound on the number of tasks that our EOWM can learn sequentially. Extensive experiments conducted on the benchmarks demonstrate that our EOWM is effective and outperforms all of the state-of-the-art CL baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
267,201
2102.03674
Generating Artificial Core Users for Interpretable Condensed Data
Recent work has shown that in a dataset of user ratings on items there exists a group of Core Users who hold most of the information necessary for recommendation. This set of Core Users can be as small as 20 percent of the users. Core Users can be used to make predictions for out-of-sample users without much additional work. Since Core Users substantially shrink a ratings dataset without much loss of information, they can be used to improve recommendation efficiency. We propose a method, combining latent factor models, ensemble boosting and K-means clustering, to generate a small set of Artificial Core Users (ACUs) from real Core User data. Our ACUs have dense rating information, and improve the recommendation performance of real Core Users while remaining interpretable.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
218,827
2311.13231
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences, then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, optimal architecture, and manual hyperparameter tuning, making the process both time and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model. However, the extensive GPU memory requirement of the diffusion model's denoising process hinders the direct application of the DPO method. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. The theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained using human feedback data to guide the learning process. This approach requires no training of a reward model, proving to be more direct, cost-effective, and minimizing computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering comparable results to methods using ground-truth rewards. Moreover, D3PO demonstrates the ability to reduce image distortion rates and generate safer images, overcoming challenges lacking robust reward models. Our code is publicly available at https://github.com/yk7333/D3PO.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
409,675
1909.02688
AutoGMM: Automatic and Hierarchical Gaussian Mixture Modeling in Python
Background: Gaussian mixture modeling is a fundamental tool in clustering, as well as discriminant analysis and semiparametric density estimation. However, estimating the optimal model for any given number of components is an NP-hard problem, and estimating the number of components is in some respects an even harder problem. Findings: In R, a popular package called mclust addresses both of these problems. However, Python has lacked such a package. We therefore introduce AutoGMM, a Python algorithm for automatic Gaussian mixture modeling, and its hierarchical version, HGMM. AutoGMM builds upon scikit-learn's AgglomerativeClustering and GaussianMixture classes, with certain modifications to make the results more stable. Empirically, on several different applications, AutoGMM performs approximately as well as mclust, and sometimes better. Conclusions: AutoGMM, a freely available Python package, enables efficient Gaussian mixture modeling by automatically selecting the initialization, number of clusters and covariance constraints.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
144,258
1610.02567
Mining the Web for Pharmacovigilance: the Case Study of Duloxetine and Venlafaxine
Adverse reactions caused by drugs following their release into the market are among the leading causes of death in many countries. The rapid growth of electronically available health-related information, and the ability to process large volumes of it automatically, using natural language processing (NLP) and machine learning algorithms, have opened new opportunities for pharmacovigilance. A survey found that more than 70% of US Internet users consult the Internet when they require medical information. In recent years, research in this area has addressed Adverse Drug Reaction (ADR) pharmacovigilance using social media, mainly Twitter and medical forums and websites. This paper shows the information which can be collected from a variety of Internet data sources and search engines, mainly Google Trends and Google Correlate. While considering the case study of two popular Major Depressive Disorder (MDD) drugs, Duloxetine and Venlafaxine, we provide a comparative analysis of their reactions using publicly-available alternative data sources.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
62,119
2010.03146
Unsupervised Parsing via Constituency Tests
We propose a method for unsupervised parsing based on the linguistic notion of a constituency test. One type of constituency test involves modifying the sentence via some transformation (e.g. replacing the span with a pronoun) and then judging the result (e.g. checking if it is grammatical). Motivated by this idea, we design an unsupervised parser by specifying a set of transformations and using an unsupervised neural acceptability model to make grammaticality decisions. To produce a tree given a sentence, we score each span by aggregating its constituency test judgments, and we choose the binary tree with the highest total score. While this approach already achieves performance in the range of current methods, we further improve accuracy by fine-tuning the grammaticality model through a refinement procedure, where we alternate between improving the estimated trees and improving the grammaticality model. The refined model achieves 62.8 F1 on the Penn Treebank test set, an absolute improvement of 7.6 points over the previous best published result.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
199,294
2207.02736
Characterizing disruptions in online gaming behavior following software patches
Multiplayer online games are ideal settings for studying the effects of technological disruptions on social behavior. Software patches to online games cause significant changes to the game's rules and require players to develop new strategies to cope with these disruptions. We surveyed players, analyzed the content of software patch notes, and analyzed changes to the character selection behaviors in more than 53 million matches of Dota 2 in the days before and after software patches over a 30-month period. We found that the severity of patches is correlated with the magnitude of behavioral changes following a patch. We discuss the opportunities of leveraging software patches to online games as a valuable but overlooked empirical instrument for measuring behavioral dynamics.
true
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
true
306,604
2404.02595
QFNN-FFD: Quantum Federated Neural Network for Financial Fraud Detection
This study introduces the Quantum Federated Neural Network for Financial Fraud Detection (QFNN-FFD), a cutting-edge framework merging Quantum Machine Learning (QML) and quantum computing with Federated Learning (FL) for financial fraud detection. Using quantum technologies' computational power and the robust data privacy protections offered by FL, QFNN-FFD emerges as a secure and efficient method for identifying fraudulent transactions within the financial sector. Implementing a dual-phase training model across distributed clients enhances data integrity and enables superior performance metrics, achieving precision rates consistently above 95%. Additionally, QFNN-FFD demonstrates exceptional resilience by maintaining an impressive 80% accuracy, highlighting its robustness and readiness for real-world applications. This combination of high performance, security, and robustness against noise positions QFNN-FFD as a transformative advancement in financial technology solutions and establishes it as a new benchmark for privacy-focused fraud detection systems. This framework facilitates the broader adoption of secure, quantum-enhanced financial services and inspires future innovations that could use QML to tackle complex challenges in other areas requiring high confidentiality and accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
443,931
1206.6404
Policy Gradients with Variance Related Risk Criteria
Managing risk in dynamic decision problems is of cardinal importance in many fields such as finance and process control. The most common approach to defining risk is through various variance related criteria such as the Sharpe Ratio or the standard deviation adjusted reward. It is known that optimizing many of the variance related risk criteria is NP-hard. In this paper we devise a framework for local policy gradient style algorithms for reinforcement learning for variance related criteria. Our starting point is a new formula for the variance of the cost-to-go in episodic tasks. Using this formula we develop policy gradient algorithms for criteria that involve both the expected cost and the variance of the cost. We prove the convergence of these algorithms to local minima and demonstrate their applicability in a portfolio planning problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
16,939
2406.04746
PQPP: A Joint Benchmark for Text-to-Image Prompt and Query Performance Prediction
Text-to-image generation has recently emerged as a viable alternative to text-to-image retrieval, due to the visually impressive results of generative diffusion models. Although query performance prediction is an active research topic in information retrieval, to the best of our knowledge, there is no prior study that analyzes the difficulty of queries (prompts) in text-to-image generation, based on human judgments. To this end, we introduce the first dataset of prompts which are manually annotated in terms of image generation performance. In order to determine the difficulty of the same prompts in image retrieval, we also collect manual annotations that represent retrieval performance. We thus propose the first benchmark for joint text-to-image prompt and query performance prediction, comprising 10K queries. Our benchmark enables: (i) the comparative assessment of the difficulty of prompts/queries in image generation and image retrieval, and (ii) the evaluation of prompt/query performance predictors addressing both generation and retrieval. We present results with several pre-generation/retrieval and post-generation/retrieval performance predictors, thus providing competitive baselines for future research. Our benchmark and code are publicly available under the CC BY 4.0 license at https://github.com/Eduard6421/PQPP.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
461,823
2001.05571
On Model Evaluation under Non-constant Class Imbalance
Many real-world classification problems are significantly class-imbalanced to the detriment of the class of interest. The standard set of proper evaluation metrics is well-known, but the usual assumption is that the test dataset imbalance equals the real-world imbalance. In practice, this assumption is often broken for various reasons. The reported results are then often too optimistic and may lead to wrong conclusions about industrial impact and suitability of proposed techniques. We introduce methods focusing on evaluation under non-constant class imbalance. We show that not only the absolute values of commonly used metrics, but even the order of classifiers in relation to the evaluation metric used, is affected by the change of the imbalance rate. Finally, we demonstrate that using subsampling in order to get a test dataset with class imbalance equal to the one observed in the wild is not necessary, and eventually can lead to significant errors in the classifier's performance estimate.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
160,576
2208.06416
Uni6Dv2: Noise Elimination for 6D Pose Estimation
Uni6D is the first 6D pose estimation approach to employ a unified backbone network to extract features from both RGB and depth images. We discover that the principal causes of Uni6D's performance limitations are Instance-Outside and Instance-Inside noise. Uni6D's simple pipeline design inherently introduces Instance-Outside noise from background pixels in the receptive field, while ignoring Instance-Inside noise in the input depth data. In this paper, we propose a two-step denoising approach for dealing with the aforementioned noise in Uni6D. To reduce noise from non-instance regions, an instance segmentation network is utilized in the first step to crop and mask the instance. A lightweight depth denoising module is proposed in the second step to calibrate the depth feature before feeding it into the pose regression network. Extensive experiments show that our Uni6Dv2 reliably and robustly eliminates noise, outperforming Uni6D without sacrificing too much inference efficiency. It also reduces the need for annotated real data that requires costly labeling.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
312,712
2401.08135
Machine Learning-Based Malicious Vehicle Detection for Security Threats and Attacks in Vehicle Ad-hoc Network (VANET) Communications
With the rapid growth of Vehicle Ad-hoc Network (VANET) as a promising technology for efficient and reliable communication among vehicles and infrastructure, the security and integrity of VANET communications have become a critical concern. One of the significant threats to VANET is the presence of blackhole attacks, where malicious nodes disrupt the network's functionality and compromise data confidentiality, integrity, and availability. In this paper, we propose a machine learning-based approach for blackhole detection in VANET. To achieve this task, we first create a comprehensive dataset comprising normal and malicious traffic flows. Afterward, we study and define a promising set of features to discriminate the blackhole attacks. Finally, we evaluate various machine learning algorithms, including Gradient Boosting, Random Forest, Support Vector Machines, k-Nearest Neighbors, Gaussian Naive Bayes, and Logistic Regression. Experimental results demonstrate the effectiveness of these algorithms in distinguishing between normal and malicious nodes. Our findings also highlight the potential of machine learning-based approaches in enhancing the security of VANET by detecting and mitigating blackhole attacks.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
421,777
2301.13359
IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing
Image anomaly detection (IAD) is an emerging and vital computer vision task in industrial manufacturing (IM). Recently, many advanced algorithms have been reported, but their performance deviates considerably with various IM settings. We realize that the lack of a uniform IM benchmark is hindering the development and usage of IAD methods in real-world applications. In addition, it is difficult for researchers to analyze IAD algorithms without a uniform benchmark. To solve this problem, we propose a uniform IM benchmark, for the first time, to assess how well these algorithms perform, which includes various levels of supervision (unsupervised versus fully supervised), learning paradigms (few-shot, continual and noisy label), and efficiency (memory usage and inference speed). Then, we construct a comprehensive image anomaly detection benchmark (IM-IAD), which includes 19 algorithms on seven major datasets with a uniform setting. Extensive experiments (17,017 total) on IM-IAD provide in-depth insights into IAD algorithm redesign or selection. Moreover, the proposed IM-IAD benchmark challenges existing algorithms and suggests future research directions. To foster reproducibility and accessibility, the source code of IM-IAD is uploaded on the website, https://github.com/M-3LAB/IM-IAD.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
342,873
1902.09155
CityJSON: a compact and easy-to-use encoding of the CityGML data model
The international standard CityGML is both a data model and an exchange format to store digital 3D models of cities. While the data model is used by several cities, companies, and governments, in this paper we argue that its XML-based exchange format has several drawbacks. These drawbacks mean that it is difficult for developers to implement parsers for CityGML, and that practitioners have, as a consequence, to convert their data to other formats if they want to exchange them with others. We present CityJSON, a new JSON-based exchange format for the CityGML data model (version 2.0.0). CityJSON was designed with programmers in mind, so that software and APIs supporting it can be quickly built. It was also designed to be compact (a compression factor of around six with real-world datasets), and to be friendly for web and mobile development. We argue that it is considerably easier to use than the CityGML format, both for reading and for creating datasets. We discuss in this paper the main features of CityJSON, briefly present the different software packages to parse/view/edit/create files (including one to automatically convert between the JSON and GML encodings), analyse how real-world datasets compare to those of CityGML, and we also introduce \emph{Extensions}, which allow us to extend the core data model in a documented manner.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
122,357
1806.02375
Understanding Batch Normalization
Batch normalization (BN) is a technique to normalize activations in intermediate layers of deep neural networks. Its tendency to improve accuracy and speed up training has established BN as a favorite technique in deep learning. Yet, despite its enormous success, there remains little consensus on the exact reason and mechanism behind these improvements. In this paper we take a step towards a better understanding of BN, following an empirical approach. We conduct several experiments, and show that BN primarily enables training with larger learning rates, which is the cause for faster convergence and better generalization. For networks without BN we demonstrate how large gradient updates can result in diverging loss and activations growing uncontrollably with network depth, which limits possible learning rates. BN avoids this problem by constantly correcting activations to be zero-mean and of unit standard deviation, which enables larger gradient steps, yields faster convergence and may help bypass sharp local minima. We further show various ways in which gradients and activations of deep unnormalized networks are ill-behaved. We contrast our results against recent findings in random matrix theory, shedding new light on classical initialization schemes and their consequences.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
99,764
1709.06919
Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search
One of the most interesting features of Bayesian optimization for direct policy search is that it can leverage priors (e.g., from simulation or from previous tasks) to accelerate learning on a robot. In this paper, we are interested in situations for which several priors exist but we do not know in advance which one fits best the current situation. We tackle this problem by introducing a novel acquisition function, called Most Likely Expected Improvement (MLEI), that combines the likelihood of the priors and the expected improvement. We evaluate this new acquisition function on a transfer learning task for a 5-DOF planar arm and on a possibly damaged, 6-legged robot that has to learn to walk on flat ground and on stairs, with priors corresponding to different stairs and different kinds of damages. Our results show that MLEI effectively identifies and exploits the priors, even when there is no obvious match between the current situations and the priors.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
true
false
false
81,199
2301.13173
Shape-aware Text-driven Layered Video Editing
Temporal consistency is essential for video editing applications. Existing work on layered representation of videos allows propagating edits consistently to each frame. These methods, however, can only edit object appearance rather than object shape changes due to the limitation of using a fixed UV mapping field for texture atlas. We present a shape-aware, text-driven video editing method to tackle this challenge. To handle shape changes in video editing, we first propagate the deformation field between the input and edited keyframe to all frames. We then leverage a pre-trained text-conditioned diffusion model as guidance for refining shape distortion and completing unseen regions. The experimental results demonstrate that our method can achieve shape-aware consistent video editing and compare favorably with the state-of-the-art.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
342,808
2301.03142
Exploration in Model-based Reinforcement Learning with Randomized Reward
Model-based Reinforcement Learning (MBRL) has been widely adopted due to its sample efficiency. However, existing worst-case regret analysis typically requires optimistic planning, which is not realistic in general. In contrast, motivated by the theory, empirical study utilizes an ensemble of models, which achieves state-of-the-art performance on various testing environments. Such deviation between theory and empirical study leads us to question whether a randomized model ensemble guarantees optimism, and hence the optimal worst-case regret. This paper partially answers this question from the perspective of reward randomization, a scarcely explored direction of exploration with MBRL. We show that under the kernelized linear regulator (KNR) model, reward randomization guarantees a partial optimism, which further yields a near-optimal worst-case regret in terms of the number of interactions. We further extend our theory to generalized function approximation and identify conditions for reward randomization to attain provably efficient exploration. Correspondingly, we propose concrete examples of efficient reward randomization. To the best of our knowledge, our analysis establishes the first worst-case regret analysis on randomized MBRL with function approximation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
339,711
2007.15831
OREBA: A Dataset for Objectively Recognizing Eating Behaviour and Associated Intake
Automatic detection of intake gestures is a key element of automatic dietary monitoring. Several types of sensors, including inertial measurement units (IMU) and video cameras, have been used for this purpose. The common machine learning approaches make use of the labeled sensor data to automatically learn how to make detections. One characteristic, especially for deep learning models, is the need for large datasets. To meet this need, we collected the Objectively Recognizing Eating Behavior and Associated Intake (OREBA) dataset. The OREBA dataset aims to provide comprehensive multi-sensor data recorded during the course of communal meals for researchers interested in intake gesture detection. Two scenarios are included, with 100 participants for a discrete dish and 102 participants for a shared dish, totalling 9069 intake gestures. Available sensor data consists of synchronized frontal video and IMU with accelerometer and gyroscope for both hands. We report the details of data collection and annotation, as well as details of sensor processing. The results of studies on IMU and video data involving deep learning models are reported to provide a baseline for future research. Specifically, the best baseline models achieve performances of $F_1$ = 0.853 for the discrete dish using video and $F_1$ = 0.852 for the shared dish using inertial data.
true
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
189,773
1912.05888
Variational Coupling Revisited: Simpler Models, Theoretical Connections, and Novel Applications
Variational models with coupling terms are becoming increasingly popular in image analysis. They involve auxiliary variables, such that their energy minimisation splits into multiple fractional steps that can be solved easier and more efficiently. In our paper we show that coupling models offer a number of interesting properties that go far beyond their obvious numerical benefits. We demonstrate that discontinuity-preserving denoising can be achieved even with quadratic data and smoothness terms, provided that the coupling term involves the $L^1$ norm. We show that such an $L^1$ coupling term provides additional information as a powerful edge detector that has remained unexplored so far. While coupling models in the literature approximate higher order regularisation, we argue that already first order coupling models can be useful. As a specific example, we present a first order coupling model that outperforms classical TV regularisation. It also establishes a theoretical connection between TV regularisation and the Mumford-Shah segmentation approach. Unlike other Mumford-Shah algorithms, it is a strictly convex approximation, for which we can guarantee convergence of a split Bregman algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,212
2502.06802
Solving the Content Gap in Roblox Game Recommendations: LLM-Based Profile Generation and Reranking
With the vast and dynamic user-generated content on Roblox, creating effective game recommendations requires a deep understanding of game content. Traditional recommendation models struggle with the inconsistent and sparse nature of game text features such as titles and descriptions. Recent advancements in large language models (LLMs) offer opportunities to enhance recommendation systems by analyzing in-game text data. This paper addresses two challenges: generating high-quality, structured text features for games without extensive human annotation, and validating these features to ensure they improve recommendation relevance. We propose an approach that extracts in-game text and uses LLMs to infer attributes such as genre and gameplay objectives from raw player interactions. Additionally, we introduce an LLM-based re-ranking mechanism to assess the effectiveness of the generated text features, enhancing personalization and user satisfaction. Beyond recommendations, our approach supports applications such as user engagement-based integrity detection, already deployed in production. This scalable framework demonstrates the potential of in-game text understanding to improve recommendation quality on Roblox and adapt recommendations to its unique, user-generated ecosystem.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
532,249
0806.1834
A Low-decoding-complexity, Large coding Gain, Full-rate, Full-diversity STBC for 4 X 2 MIMO System
This paper proposes a low decoding complexity, full-diversity and full-rate space-time block code (STBC) for 4 transmit and 2 receive ($4\times 2$) multiple-input multiple-output (MIMO) systems. For such systems, the best code known is the DjABBA code and recently, Biglieri, Hong and Viterbo have proposed another STBC (BHV code) which has lower decoding complexity than DjABBA but does not have full-diversity like the DjABBA code. The code proposed in this paper has the same decoding complexity as the BHV code for square QAM constellations but has full-diversity as well. Compared to the best code in the DjABBA family of codes, our code has lower decoding complexity, a better coding gain and hence a better error performance as well. Simulation results confirming these are presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,906
2210.02596
Role of Deep Learning in Wireless Communications
Traditional communication system design has always been based on the paradigm of first establishing a mathematical model of the communication channel, then designing and optimizing the system according to the model. The advent of modern machine learning techniques, specifically deep neural networks, has opened up opportunities for data-driven system design and optimization. This article draws examples from the optimization of reconfigurable intelligent surface, distributed channel estimation and feedback for multiuser beamforming, and active sensing for millimeter wave (mmWave) initial alignment to illustrate that a data-driven design that bypasses explicit channel modelling can often discover excellent solutions to communication system design and optimization problems that are otherwise computationally difficult to solve. We show that by performing an end-to-end training of a deep neural network using a large number of channel samples, a machine learning based approach can potentially provide significant system-level improvements as compared to the traditional model-based approach for solving optimization problems. The key to the successful applications of machine learning techniques is in choosing the appropriate neural network architecture to match the underlying problem structure.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
321,701
1512.07103
A Class of Linear Codes with a Few Weights
Linear codes have been an interesting subject of study for many years, as linear codes with few weights have applications in secret sharing, authentication codes, association schemes, and strongly regular graphs. In this paper, a class of linear codes with a few weights over the finite field $\gf(p)$ are presented and their weight distributions are also determined, where $p$ is an odd prime. Some of the linear codes obtained are optimal in the sense that they meet certain bounds on linear codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
50,390
2108.02924
Interpretable Visual Understanding with Cognitive Attention Network
While image understanding on recognition-level has achieved remarkable advancements, reliable visual scene understanding requires comprehensive image understanding not only on recognition-level but also on cognition-level, which calls for exploiting the multi-source information as well as learning different levels of understanding and extensive commonsense knowledge. In this paper, we propose a novel Cognitive Attention Network (CAN) for visual commonsense reasoning to achieve interpretable visual understanding. Specifically, we first introduce an image-text fusion module to fuse information from images and text collectively. Second, a novel inference module is designed to encode commonsense among image, query and response. Extensive experiments on the large-scale Visual Commonsense Reasoning (VCR) benchmark dataset demonstrate the effectiveness of our approach. The implementation is publicly available at https://github.com/tanjatang/CAN
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
249,495
1706.07440
End-to-end Conversation Modeling Track in DSTC6
End-to-end training of neural networks is a promising approach to automatic construction of dialog systems using a human-to-human dialog corpus. Recently, Vinyals et al. tested neural conversation models using OpenSubtitles. Lowe et al. released the Ubuntu Dialogue Corpus for researching unstructured multi-turn dialogue systems. Furthermore, the approach has been extended to accomplish task oriented dialogs to provide information properly with natural conversation. For example, Ghazvininejad et al. proposed a knowledge grounded neural conversation model [3], where the research is aiming at combining conversational dialogs with task-oriented knowledge using unstructured data such as Twitter data for conversation and Foursquare data for external knowledge. However, the task is still limited to a restaurant information service, and has not yet been tested with a wide variety of dialog tasks. In addition, it is still unclear how to create intelligent dialog systems that can respond like a human agent. In consideration of these problems, we proposed a challenge track to the 6th dialog system technology challenges (DSTC6) using human-to-human dialog data to mimic human dialog behaviors. The focus of the challenge track is to train end-to-end conversation models from human-to-human conversation and accomplish end-to-end dialog tasks in various situations assuming a customer service, in which a system plays the role of a human agent and generates natural and informative sentences in response to user's questions or comments given dialog context.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
75,839
1604.06486
Humans and deep networks largely agree on which kinds of variation make object recognition harder
View-invariant object recognition is a challenging problem, which has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g. 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best algorithms for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at view-invariant object recognition using the same images and controlling for both the kinds of transformation as well as their magnitude. We used four object categories and images were rendered from 3D computer models. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position. This suggests that humans recognize objects mainly through 2D template matching, rather than by constructing 3D object models, and that DCNNs are not too unreasonable models of human feed-forward vision. Also, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
54,943
1909.03683
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
State-of-the-art models often make use of superficial patterns in the data that do not generalize well to out-of-domain or adversarial settings. For example, textual entailment models often learn that particular key words imply entailment, irrespective of context, and visual question answering models learn to predict prototypical answers, without considering evidence in the image. In this paper, we show that if we have prior knowledge of such biases, we can train a model to be more robust to domain shift. Our method has two stages: we (1) train a naive model that makes predictions exclusively based on dataset biases, and (2) train a robust model as part of an ensemble with the naive one in order to encourage it to focus on other patterns in the data that are more likely to generalize. Experiments on five datasets with out-of-domain test sets show significantly improved robustness in all settings, including a 12 point gain on a changing priors visual question answering dataset and a 9 point gain on an adversarial question answering test set.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
144,575
2412.17787
Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective
Recent Large Vision-Language Models (LVLMs) have shown promising reasoning capabilities on text-rich images from charts, tables, and documents. However, the abundant text within such images may increase the model's sensitivity to language. This raises the need to evaluate LVLM performance on cross-lingual text-rich visual inputs, where the language in the image differs from the language of the instructions. To address this, we introduce XT-VQA (Cross-Lingual Text-Rich Visual Question Answering), a benchmark designed to assess how LVLMs handle language inconsistency between image text and questions. XT-VQA integrates five existing text-rich VQA datasets and a newly collected dataset, XPaperQA, covering diverse scenarios that require faithful recognition and comprehension of visual information despite language inconsistency. Our evaluation of prominent LVLMs on XT-VQA reveals a significant drop in performance for cross-lingual scenarios, even for models with multilingual capabilities. A mutual information analysis suggests that this performance gap stems from cross-lingual questions failing to adequately activate relevant visual information. To mitigate this issue, we propose MVCL-MI (Maximization of Vision-Language Cross-Lingual Mutual Information), where a visual-text cross-lingual alignment is built by maximizing mutual information between the model's outputs and visual information. This is achieved by distilling knowledge from monolingual to cross-lingual settings through KL divergence minimization, where monolingual output logits serve as a teacher. Experimental results on the XT-VQA demonstrate that MVCL-MI effectively reduces the visual-text cross-lingual performance disparity while preserving the inherent capabilities of LVLMs, shedding new light on the potential practice for improving LVLMs. Codes are available at: https://github.com/Stardust-y/XTVQA.git
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
520,112
2412.09200
Accuracy Improvements for Convolutional and Differential Distance Function Approximations
Given a bounded domain, we deal with the problem of estimating the distance function from the internal points of the domain to the boundary of the domain. Convolutional and differential distance estimation schemes are considered and, for both the schemes, accuracy improvements are proposed and evaluated. Asymptotics of Laplace integrals and Taylor series extrapolations are used to achieve the improvements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
516,394
1701.08939
Deep Submodular Functions
We start with an overview of a class of submodular functions called SCMMs (sums of concave composed with non-negative modular functions plus a final arbitrary modular). We then define a new class of submodular functions we call {\em deep submodular functions} or DSFs. We show that DSFs are a flexible parametric family of submodular functions that share many of the properties and advantages of deep neural networks (DNNs). DSFs can be motivated by considering a hierarchy of descriptive concepts over ground elements and where one wishes to allow submodular interaction throughout this hierarchy. Results in this paper show that DSFs constitute a strictly larger class of submodular functions than SCMMs. We show that, for any integer $k>0$, there are $k$-layer DSFs that cannot be represented by a $k'$-layer DSF for any $k'<k$. This implies that, like DNNs, there is a utility to depth, but unlike DNNs, the family of DSFs strictly increase with depth. Despite this, we show (using a "backpropagation" like method) that DSFs, even with arbitrarily large $k$, do not comprise all submodular functions. In offering the above results, we also define the notion of an antitone superdifferential of a concave function and show how this relates to submodular functions (in general), DSFs (in particular), negative second-order partial derivatives, continuous submodularity, and concave extensions. To further motivate our analysis, we provide various special case results from matroid theory, comparing DSFs with forms of matroid rank, in particular the laminar matroid. Lastly, we discuss strategies to learn DSFs, and define the classes of deep supermodular functions, deep difference of submodular functions, and deep multivariate submodular functions, and discuss where these can be useful in applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
67,560
1507.06682
Supervised Collective Classification for Crowdsourcing
Crowdsourcing utilizes the wisdom of crowds for collective classification via information (e.g., labels of an item) provided by labelers. Current crowdsourcing algorithms are mainly unsupervised methods that are unaware of the quality of crowdsourced data. In this paper, we propose a supervised collective classification algorithm that aims to identify reliable labelers from the training data (e.g., items with known labels). The reliability (i.e., weighting factor) of each labeler is determined via a saddle point algorithm. The results on several crowdsourced data show that supervised methods can achieve better classification accuracy than unsupervised methods, and our proposed method outperforms other algorithms.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
45,406
2106.05891
Temporal and Object Quantification Networks
We present Temporal and Object Quantification Networks (TOQ-Nets), a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events. This is done by including reasoning layers that implement finite-domain quantification over objects and time. The structure allows them to generalize directly to input instances with varying numbers of objects in temporal sequences of varying lengths. We evaluate TOQ-Nets on input domains that require recognizing event-types in terms of complex temporal relational patterns. We demonstrate that TOQ-Nets can generalize from small amounts of data to scenarios containing more objects than were present during training and to temporal warpings of input sequences.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
240,270
1910.03320
One-To-Many Multilingual End-to-end Speech Translation
Nowadays, training end-to-end neural models for spoken language translation (SLT) still has to confront with extreme data scarcity conditions. The existing SLT parallel corpora are indeed orders of magnitude smaller than those available for the closely related tasks of automatic speech recognition (ASR) and machine translation (MT), which usually comprise tens of millions of instances. To cope with data paucity, in this paper we explore the effectiveness of transfer learning in end-to-end SLT by presenting a multilingual approach to the task. Multilingual solutions are widely studied in MT and usually rely on ``\textit{target forcing}'', in which multilingual parallel data are combined to train a single model by prepending to the input sequences a language token that specifies the target language. However, when tested in speech translation, our experiments show that MT-like \textit{target forcing}, used as is, is not effective in discriminating among the target languages. Thus, we propose a variant that uses target-language embeddings to shift the input representations in different portions of the space according to the language, so to better support the production of output in the desired target language. Our experiments on end-to-end SLT from English into six languages show important improvements when translating into similar languages, especially when these are supported by scarce data. Further improvements are obtained when using English ASR data as an additional language (up to $+2.5$ BLEU points).
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
148,469
1510.01288
5G Ultra-Reliable Vehicular Communication
Applications enabled by Cooperative Intelligent Transport Systems (C-ITS) represent a major step towards making the road transport system safer and more efficient (green), and thus suited for a sustainable future. Wireless communication between vehicles and road infrastructure is an enabler for high-performance C-ITS applications. State-of-the-art communication systems for supporting low-latency C-ITS applications are based on IEEE 802.11 medium access control (MAC) and physical (PHY) layers. In this paper, we argue that a well-designed 5G system can complement or even replace these systems. We will review the C-ITS application requirements and explain how these are well aligned with the foreseen generic 5G service of ultra-reliable machine-type communication (uMTC). Key technology components suitable for constructing the uMTC service are identified: reliable service composition (RSC) and device-to-device (D2D) links for all-to-all broadcast communication, operational at high mobility and with varying degree of network assistance. Important problems for future studies, including radio-resource management, medium access control, and physical layer challenges, are discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,599
2405.02952
Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice. To accelerate scientific computing, it is a promising approach to use machine learning (especially meta-learning) techniques for selecting hyperparameters of traditional numerical methods. There have been numerous proposals to this direction, but many of them require automatic-differentiable numerical methods. However, in reality, many practical applications still depend on well-established but non-automatic-differentiable legacy codes, which prevents practitioners from applying the state-of-the-art research to their own problems. To resolve this problem, we propose a non-intrusive methodology with a novel gradient estimation technique to combine machine learning and legacy numerical codes without any modification. We theoretically and numerically show the advantage of the proposed method over other baselines and present applications of accelerating established non-automatic-differentiable numerical solvers implemented in PETSc, a widely used open-source numerical software library.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
451,989
1512.07943
Toward a Research Agenda in Adversarial Reasoning: Computational Approaches to Anticipating the Opponent's Intent and Actions
This paper defines adversarial reasoning as computational approaches to inferring and anticipating an enemy's perceptions, intents and actions. It argues that adversarial reasoning transcends the boundaries of game theory and must also leverage such disciplines as cognitive modeling, control theory, AI planning and others. To illustrate the challenges of applying adversarial reasoning to real-world problems, the paper explores the lessons learned in the CADET - a battle planning system that focuses on brigade-level ground operations and involves adversarial reasoning. From this example of current capabilities, the paper proceeds to describe RAID - a DARPA program that aims to build capabilities in adversarial reasoning, and how such capabilities would address practical requirements in Defense and other application areas.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
50,468
2404.11803
TempBEV: Improving Learned BEV Encoders with Combined Image and BEV Space Temporal Aggregation
Autonomous driving requires an accurate representation of the environment. A strategy toward high accuracy is to fuse data from several sensors. Learned Bird's-Eye View (BEV) encoders can achieve this by mapping data from individual sensors into one joint latent space. For cost-efficient camera-only systems, this provides an effective mechanism to fuse data from multiple cameras with different views. Accuracy can further be improved by aggregating sensor information over time. This is especially important in monocular camera systems to account for the lack of explicit depth and velocity measurements. Thereby, the effectiveness of developed BEV encoders crucially depends on the operators used to aggregate temporal information and on the used latent representation spaces. We analyze BEV encoders proposed in the literature and compare their effectiveness, quantifying the effects of aggregation operators and latent representations. While most existing approaches aggregate temporal information either in image or in BEV latent space, our analyses and performance comparisons suggest that these latent representations exhibit complementary strengths. Therefore, we develop a novel temporal BEV encoder, TempBEV, which integrates aggregated temporal information from both latent spaces. We consider subsequent image frames as stereo through time and leverage methods from optical flow estimation for temporal stereo encoding. Empirical evaluation on the NuScenes dataset shows a significant improvement by TempBEV over the baseline for 3D object detection and BEV segmentation. The ablation uncovers a strong synergy of joint temporal aggregation in the image and BEV latent space. These results indicate the overall effectiveness of our approach and make a strong case for aggregating temporal information in both image and BEV latent spaces.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
447,617
1606.07232
Distributed Wireless Power Transfer with Energy Feedback
Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
57,676
2410.08549
Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions
Most existing generative models are limited to learning a single probability distribution from the training data and cannot generalize to novel distributions for unseen data. An architecture that can generate samples from both trained datasets and unseen probability distributions would mark a significant breakthrough. Recently, score-based generative models have gained considerable attention for their comprehensive mode coverage and high-quality image synthesis, as they effectively learn an operator that maps a probability distribution to its corresponding score function. In this work, we introduce the $\emph{Score Neural Operator}$, which learns the mapping from multiple probability distributions to their score functions within a unified framework. We employ latent space techniques to facilitate the training of score matching, which tends to over-fit in the original image pixel space, thereby enhancing sample generation quality. Our trained Score Neural Operator demonstrates the ability to predict score functions of probability measures beyond the training space and exhibits strong generalization performance in both 2-dimensional Gaussian Mixture Models and 1024-dimensional MNIST double-digit datasets. Importantly, our approach offers significant potential for few-shot learning applications, where a single image from a new distribution can be leveraged to generate multiple distinct images from that distribution.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
497,163
2302.08680
Modeling Polypharmacy and Predicting Drug-Drug Interactions using Deep Generative Models on Multimodal Graphs
Latent representations of drugs and their targets produced by contemporary graph autoencoder models have proved useful in predicting many types of node-pair interactions on large networks, including drug-drug, drug-target, and target-target interactions. However, most existing approaches model either the node's latent spaces in which node distributions are rigid or do not effectively capture the interrelations between drugs; these limitations hinder the methods from accurately predicting drug-pair interactions. In this paper, we present the effectiveness of variational graph autoencoders (VGAE) in modeling latent node representations on multimodal networks. Our approach can produce flexible latent spaces for each node type of the multimodal graph; the embeddings are used later for predicting links among node pairs under different edge types. To further enhance the models' performance, we suggest a new method that concatenates Morgan fingerprints, which capture the molecular structures of each drug, with their latent embeddings before passing them to the decoding stage for link prediction. Our proposed model shows competitive results on three multimodal networks: (1) a multimodal graph consisting of drug and protein nodes, (2) a multimodal graph constructed from a subset of the DrugBank database involving drug nodes under different interaction types, and (3) a multimodal graph consisting of drug and cell line nodes. Our source code is publicly available at https://github.com/HySonLab/drug-interactions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
346,140
2302.00189
Detecting Lexical Borrowings from Dominant Languages in Multilingual Wordlists
Language contact is a pervasive phenomenon reflected in the borrowing of words from donor to recipient languages. Most computational approaches to borrowing detection treat all languages under study as equally important, even though dominant languages have a stronger impact on heritage languages than vice versa. We test new methods for lexical borrowing detection in contact situations where dominant languages play an important role, applying two classical sequence comparison methods and one machine learning method to a sample of seven Latin American languages which have all borrowed extensively from Spanish. All methods perform well, with the supervised machine learning system outperforming the classical systems. A review of detection errors shows that borrowing detection could be substantially improved by taking into account donor words with divergent meanings from recipient words.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
343,130
1907.03250
Resource-Efficient Wearable Computing for Real-Time Reconfigurable Machine Learning: A Cascading Binary Classification
Advances in embedded systems have enabled integration of many lightweight sensory devices within our daily life. In particular, this trend has given rise to continuous expansion of wearable sensors in a broad range of applications from health and fitness monitoring to social networking and military surveillance. Wearables leverage machine learning techniques to profile behavioral routine of their end-users through activity recognition algorithms. Current research assumes that such machine learning algorithms are trained offline. In reality, however, wearables demand continuous reconfiguration of their computational algorithms due to their highly dynamic operation. Developing a personalized and adaptive machine learning model requires real-time reconfiguration of the model. Due to stringent computation and memory constraints of these embedded sensors, the training/re-training of the computational algorithms needs to be memory- and computation-efficient. In this paper, we propose a framework, based on the notion of online learning, for real-time and on-device machine learning training. We propose to transform the activity recognition problem from a multi-class classification problem to a hierarchical model of binary decisions using cascading online binary classifiers. Our results, based on Pegasos online learning, demonstrate that the proposed approach achieves 97% accuracy in detecting activities of varying intensities using a limited memory while the power usage of the system is reduced by more than 40%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,821
2310.09929
Prompting Scientific Names for Zero-Shot Species Recognition
Trained on web-scale image-text pairs, Vision-Language Models (VLMs) such as CLIP can recognize images of common objects in a zero-shot fashion. However, it is underexplored how to use CLIP for zero-shot recognition of highly specialized concepts, e.g., species of birds, plants, and animals, for which their scientific names are written in Latin or Greek. Indeed, CLIP performs poorly for zero-shot species recognition with prompts that use scientific names, e.g., "a photo of Lepus Timidus" (which is a scientific name in Latin), because these names are usually not included in CLIP's training set. To improve performance, prior works propose to use large-language models (LLMs) to generate descriptions (e.g., of species color and shape) and additionally use them in prompts. We find that they bring only marginal gains. Differently, we are motivated to translate scientific names (e.g., Lepus Timidus) to common English names (e.g., mountain hare) and use such in the prompts. We find that common names are more likely to be included in CLIP's training set, and prompting them achieves 2$\sim$5 times higher accuracy on benchmarking datasets of fine-grained species recognition.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
400,010
2302.14421
Publicly verifiable delegative democracy with secret voting power
In a democratic setting, we introduce a commitment scheme which allows for transparent validation of transfers and reversible delegations of voting power between citizens without sacrificing their privacy. A unit of voting power is publicly represented by the Merkle root of a tree consisting of its latest owner's public key, a random nonce and the Merkle root of the tree of its previous owner's public key and random nonce and so on. A transition includes the input units, their owner's public keys and signatures, the hashes of their nonces and the output units generated with the new owners' public keys and random nonces. In case of a delegation, the receiver provides the sender with the hashed random nonces and hashed public keys for the output units. In case of a transfer, only the precomputed output units are provided by the receiver. In a reversal, a historical owner reveals the hashes of the nonces and public keys that resulted in the subsequent units. To vote, the owner reveals the actual nonces and public keys.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
true
348,292
2105.05915
Better than BERT but Worse than Baseline
This paper compares BERT-SQuAD and Ab3P on the Abbreviation Definition Identification (ADI) task. ADI inputs a text and outputs short forms (abbreviations/acronyms) and long forms (expansions). BERT with reranking improves over BERT without reranking but fails to reach the Ab3P rule-based baseline. What is BERT missing? Reranking introduces two new features: charmatch and freq. The first feature identifies opportunities to take advantage of character constraints in acronyms and the second feature identifies opportunities to take advantage of frequency constraints across documents.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
234,955
2009.10619
An Exponential Factorization Machine with Percentage Error Minimization to Retail Sales Forecasting
This paper proposes a new approach to sales forecasting for new products with long lead time but short product life cycle. These SKUs are usually sold for one season only, without any replenishments. An exponential factorization machine (EFM) sales forecast model is developed to solve this problem which not only considers SKU attributes, but also pairwise interactions. The EFM model is significantly different from the original Factorization Machines (FM) in two respects: (1) the attribute-level formulation for explanatory variables and (2) the exponential formulation for the positive response variable. The attribute-level formulation excludes infeasible intra-attribute interactions and results in more efficient feature engineering compared with the conventional one-hot encoding, while the exponential formulation is demonstrated to be more effective than the log-transformation for responses that are positive but not skew-distributed. In order to estimate the parameters, percentage error squares (PES) and error squares (ES) are minimized by a proposed adaptive batch gradient descent method over the training set. Real-world data provided by a footwear retailer in Singapore is used for testing the proposed approach. The forecasting performance in terms of both mean absolute percentage error (MAPE) and mean absolute error (MAE) compares favourably with not only off-the-shelf models but also results reported by extant sales and demand forecasting studies. The effectiveness of the proposed approach is also demonstrated by two external public datasets. Moreover, we prove the theoretical relationships between PES and ES minimization, and present an important property of the PES minimization for regression models; that it trains models to underestimate data. This property fits the situation of sales forecasting where unit-holding cost is much greater than the unit-shortage cost.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
196,936
1703.06541
Native Language Identification using Stacked Generalization
Ensemble methods using multiple classifiers have proven to be the most successful approach for the task of Native Language Identification (NLI), achieving the current state of the art. However, a systematic examination of ensemble methods for NLI has yet to be conducted. Additionally, deeper ensemble architectures such as classifier stacking have not been closely evaluated. We present a set of experiments using three ensemble-based models, testing each with multiple configurations and algorithms. This includes a rigorous application of meta-classification models for NLI, achieving state-of-the-art results on three datasets from different languages. We also present the first use of statistical significance testing for comparing NLI systems, showing that our results are significantly better than the previous state of the art. We make available a collection of test set predictions to facilitate future statistical tests.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
70,246
2406.03164
Topological Neural Networks go Persistent, Equivariant, and Continuous
Topological Neural Networks (TNNs) incorporate higher-order relational information beyond pairwise interactions, enabling richer representations than Graph Neural Networks (GNNs). Concurrently, topological descriptors based on persistent homology (PH) are being increasingly employed to augment the GNNs. We investigate the benefits of integrating these two paradigms. Specifically, we introduce TopNets as a broad framework that subsumes and unifies various methods in the intersection of GNNs/TNNs and PH such as (generalizations of) RePHINE and TOGL. TopNets can also be readily adapted to handle (symmetries in) geometric complexes, extending the scope of TNNs and PH to spatial settings. Theoretically, we show that PH descriptors can provably enhance the expressivity of simplicial message-passing networks. Empirically, (continuous and E(n)-equivariant extensions of) TopNets achieve strong performance across diverse tasks, including antibody design, molecular dynamics simulation, and drug property prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
461,122
2105.05714
Representation in Dynamical Systems
The brain is often called a computer and likened to a Turing machine, in part because the mind can manipulate discrete symbols such as numbers. But the brain is a dynamical system, more like a Watt governor than a Turing machine. Can a dynamical system be said to operate using "representations"? This paper argues that it can, although not in the way a digital computer does. Instead, it uses phenomena best described using mathematical concepts such as chaotic attractors to stand in for aspects of the world.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
234,898
2410.02027
Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval
There is a scarcity of multilingual vision-language models that properly account for the perceptual differences that are reflected in image captions across languages and cultures. In this work, through a multimodal, multilingual retrieval case study, we quantify the existing lack of model flexibility. We empirically show performance gaps between training on captions that come from native German perception and captions that have been either machine-translated or human-translated from English into German. To address these gaps, we further propose and evaluate caption augmentation strategies. While we achieve mean recall improvements (+1.3), gaps still remain, indicating an open area of future work for the community.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
494,064
1607.00225
Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource
Word embeddings have recently seen a strong increase in interest as a result of strong performance gains on a variety of tasks. However, most of this research also underlined the importance of benchmark datasets, and the difficulty of constructing these for a variety of language-specific tasks. Still, many of the datasets used in these tasks could prove to be fruitful linguistic resources, allowing for unique observations into language use and variability. In this paper we demonstrate the performance of multiple types of embeddings, created with both count and prediction-based architectures on a variety of corpora, in two language-specific tasks: relation evaluation, and dialect identification. For the latter, we compare unsupervised methods with a traditional, hand-crafted dictionary. With this research, we provide the embeddings themselves, the relation evaluation task benchmark for use in further research, and demonstrate how the benchmarked embeddings prove a useful unsupervised linguistic resource, effectively used in a downstream task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
58,047
1205.0837
Indexing Reverse Top-k Queries
We consider the recently introduced monochromatic reverse top-k queries which ask for, given a new tuple q and a dataset D, all possible top-k queries on D union {q} for which q is in the result. Towards this problem, we focus on designing indexes in two dimensions for repeated (or batch) querying, a novel but practical consideration. We present the insight that by representing the dataset as an arrangement of lines, a critical k-polygon can be identified and used exclusively to respond to reverse top-k queries. We construct an index based on this observation which has guaranteed worst-case query cost that is logarithmic in the size of the k-polygon. We implement our work and compare it to related approaches, demonstrating that our index is fast in practice. Furthermore, we demonstrate through our experiments that a k-polygon is comprised of a small proportion of the original data, so our index structure consumes little disk space.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
15,791
2409.03701
LAST: Language Model Aware Speech Tokenization
Speech tokenization serves as the foundation of speech language models (LMs), enabling them to perform various tasks such as spoken language modeling, text-to-speech, speech-to-text, etc. Most speech tokenizers are trained independently of the LM training process, relying on separate acoustic models and quantization methods. Following such an approach may create a mismatch between the tokenization process and its usage afterward. In this study, we propose a novel approach to training a speech tokenizer by leveraging objectives from pre-trained textual LMs. We advocate for the integration of this objective into the process of learning discrete speech representations. Our aim is to transform features from a pre-trained speech model into a new feature space that enables better clustering for speech LMs. We empirically investigate the impact of various model design choices, including speech vocabulary size and text LM size. Our results demonstrate the proposed tokenization method outperforms the evaluated baselines considering both spoken language modeling and speech-to-text. More importantly, unlike prior work, the proposed method allows the utilization of a single pre-trained LM for processing both speech and text inputs, setting it apart from conventional tokenization approaches.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
486,134
2207.06754
E2-AEN: End-to-End Incremental Learning with Adaptively Expandable Network
Expandable networks have demonstrated their advantages in dealing with the catastrophic forgetting problem in incremental learning. Considering that different tasks may need different structures, recent methods design dynamic structures adapted to different tasks via sophisticated skills. Their routine is to search expandable structures first and then train on the new tasks, which, however, breaks tasks into multiple training stages, leading to suboptimal or excessive computational cost. In this paper, we propose an end-to-end trainable adaptively expandable network named E2-AEN, which dynamically generates lightweight structures for new tasks without any accuracy drop in previous tasks. Specifically, the network contains a series of powerful feature adapters for augmenting the previously learned representations to new tasks, and avoiding task interference. These adapters are controlled via an adaptive gate-based pruning strategy which decides whether the expanded structures can be pruned, making the network structure dynamically changeable according to the complexity of the new tasks. Moreover, we introduce a novel sparsity-activation regularization to encourage the model to learn discriminative features with limited parameters. E2-AEN reduces cost and can be built upon any feed-forward architectures in an end-to-end manner. Extensive experiments on both classification (i.e., CIFAR and VDD) and detection (i.e., COCO, VOC and ICCV2021 SSLAD challenge) benchmarks demonstrate the effectiveness of the proposed method, which achieves remarkable new results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
307,980
2402.17559
GraphMatch: Subgraph Query Processing on FPGAs
Efficiently finding subgraph embeddings in large graphs is crucial for many application areas like biology and social network analysis. Set intersections are the predominant and most challenging aspect of current join-based subgraph query processing systems for CPUs. Previous work has shown the viability of utilizing FPGAs for acceleration of graph and join processing. In this work, we propose GraphMatch, the first general-purpose stand-alone subgraph query processing accelerator based on worst-case optimal joins (WCOJ) that is fully designed for modern, field programmable gate array (FPGA) hardware. For efficient processing of various graph data sets and query graph patterns, it leverages a novel set intersection approach, called AllCompare, tailor-made for FPGAs. We show that this set intersection approach efficiently solves multi-set intersections in subgraph query processing, superior to CPU-based approaches. Overall, GraphMatch achieves a speedup of over 2.68x and 5.16x, compared to the state-of-the-art systems GraphFlow and RapidMatch, respectively.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
433,048
2303.12149
SPARTAN: Self-supervised Spatiotemporal Transformers Approach to Group Activity Recognition
In this paper, we propose a new, simple, and effective Self-supervised Spatio-temporal Transformers (SPARTAN) approach to Group Activity Recognition (GAR) using unlabeled video data. Given a video, we create local and global Spatio-temporal views with varying spatial patch sizes and frame rates. The proposed self-supervised objective aims to match the features of these contrasting views representing the same video to be consistent with the variations in spatiotemporal domains. To the best of our knowledge, the proposed mechanism is one of the first works to alleviate the weakly supervised setting of GAR using the encoders in video transformers. Furthermore, using the advantage of transformer models, our proposed approach supports long-term relationship modeling along spatio-temporal dimensions. The proposed SPARTAN approach performs well on two group activity recognition benchmarks, including NBA and Volleyball datasets, by surpassing the state-of-the-art results by a significant margin in terms of MCA and MPCA metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
353,153
2003.02541
A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain. Such a partial transfer setting is realistic but challenging and existing methods always suffer from two key problems, negative transfer and uncertainty propagation. In this paper, we build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS), respectively. On one hand, negative transfer results in misclassification of target samples to the classes only present in the source domain. To address this issue, BAA pursues the balance between label distributions across domains in a fairly simple manner. Specifically, it randomly leverages a few source samples to augment the smaller target domain during domain alignment so that classes in different domains are symmetric. On the other hand, a source sample would be denoted as uncertain if there is an incorrect class that has a relatively high prediction score, and such uncertainty easily propagates to unlabeled target data around it during alignment, which severely deteriorates adaptation performance. Thus we present AUS that emphasizes uncertain samples and exploits an adaptive weighted complement entropy objective to encourage incorrect classes to have uniform and low prediction scores. Experimental results on multiple benchmarks demonstrate our BA$^3$US surpasses state-of-the-arts for partial domain adaptation tasks. Code is available at \url{https://github.com/tim-learn/BA3US}.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
166,963
1612.04966
Design of Image Matched Non-Separable Wavelet using Convolutional Neural Network
Image-matched nonseparable wavelets can find potential use in many applications including image classification, segmentation, compressive sensing, etc. This paper proposes a novel design methodology that utilizes a convolutional neural network (CNN) to design a two-channel non-separable wavelet matched to a given image. The design is proposed on the quincunx lattice. The loss function of the convolutional neural network is set up with the total squared error between the given input image to the CNN and the reconstructed image at the output of the CNN, leading to perfect reconstruction at the end of training. Simulation results have been shown on some standard images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
65,614
1803.00357
Cross-lingual and Multilingual Speech Emotion Recognition on English and French
Research on multilingual speech emotion recognition faces the problem that most available speech corpora differ from each other in important ways, such as annotation methods or interaction scenarios. These inconsistencies complicate building a multilingual system. We present results for cross-lingual and multilingual emotion recognition on English and French speech data with similar characteristics in terms of interaction (human-human conversations). Further, we explore the possibility of fine-tuning a pre-trained cross-lingual model with only a small number of samples from the target language, which is of great interest for low-resource languages. To gain more insights in what is learned by the deployed convolutional neural network, we perform an analysis on the attention mechanism inside the network.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
91,656
1511.07271
Synthesizing Omnidirectional Antenna Patterns, Received Power and Path Loss from Directional Antennas for 5G Millimeter-Wave Communications
Omnidirectional path loss models are vital for radio-system design in wireless communication systems, as they allow engineers to perform network simulations for systems with arbitrary antenna patterns. At millimeter-wave frequencies, channel measurements are frequently conducted using steerable high-gain directional antennas at both the transmitter and receiver to make up for the significant increase in free space path loss at these frequencies compared to traditional cellular systems that operate at lower frequencies. The omnidirectional antenna pattern, and resulting omnidirectional received power must therefore be synthesized from many unique pointing angles, where the transmit and receive antennas are rotated over many different azimuth and elevation planes. In this paper, the equivalent omnidirectional antenna pattern and omnidirectional received power are synthesized by summing the received powers from all measured unique pointing angles obtained at antenna half-power beamwidth step increments in the azimuth and elevation planes, and this method is validated by demonstrating that the synthesized omnidirectional received power and path loss are independent of antenna beamwidth, through theoretical analyses and millimeter-wave propagation measurements using antennas with different beamwidths. The method in this paper is shown to provide accurate results while enhancing the measurement range substantially through the use of directional antennas.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
49,402
2312.10040
Robust Errant Beam Prognostics with Conditional Modeling for Particle Accelerators
Particle accelerators are complex and comprise thousands of components, with many pieces of equipment running at their peak power. Consequently, particle accelerators can fault and abort operations for numerous reasons. These faults impact the availability of particle accelerators during scheduled run-time and hamper the efficiency and the overall science output. To avoid these faults, we apply anomaly detection techniques to predict any unusual behavior and perform preemptive actions to improve the total availability of particle accelerators. Semi-supervised Machine Learning (ML) based anomaly detection approaches such as autoencoders and variational autoencoders are often used for such tasks. However, supervised ML techniques such as Siamese Neural Network (SNN) models can outperform unsupervised or semi-supervised approaches for anomaly detection by leveraging the label information. One of the challenges specific to anomaly detection for particle accelerators is the data's variability due to system configuration changes. To address this challenge, we employ Conditional Siamese Neural Network (CSNN) models and Conditional Variational Auto Encoder (CVAE) models to predict errant beam pulses at the Spallation Neutron Source (SNS) under different system configuration conditions and compare their performance. We demonstrate that CSNN outperforms CVAE in our application.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,976
1206.6449
Monte Carlo Bayesian Reinforcement Learning
Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in a model and represents uncertainty in model parameters by maintaining a probability distribution over them. This paper presents Monte Carlo BRL (MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a finite set of hypotheses for the model parameter values and forms a discrete partially observable Markov decision process (POMDP) whose state space is a cross product of the state space for the reinforcement learning task and the sampled model parameter space. The POMDP does not require conjugate distributions for belief representation, as earlier works do, and can be solved relatively easily with point-based approximation algorithms. MC-BRL naturally handles both fully and partially observable worlds. Theoretical and experimental results show that the discrete POMDP approximates the underlying BRL task well with guaranteed performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
16,984
1809.03985
On The Alignment Problem In Multi-Head Attention-Based Neural Machine Translation
This work investigates the alignment problem in state-of-the-art multi-head attention models based on the transformer architecture. We demonstrate that alignment extraction in transformer models can be improved by augmenting an additional alignment head to the multi-head source-to-target attention component. This is used to compute sharper attention weights. We describe how to use the alignment head to achieve competitive performance. To study the effect of adding the alignment head, we simulate a dictionary-guided translation task, where the user wants to guide translation using pre-defined dictionary entries. Using the proposed approach, we achieve up to $3.8$ % BLEU improvement when using the dictionary, in comparison to $2.4$ % BLEU in the baseline case. We also propose alignment pruning to speed up decoding in alignment-based neural machine translation (ANMT), which speeds up translation by a factor of $1.8$ without loss in translation performance. We carry out experiments on the shared WMT 2016 English$\to$Romanian news task and the BOLT Chinese$\to$English discussion forum task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
107,441
1804.06579
Semi-Supervised Co-Analysis of 3D Shape Styles from Projected Lines
We present a semi-supervised co-analysis method for learning 3D shape styles from projected feature lines, achieving style patch localization with only weak supervision. Given a collection of 3D shapes spanning multiple object categories and styles, we perform style co-analysis over projected feature lines of each 3D shape and then backproject the learned style features onto the 3D shapes. Our core analysis pipeline starts with mid-level patch sampling and pre-selection of candidate style patches. Projective features are then encoded via patch convolution. Multi-view feature integration and style clustering are carried out under the framework of partially shared latent factor (PSLF) learning, a multi-view feature learning scheme. PSLF achieves effective multi-view feature fusion by distilling and exploiting consistent and complementary feature information from multiple views, while also selecting style patches from the candidates. Our style analysis approach supports both unsupervised and semi-supervised analysis. For the latter, our method accepts both user-specified shape labels and style-ranked triplets as clustering constraints. We demonstrate results from 3D shape style analysis and patch localization as well as improvements over state-of-the-art methods. We also present several applications enabled by our style analysis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
95,335