Dataset schema (column: type, value range):
id: string, length 9 to 16
title: string, length 4 to 278
abstract: string, length 3 to 4.08k
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (binary category labels, in this column order)
__index_level_0__: int64, 0 to 541k
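The 18 boolean columns form a multi-label category vector per row, in the column order given above. As a minimal sketch of how a row's flags decode into arXiv category names (the helper name `decode_labels` is illustrative, not part of the dataset):

```python
# Column order of the 18 boolean label columns, as listed in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(flags):
    """Map a list of 18 booleans to the category names whose flag is set."""
    if len(flags) != len(CATEGORY_COLUMNS):
        raise ValueError(f"expected {len(CATEGORY_COLUMNS)} flags, got {len(flags)}")
    return [name for name, flag in zip(CATEGORY_COLUMNS, flags) if flag]

# Example: the flag vector of the first row below (id 1505.07593).
flags = [False] * 7 + [True] + [False] * 4 + [True, True] + [False] * 4
print(decode_labels(flags))  # -> ['cs.RO', 'cs.CR', 'cs.CY']
```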
1505.07593
Networking technologies for robotic applications
The ongoing progress in networking security, together with the growing range of robot applications in many fields of everyday life, makes robotics a tangible reality in our near future. Accordingly, new advanced services, which depend on the interplay between robotics and cyber security, are playing an important role in the robotics world. This paper addresses the technological implications of security enhancement in the Internet of Things (IoT)-aided robotics domain, where networked robots are expected to work in complex environments. The security enhancements suggested by NIST (the National Institute of Standards and Technology), which create a security template for secure communications over the network, are also discussed.
categories: cs.RO, cs.CR, cs.CY
index: 43,556
1211.4518
Hypothesis Testing in Feedforward Networks with Broadcast Failures
Consider a countably infinite set of nodes, which sequentially make decisions between two given hypotheses. Each node takes a measurement of the underlying truth, observes the decisions from some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts a decision to the other nodes, subject to random erasure in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes in the form of a binary symmetric channel. We are interested in whether there exists a decision strategy consisting of a sequence of likelihood ratio tests such that the node decisions converge in probability to the underlying truth. In both cases, we show that if each node only learns from a bounded number of immediate predecessors, then there does not exist a decision strategy such that the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We also derive the convergence rate of the error probability. In case 2, we show that if each node learns from all of its previous predecessors, then the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions still converge to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.
categories: cs.LG, cs.IT
index: 19,819
2411.08821
Model agnostic local variable importance for locally dependent relationships
Global variable importance measures are commonly used to interpret the results of machine learning models. Local variable importance techniques assess how variables contribute to individual observations. Current methods typically fail to accurately reflect locally dependent relationships between variables and instead focus on marginal importance values. Additionally, they are not natively adapted for multi-class classification problems. We propose a new model-agnostic method for calculating local variable importance, CLIQUE, that captures locally dependent relationships, improves over permutation-based methods, and can be directly applied to multi-category classification problems. Simulated and real-world examples show that CLIQUE emphasizes locally dependent information and properly reduces bias in regions where variables do not affect the response.
categories: cs.LG
index: 508,025
2308.11027
Split Learning for Distributed Collaborative Training of Deep Learning Models in Health Informatics
Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks.
categories: cs.LG, cs.CR
index: 386,974
1503.07341
An Experiment on Using Bayesian Networks for Process Mining
Process mining is a technique that performs an automatic analysis of business processes from a log of events, with the promise of revealing how processes are executed in an organisation. Several models have been proposed to address this problem; here, however, we propose a different approach to dealing with uncertainty. By uncertainty, we mean estimating the probability of some sequence of tasks occurring in a business process, given that only a subset of tasks may be observable. In this sense, this work proposes a new approach to performing process mining using Bayesian Networks. These structures can take into account the probability of a task being present or absent in the business process. Moreover, Bayesian Networks are able to learn these probabilities automatically through mechanisms such as maximum likelihood estimation and EM clustering. Experiments on a loan application case study suggest that Bayesian Networks are adequate structures for process mining and enable a deep analysis of the business process model that can be used to answer queries about that process.
categories: cs.AI
index: 41,465
2205.04187
Finite-State Semi-Markov Channels for Nanopore Sequencing
Nanopore sequencing is an emerging DNA sequencing technology that has been proposed for use in DNA storage systems. We propose the noisy nanopore channel model for nanopore sequencing. This model captures duplications, inter-symbol interference, and noisy measurements by concatenating an i.i.d. duplication channel with a finite-state semi-Markov channel. Compared to previous models, this channel models the dominant distortions of the nanopore while remaining tractable. Anticipating future coding schemes, we derive MAP detection algorithms and estimate achievable rates. Given that finite-state semi-Markov channels are a subclass of channels with memory, we conjecture that the achievable rate of the noisy nanopore channel can be optimised using a variation of the generalised Blahut-Arimoto algorithm.
categories: cs.IT
index: 295,569
2112.13388
The brain as a probabilistic transducer: an evolutionarily plausible network architecture for knowledge representation, computation, and behavior
We offer a general theoretical framework for brain and behavior that is evolutionarily and computationally plausible. The brain in our abstract model is a network of nodes and edges. Although it has some similarities to standard neural network models, as we show, there are some significant differences. Both nodes and edges in our network have weights and activation levels. They act as probabilistic transducers that use a set of relatively simple rules to determine how activation levels and weights are affected by input, generate output, and affect each other. We show that these simple rules enable a learning process that allows the network to represent increasingly complex knowledge, and simultaneously to act as a computing device that facilitates planning, decision-making, and the execution of behavior. By specifying the innate (genetic) components of the network, we show how evolution could endow the network with initial adaptive rules and goals that are then enriched through learning. We demonstrate how the developing structure of the network (which determines what the brain can do and how well) is critically affected by the co-evolved coordination between the mechanisms affecting the distribution of data input and those determining the learning parameters (used in the programs run by nodes and edges). Finally, we consider how the model accounts for various findings in the field of learning and decision making, how it can address some challenging problems in mind and behavior, such as those related to setting goals and self-control, and how it can help understand some cognitive disorders.
categories: cs.AI
index: 273,228
1904.03152
Data-driven Modelling of Dynamical Systems Using Tree Adjoining Grammar and Genetic Programming
State-of-the-art methods for data-driven modelling of non-linear dynamical systems typically involve interactions with an expert user. In order to partially automate the process of modelling physical systems from data, many EA-based approaches have been proposed for model-structure selection, with special focus on non-linear systems. Recently, an approach for data-driven modelling of non-linear dynamical systems using Genetic Programming (GP) was proposed. The novelty of the method was the modelling of noise and the use of Tree Adjoining Grammar to shape the search-space explored by GP. In this paper, we report results achieved by the proposed method on three case studies. Each of the case studies considered here is based on real physical systems. The case studies pose a variety of challenges. In particular, these challenges range over varying amounts of prior knowledge of the true system, amount of data available, the complexity of the dynamics of the system, and the nature of non-linearities in the system. Based on the results achieved for the case studies, we critically analyse the performance of the proposed method.
categories: cs.CL, cs.SY, cs.NE
index: 126,623
2109.09888
Chemical-Reaction-Aware Molecule Representation Learning
Molecule representation learning (MRL) methods aim to embed molecules into a real vector space. However, existing SMILES-based (Simplified Molecular-Input Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input, which have difficulty encoding molecule structure information, or over-emphasize the importance of GNN architectures while neglecting their generalization ability. Here we propose using chemical reactions to assist in learning molecule representations. The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation. This constraint is proven effective in 1) keeping the embedding space well-organized and 2) improving the generalization ability of molecule embeddings. Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures. Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., a 17.4% absolute Hit@1 gain in chemical reaction prediction, a 2.3% absolute AUC gain in molecule property prediction, and an 18.5% relative RMSE gain in graph-edit-distance prediction, over the best baseline method. The code is available at https://github.com/hwwang55/MolR.
categories: cs.LG
index: 256,435
1811.09142
Construction of optimal locally recoverable codes and connection with hypergraph
Recently, it was discovered by several authors that a $q$-ary optimal locally recoverable code, i.e., a locally recoverable code achieving the Singleton-type bound, can have length much bigger than $q+1$. This is quite different from classical $q$-ary MDS codes, where it is conjectured that the code length is upper bounded by $q+1$ (or $q+2$ in some special cases). This discovery inspired some recent studies on the length of optimal locally recoverable codes. It was shown in \cite{LXY} that the length of a $q$-ary optimal locally recoverable code is unbounded for $d=3,4$. Soon after, it was proved that a $q$-ary optimal locally recoverable code with distance $d$ and locality $r$ can have length $\Omega_{d,r}(q^{1 + 1/\lfloor(d-3)/2\rfloor})$. Recently, an explicit construction of $q$-ary optimal locally recoverable codes for distance $d=5,6$ was given in \cite{J18} and \cite{BCGLP}. In this paper, we further investigate the construction of optimal locally recoverable codes along the line of using parity-check matrices. Inspired by classical Reed-Solomon codes and \cite{J18}, we equip the parity-check matrices with a Vandermonde structure. It turns out that a parity-check matrix with the Vandermonde structure that produces an optimal locally recoverable code must obey a certain disjointness property for subsets of $\mathbb{F}_q$. To our surprise, this disjointness condition is equivalent to a well-studied problem in extremal graph theory. With the help of extremal graph theory, we succeed in improving all of the best known results in \cite{GXY} for $d\geq 7$. In addition, for $d=6$, we are able to remove the constraint required in \cite{J18} that $q$ is even.
categories: cs.IT
index: 114,196
1709.02911
Semi-Supervised Instance Population of an Ontology using Word Vector Embeddings
In many modern-day systems, such as information extraction and knowledge management agents, ontologies play a vital role in maintaining the concept hierarchies of the selected domain. However, ontology population has become a problematic process due to its heavy coupling with manual human intervention. With the use of word embeddings in the field of natural language processing, the topic has gained popularity owing to the ability of embeddings to cope with semantic sensitivity. Hence, in this study, we propose a novel approach to semi-supervised ontology population with word embeddings as the basis. We built several models, including traditional benchmark models and new types of models based on word embeddings. Finally, we ensemble them to come up with a synergistic model with better accuracy. We demonstrate that our ensemble model can outperform the individual models.
categories: cs.CL
index: 80,358
1508.07563
Analysis and Practice of Uniquely Decodable One-to-One Code
In this paper, we consider the so-called uniquely decodable one-to-one code (UDOOC) that is formed by inserting a "comma" indicator, termed the unique word (UW), between consecutive one-to-one codewords for separation. Along this research direction, we first investigate several general combinatorial properties of UDOOCs, in particular the enumeration of the number of UDOOC codewords for any (finite) codeword length. Based on the obtained formula on the number of length-n codewords for a given UW, the per-letter average codeword length of UDOOC for the optimal compression of a given source statistics can be computed. Several upper bounds on the average codeword length of such UDOOCs are next established. The analysis on the bounds of average codeword length then leads to two asymptotic bounds for sources having infinitely many alphabets, one of which is achievable and hence tight for a certain source statistics and UW, and the other of which proves the achievability of source entropy rate of UDOOCs when both the block size of source letters for UDOOC compression and UW length go to infinity. Efficient encoding and decoding algorithms for UDOOCs are also given in this paper. Numerical results show that the proposed UDOOCs can potentially result in comparable compression rate to the Huffman code under similar decoding complexity and yield a smaller average codeword length than that of the Lempel-Ziv code, thereby confirming the practicability of UDOOCs.
categories: cs.IT
index: 46,415
2201.02275
Well-Conditioned Linear Minimum Mean Square Error Estimation
Linear minimum mean square error (LMMSE) estimation is often ill-conditioned, suggesting that unconstrained minimization of the mean square error is an inadequate approach to filter design. To address this, we first develop a unifying framework for studying constrained LMMSE estimation problems. Using this framework, we explore an important structural property of constrained LMMSE filters involving a certain prefilter. Optimality is invariant under invertible linear transformations of the prefilter. This parameterizes all optimal filters by equivalence classes of prefilters. We then clarify that merely constraining the rank of the filter does not suitably address the problem of ill-conditioning. Instead, we adopt a constraint that explicitly requires solutions to be well-conditioned in a certain specific sense. We introduce two well-conditioned filters and show that they converge to the unconstrained LMMSE filter as their truncation-power loss goes to zero, at the same rate as the low-rank Wiener filter. We also show extensions to the case of weighted trace and determinant of the error covariance as objective functions. Finally, our quantitative results with historical VIX data demonstrate that our two well-conditioned filters have stable performance while the standard LMMSE filter deteriorates with increasing condition number.
categories: cs.IT, cs.SY
index: 274,490
2403.10830
View-Centric Multi-Object Tracking with Homographic Matching in Moving UAV
In this paper, we address the challenge of multi-object tracking (MOT) in moving Unmanned Aerial Vehicle (UAV) scenarios, where irregular flight trajectories, such as hovering, turning left/right, and moving up/down, lead to significantly greater complexity compared to fixed-camera MOT. Specifically, changes in the scene background not only render traditional frame-to-frame object IOU association methods ineffective but also introduce significant view shifts in the objects, which complicates tracking. To overcome these issues, we propose a novel universal HomView-MOT framework, which for the first time, harnesses the view Homography inherent in changing scenes to solve MOT challenges in moving environments, incorporating Homographic Matching and View-Centric concepts. We introduce a Fast Homography Estimation (FHE) algorithm for rapid computation of Homography matrices between video frames, enabling object View-Centric ID Learning (VCIL) and leveraging multi-view Homography to learn cross-view ID features. Concurrently, our Homographic Matching Filter (HMF) maps object bounding boxes from different frames onto a common view plane for a more realistic physical IOU association. Extensive experiments have proven that these innovations allow HomView-MOT to achieve state-of-the-art performance on prominent UAV MOT datasets VisDrone and UAVDT.
categories: cs.CV
index: 438,384
2311.01646
SemiGPC: Distribution-Aware Label Refinement for Imbalanced Semi-Supervised Learning Using Gaussian Processes
In this paper we introduce SemiGPC, a distribution-aware label refinement strategy based on Gaussian Processes, where the predictions of the model are derived from the labels' posterior distribution. Differently from other buffer-based semi-supervised methods such as CoMatch and SimMatch, our SemiGPC includes a normalization term that addresses imbalances in the global data distribution while maintaining local sensitivity. This explicit control allows SemiGPC to be more robust to confirmation bias, especially under class imbalance. We show that SemiGPC improves performance when paired with different semi-supervised methods such as FixMatch, ReMixMatch, SimMatch and FreeMatch, and with different pre-training strategies including MSN and Dino. We also show that SemiGPC achieves state-of-the-art results under different degrees of class imbalance on standard CIFAR10-LT/CIFAR100-LT, especially in the low-data regime. Using SemiGPC also results in about a 2% avg. accuracy increase compared to a new competitive baseline on the more challenging benchmarks SemiAves, SemiCUB, SemiFungi and Semi-iNat.
categories: cs.LG, cs.CV
index: 405,116
2403.04894
ConstitutionalExperts: Training a Mixture of Principle-based Prompts
Large language models (LLMs) are highly capable at a variety of tasks given the right prompt, but writing one is still a difficult and tedious process. In this work, we introduce ConstitutionalExperts, a method for learning a prompt consisting of constitutional principles (i.e. rules), given a training dataset. Unlike prior methods that optimize the prompt as a single entity, our method incrementally improves the prompt by surgically editing individual principles. We also show that we can improve overall performance by learning unique prompts for different semantic regions of the training data and using a mixture-of-experts (MoE) architecture to route inputs at inference time. We compare our method to other state of the art prompt-optimization techniques across six benchmark datasets. We also investigate whether MoE improves these other techniques. Our results suggest that ConstitutionalExperts outperforms other prompt optimization techniques by 10.9% (F1) and that mixture-of-experts improves all techniques, suggesting its broad applicability.
categories: cs.AI, cs.CL
index: 435,778
2305.18513
SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics
Transformer-based models, such as BERT and ViT, have achieved state-of-the-art results across different natural language processing (NLP) and computer vision (CV) tasks. However, these models are extremely memory intensive during their fine-tuning process, making them difficult to deploy on GPUs with limited memory resources. To address this issue, we introduce a new tool called SlimFit that reduces the memory requirements of these models by dynamically analyzing their training dynamics and freezing less-contributory layers during fine-tuning. The layers to freeze are chosen using a runtime inter-layer scheduling algorithm. SlimFit adopts quantization and pruning for particular layers to balance the load of dynamic activations and to minimize the memory footprint of static activations, where static activations refer to those that cannot be discarded regardless of freezing. This allows SlimFit to freeze up to 95% of layers and reduce the overall on-device GPU memory usage of transformer-based models such as ViT and BERT by an average of 2.2x, across different NLP and CV benchmarks/datasets such as GLUE, SQuAD 2.0, CIFAR-10, CIFAR-100 and ImageNet with an average degradation of 0.2% in accuracy. For such NLP and CV tasks, SlimFit can reduce up to 3.1x the total on-device memory usage with an accuracy degradation of only up to 0.4%. As a result, while fine-tuning of ViT on ImageNet and BERT on SQuAD 2.0 with a batch size of 128 requires 3 and 2 32GB GPUs respectively, SlimFit enables their fine-tuning on a single 32GB GPU without any significant accuracy degradation.
categories: cs.CL
index: 369,126
2502.12178
Direct Preference Optimization-Enhanced Multi-Guided Diffusion Model for Traffic Scenario Generation
Diffusion-based models are recognized for their effectiveness in using real-world driving data to generate realistic and diverse traffic scenarios. These models employ guided sampling to incorporate specific traffic preferences and enhance scenario realism. However, guiding the sampling process to conform to traffic rules and preferences can result in deviations from real-world traffic priors, potentially leading to unrealistic behaviors. To address this challenge, we introduce a multi-guided diffusion model that utilizes a novel training strategy to closely adhere to traffic priors, even when employing various combinations of guides. This model adopts a multi-task learning framework, enabling a single diffusion model to process various guide inputs. For increased guided sampling precision, our model is fine-tuned using the Direct Preference Optimization (DPO) algorithm. This algorithm optimizes preferences based on guide scores, effectively navigating the complexities and challenges associated with the expensive and often non-differentiable gradient calculations during the guided sampling fine-tuning process. Evaluated using the nuScenes dataset, our model provides a strong baseline for balancing realism, diversity and controllability in traffic scenario generation.
categories: cs.LG, cs.MA
index: 534,733
1806.08027
Physical-Layer Security in Cache-Enabled Cooperative Small Cell Networks Against Randomly Distributed Eavesdroppers
This paper explores the physical-layer security in a small cell network (SCN) with cooperative cache-enabled small base stations (SBSs) in the presence of randomly distributed eavesdroppers. We propose a joint design on the caching placement and the physical-layer transmission to improve the secure content delivery probability (SCDP). We first put forward a hybrid caching placement strategy in which a proportion of the cache unit in each SBS is assigned to store the most popular files (MPFs), while the remaining is used to cache the disjoint subfiles (DSFs) of the less popular files in different SBSs as a means to enhance transmission secrecy and content diversity. We then introduce two coordinated multi-point (CoMP) techniques, namely, joint transmission (JT) and orthogonal transmission (OT), to deliver the MPFs and DSFs, respectively. We derive analytical expressions for the SCDP in each transmission scheme, considering both non-colluding and colluding eavesdropping scenarios. Based on the obtained analytical results, we jointly design the optimal transmission rates and the optimal caching assignment for maximizing the overall SCDP. Various insights into the optimal transmission and caching designs are further provided. Numerical results are also presented to verify our theoretical findings and to demonstrate the superiority of the proposed caching and transmission strategies.
categories: cs.IT
index: 101,070
2210.11652
Motion Primitives Based Kinodynamic RRT for Autonomous Vehicle Navigation in Complex Environments
In this work, we have implemented a SLAM-assisted navigation module for a real autonomous vehicle with unknown dynamics. The navigation objective is to reach a desired goal configuration along a collision-free trajectory while adhering to the dynamics of the system. Specifically, we use LiDAR-based Hector SLAM for building the map of the environment, detecting obstacles, and tracking the vehicle's conformance to the trajectory as it passes through various states. For motion planning, we use rapidly exploring random trees (RRTs) on a set of generated motion primitives to search for dynamically feasible trajectory sequences and a collision-free path to the goal. We demonstrate complex maneuvers such as parallel parking, perpendicular parking, and reversing motion by the real vehicle in a constrained environment using the presented approach.
categories: cs.RO
index: 325,392
2110.13492
TUNet: A Block-online Bandwidth Extension Model based on Transformers and Self-supervised Pretraining
We introduce a block-online variant of the temporal feature-wise linear modulation (TFiLM) model to achieve bandwidth extension. The proposed architecture simplifies the UNet backbone of the TFiLM to reduce inference time and employs an efficient transformer at the bottleneck to alleviate performance degradation. We also utilize self-supervised pretraining and data augmentation to enhance the quality of bandwidth extended signals and reduce the sensitivity with respect to downsampling methods. Experiment results on the VCTK dataset show that the proposed method outperforms several recent baselines in both intrusive and non-intrusive metrics. Pretraining and filter augmentation also help stabilize and enhance the overall performance.
categories: cs.SD, cs.LG
index: 263,203
1708.02420
Mining fine-grained opinions on closed captions of YouTube videos with an attention-RNN
Video reviews are the natural evolution of written product reviews. In this paper we target this phenomenon and introduce the first dataset created from closed captions of YouTube product review videos as well as a new attention-RNN model for aspect extraction and joint aspect extraction and sentiment classification. Our model provides state-of-the-art performance on aspect extraction without requiring the usage of hand-crafted features on the SemEval ABSA corpus, while it outperforms the baseline on the joint task. In our dataset, the attention-RNN model outperforms the baseline for both tasks, but we observe important performance drops for all models in comparison to SemEval. These results, as well as further experiments on domain adaptation for aspect extraction, suggest that differences between speech and written text, which have been discussed extensively in the literature, also extend to the domain of product reviews, where they are relevant for fine-grained opinion mining.
categories: cs.CL
index: 78,587
2311.00987
CML-MOTS: Collaborative Multi-task Learning for Multi-Object Tracking and Segmentation
The advancement of computer vision has pushed visual analysis tasks from still images to the video domain. In recent years, video instance segmentation, which aims to track and segment multiple objects in video frames, has drawn much attention for its potential applications in various emerging areas such as autonomous driving, intelligent transportation, and smart retail. In this paper, we propose an effective framework for instance-level visual analysis on video frames, which can simultaneously conduct object detection, instance segmentation, and multi-object tracking. The core idea of our method is collaborative multi-task learning which is achieved by a novel structure, named associative connections among detection, segmentation, and tracking task heads in an end-to-end learnable CNN. These additional connections allow information propagation across multiple related tasks, so as to benefit these tasks simultaneously. We evaluate the proposed method extensively on KITTI MOTS and MOTS Challenge datasets and obtain quite encouraging results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
404,858
1310.0132
The 4-error linear complexity distribution for $2^n$-periodic binary sequences
By using the sieve method of combinatorics, we study $k$-error linear complexity distribution of $2^n$-periodic binary sequences based on Games-Chan algorithm. For $k=4,5$, the complete counting functions on the $k$-error linear complexity of $2^n$-periodic balanced binary sequences (with linear complexity less than $2^n$) are presented. As a consequence of the result, the complete counting functions on the 4-error linear complexity of $2^n$-periodic binary sequences (with linear complexity $2^n$ or less than $2^n$) are obvious. Generally, the complete counting functions on the $k$-error linear complexity of $2^n$-periodic binary sequences can be obtained with a similar approach.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
27,455
2305.07955
On Semi-Supervised Estimation of Distributions
We study the problem of estimating the joint probability mass function (pmf) over two random variables. In particular, the estimation is based on the observation of $m$ samples containing both variables and $n$ samples missing one fixed variable. We adopt the minimax framework with $l^p_p$ loss functions, and we show that the composition of uni-variate minimax estimators achieves minimax risk with the optimal first-order constant for $p \ge 2$, in the regime $m = o(n)$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
364,096
1303.5919
Heart Disease Prediction System using Associative Classification and Genetic Algorithm
Associative classification is a recent and rewarding technique which integrates association rule mining and classification into a model for prediction and achieves maximum accuracy. Associative classifiers are especially fit to applications where maximum accuracy is desired of a prediction model. There are many domains, such as medicine, where the maximum accuracy of the model is desired. Heart disease is the single largest cause of death in developed countries and one of the main contributors to disease burden in developing countries. Mortality data from the registrar general of India shows that heart diseases are a major cause of death in India, and in Andhra Pradesh coronary heart disease causes about 30% of deaths in rural areas. Hence there is a need to develop a decision support system for predicting heart disease of a patient. In this paper we propose an efficient associative classification algorithm using a genetic approach for heart disease prediction. The main motivation for using a genetic algorithm in the discovery of high-level prediction rules is that the discovered rules are highly comprehensible, having high predictive accuracy and high interestingness values. Experimental results show that most of the classifier rules help in the best prediction of heart disease, which even helps doctors in their diagnosis decisions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,225
2102.11033
IFoodCloud: A Platform for Real-time Sentiment Analysis of Public Opinion about Food Safety in China
The Internet contains a wealth of public opinion on food safety, including views on food adulteration, food-borne diseases, agricultural pollution, irregular food distribution, and food production issues. In order to systematically collect and analyse public opinion on food safety, we developed IFoodCloud, a platform for the real-time sentiment analysis of public opinion on food safety in China. It collects data from more than 3,100 public sources that can be used to explore public opinion trends, public sentiment, and regional attention differences of food safety incidents. At the same time, we constructed a sentiment classification model using multiple lexicon-based and deep learning-based algorithms integrated with IFoodCloud that provides an unprecedentedly rapid means of understanding the public sentiment toward specific food safety incidents. Our best model's F1-score achieved 0.9737. Further, three real-world cases are presented to demonstrate the platform's application and robustness. IFoodCloud could be considered a valuable tool for promoting the scientisation of food safety supervision and risk communication.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
221,303
1701.04439
Dandelion: Redesigning the Bitcoin Network for Anonymity
Bitcoin and other cryptocurrencies have surged in popularity over the last decade. Although Bitcoin does not claim to provide anonymity for its users, it enjoys a public perception of being a `privacy-preserving' financial system. In reality, cryptocurrencies publish users' entire transaction histories in plaintext, albeit under a pseudonym; this is required for transaction validation. Therefore, if a user's pseudonym can be linked to their human identity, the privacy fallout can be significant. Recently, researchers have demonstrated deanonymization attacks that exploit weaknesses in the Bitcoin network's peer-to-peer (P2P) networking protocols. In particular, the P2P network currently forwards content in a structured way that allows observers to deanonymize users. In this work, we redesign the P2P network from first principles with the goal of providing strong, provable anonymity guarantees. We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility. We also provide a practical implementation of Dandelion.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
66,852
1108.3286
A Lookahead algorithm to compute Betweenness Centrality
The Betweenness Centrality index is a very important centrality measure in the analysis of a large number of networks. Despite its significance in a lot of interdisciplinary applications, its computation is very expensive. The fastest known algorithm at present is by Brandes, which takes O(|V||E|) time. In real-life scenarios, it happens very frequently that a single vertex or a set of vertices is sequentially removed from a network. The recomputation of Betweenness Centrality on removing a single vertex becomes expensive when the Brandes algorithm is repeated. It is to be understood that as the size of the network increases, Betweenness Centrality calculation becomes more and more expensive, and even a decrease in running time by a small fraction results in a phenomenal decrease in the actual running time. The algorithm introduced in this paper achieves the same in a significantly lesser time than repetition of the Brandes algorithm. The algorithm can also be extended to a general case.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
11,692
2309.00350
How You Split Matters: Data Leakage and Subject Characteristics Studies in Longitudinal Brain MRI Analysis
Deep learning models have revolutionized the field of medical image analysis, offering significant promise for improved diagnostics and patient care. However, their performance can be misleadingly optimistic due to a hidden pitfall called 'data leakage'. In this study, we investigate data leakage in 3D medical imaging, specifically using 3D Convolutional Neural Networks (CNNs) for brain MRI analysis. While 3D CNNs appear less prone to leakage than 2D counterparts, improper data splitting during cross-validation (CV) can still pose issues, especially with longitudinal imaging data containing repeated scans from the same subject. We explore the impact of different data splitting strategies on model performance for longitudinal brain MRI analysis and identify potential data leakage concerns. GradCAM visualization helps reveal shortcuts in CNN models caused by identity confounding, where the model learns to identify subjects along with diagnostic features. Our findings, consistent with prior research, underscore the importance of subject-wise splitting and evaluating our model further on hold-out data from different subjects to ensure the integrity and reliability of deep learning models in medical image analysis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,280
2006.05281
Extensive Error Analysis and a Learning-Based Evaluation of Medical Entity Recognition Systems to Approximate User Experience
When comparing entities extracted by a medical entity recognition system with gold standard annotations over a test set, two types of mismatches might occur, label mismatch or span mismatch. Here we focus on span mismatch and show that its severity can vary from a serious error to a fully acceptable entity extraction due to the subjectivity of span annotations. For a domain-specific BERT-based NER system, we showed that 25% of the errors have the same labels and overlapping span with gold standard entities. We collected expert judgement which shows more than 90% of these mismatches are accepted or partially accepted by the user. Using the training set of the NER system, we built a fast and lightweight entity classifier to approximate the user experience of such mismatches through accepting or rejecting them. The decisions made by this classifier are used to calculate a learning-based F-score which is shown to be a better approximation of a forgiving user's experience than the relaxed F-score. We demonstrated the results of applying the proposed evaluation metric for a variety of deep learning medical entity recognition models trained with two datasets.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
181,012
2212.01806
Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks
Adversarial attacks can easily fool object recognition systems based on deep neural networks (DNNs). Although many defense methods have been proposed in recent years, most of them can still be adaptively evaded. One reason for the weak adversarial robustness may be that DNNs are only supervised by category labels and do not have part-based inductive bias like the recognition process of humans. Inspired by a well-known theory in cognitive psychology -- recognition-by-components, we propose a novel object recognition model ROCK (Recognizing Object by Components with human prior Knowledge). It first segments parts of objects from images, then scores part segmentation results with predefined human prior knowledge, and finally outputs prediction based on the scores. The first stage of ROCK corresponds to the process of decomposing objects into parts in human vision. The second stage corresponds to the decision process of the human brain. ROCK shows better robustness than classical recognition models across various attack settings. These results encourage researchers to rethink the rationality of currently widely-used DNN-based object recognition models and explore the potential of part-based models, once important but recently ignored, for improving robustness.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
334,582
2004.13821
Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
In this paper, we present a two-stage model for multi-hop question answering. The first stage is a hierarchical graph network, which is used to reason over multi-hop questions and is capable of capturing different levels of granularity using the natural structure (i.e., paragraphs, questions, sentences and entities) of documents. The reasoning process is converted to a node classification task (i.e., paragraph nodes and sentence nodes). The second stage is a language model fine-tuning task. In a word, stage one uses a graph neural network to select and concatenate support sentences as one paragraph, and stage two finds the answer span in the language model fine-tuning paradigm.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
174,664
2201.09792
Patches Are All You Need?
Although convolutional networks have been the dominant architecture for vision tasks for many years, recent experiments have shown that Transformer-based models, most notably the Vision Transformer (ViT), may exceed their performance in some settings. However, due to the quadratic runtime of the self-attention layers in Transformers, ViTs require the use of patch embeddings, which group together small regions of the image into single input features, in order to be applied to larger image sizes. This raises a question: Is the performance of ViTs due to the inherently-more-powerful Transformer architecture, or is it at least partly due to using patches as the input representation? In this paper, we present some evidence for the latter: specifically, we propose the ConvMixer, an extremely simple model that is similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it operates directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network. In contrast, however, the ConvMixer uses only standard convolutions to achieve the mixing steps. Despite its simplicity, we show that the ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and data set sizes, in addition to outperforming classical vision models such as the ResNet. Our code is available at https://github.com/locuslab/convmixer.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
276,785
2006.05664
OpEvo: An Evolutionary Method for Tensor Operator Optimization
Training and inference efficiency of deep neural networks highly rely on the performance of tensor operators on hardware platforms. Manually optimizing tensor operators has limitations in terms of supporting new operators or hardware platforms. Therefore, automatically optimizing device code configurations of tensor operators is getting increasingly attractive. However, current methods for tensor operator optimization usually suffer from poor sample-efficiency due to the combinatorial search space. In this work, we propose a novel evolutionary method, OpEvo, which efficiently explores the search spaces of tensor operators by introducing a topology-aware mutation operation based on q-random walk to leverage the topological structures over the search spaces. Our comprehensive experiment results show that compared with state-of-the-art (SOTA) methods OpEvo can find the best configuration with the lowest variance and least efforts in the number of trials and wall-clock time. All code of this work is available online.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
181,158
2009.03191
COVCOR20 at WNUT-2020 Task 2: An Attempt to Combine Deep Learning and Expert rules
In the scope of WNUT-2020 Task 2, we developed various text classification systems, using deep learning models and one using linguistically informed rules. While both of the deep learning systems outperformed the system using the linguistically informed rules, we found that through the integration of (the output of) the three systems a better performance could be achieved than the standalone performance of each approach in a cross-validation setting. However, on the test data the performance of the integration was slightly lower than our best performing deep learning model. These results hardly indicate any progress in the line of integrating machine learning and expert-rule-driven systems. We expect that the release of the annotation manuals and gold labels of the test data after this workshop will shed light on these perplexing results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
194,766
2412.06689
Impact of Privacy Parameters on Deep Learning Models for Image Classification
The project aims to develop differentially private deep learning models for image classification on CIFAR-10 datasets \cite{cifar10} and analyze the impact of various privacy parameters on model accuracy. We have implemented five different deep learning models, namely ConvNet, ResNet18, EfficientNet, ViT, and DenseNet121 and three supervised classifiers namely K-Nearest Neighbors, Naive Bayes Classifier and Support Vector Machine. We evaluated the performance of these models under varying settings. Our best performing model to date is EfficientNet with test accuracy of $59.63\%$ with the following parameters (Adam optimizer, batch size 256, epoch size 100, epsilon value 5.0, learning rate $1e-3$, clipping threshold 1.0, and noise multiplier 0.912).
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
515,331
2009.02807
A Hierarchical Architecture for Human-Robot Cooperation Processes
In this paper we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely for perception, representation, and action. Building up on previous work, here we focus on (i) an in-the-loop decision making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and (ii) the representation level, integrating a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic. The architecture is accompanied by experiments including collaborative furniture assembly and object positioning tasks.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
194,667
1806.01610
Training Generative Reversible Networks
Generative models with an encoding component such as autoencoders currently receive great interest. However, training of autoencoders is typically complicated by the need to train a separate encoder and decoder model that have to be enforced to be reciprocal to each other. To overcome this problem, by-design reversible neural networks (RevNets) had been previously used as generative models either directly optimizing the likelihood of the data under the model or using an adversarial approach on the generated data. Here, we instead investigate their performance using an adversary on the latent space in the adversarial autoencoder framework. We investigate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces with similar quality as Variational Autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proof-of-principle experiment on the MNIST dataset suggesting that adversary-free trained RevNets can discover meaningful latent dimensions without pre-specifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings. Source code for this study is at https://github.com/robintibor/generative-reversible
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
99,592
2408.10706
Performance Analysis of Physical Layer Security: From Far-Field to Near-Field
The secrecy performance in both near-field and far-field communications is analyzed using two fundamental metrics: the secrecy capacity under a power constraint and the minimum power requirement to achieve a specified secrecy rate target. 1) For the secrecy capacity, a closed-form expression is derived under a discrete-time memoryless setup. This expression is further analyzed under several far-field and near-field channel models, and the capacity scaling law is revealed by assuming an infinitely large transmit array and an infinitely high power. A novel concept of "depth of insecurity" is proposed to evaluate the secrecy performance achieved by near-field beamfocusing. It is demonstrated that increasing the number of transmit antennas reduces this depth and thus improves the secrecy performance. 2) Regarding the minimum required power, a closed-form expression is derived and analyzed within far-field and near-field scenarios. Asymptotic analyses are performed by setting the number of transmit antennas to infinity to unveil the power scaling law. Numerical results are provided to demonstrate that: i) compared to far-field communications, near-field communications expand the areas where secure transmission is feasible, specifically when the eavesdropper is located in the same direction as the intended receiver; ii) as the number of transmit antennas increases, neither the secrecy capacity nor the minimum required power scales or vanishes unboundedly, adhering to the principle of energy conservation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
481,988
2209.12842
Risk-Aware Model Predictive Path Integral Control Using Conditional Value-at-Risk
In this paper, we present a novel Model Predictive Control method for autonomous robots subject to arbitrary forms of uncertainty. The proposed Risk-Aware Model Predictive Path Integral (RA-MPPI) control utilizes the Conditional Value-at-Risk (CVaR) measure to generate optimal control actions for safety-critical robotic applications. Different from most existing Stochastic MPCs and CVaR optimization methods that linearize the original dynamics and formulate control tasks as convex programs, the proposed method directly uses the original dynamics without restricting the form of the cost functions or the noise. We apply the novel RA-MPPI controller to an autonomous vehicle to perform aggressive driving maneuvers in cluttered environments. Our simulations and experiments show that the proposed RA-MPPI controller can achieve about the same lap time with significantly fewer collisions compared to the baseline MPPI controller. The proposed controller performs on-line computation at an update frequency of up to 80Hz, utilizing modern Graphics Processing Units (GPUs) to multi-thread the generation of trajectories as well as the CVaR values.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
319,680
1611.04495
Detection Performance with Many Antennas Available for Bandwidth-Efficient Uplink Transmission in MU-MIMO Systems
This paper is concerned with SC/FDE for bandwidth-efficient uplink block transmission, with QAM schemes, in a MU MIMO system. The number of BS receiver antennas is assumed to be large, but not necessarily much larger than the overall number of transmitter antennas jointly using the same time/frequency resource at MT. In this context, we consider several detection techniques and evaluate, in detail, the corresponding detection performances (discussed with the help of selected performance bounds), for a range of values regarding the number of available BS receiver antennas. From our performance results, we conclude that simple linear detection techniques, designed to avoid the need of complex matrix inversions, can lead to unacceptably high error floor levels. However, by combining the use of such simple linear detectors with an appropriate interference cancellation procedure - within an iterative DF technique -, a close approximation to the SIMO MFB performance can be achieved after a few iterations, even for 64-QAM schemes, when the number of BS antennas is, at least, five times higher than the number of antennas which are jointly used at the user terminals.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
63,851
1601.07377
Optimal Energy Management for SmartGrids Considering Thermal Load and Dynamic Pricing
More active participation of the demand side and efficient integration of distributed energy resources (DERs) such as electric vehicles (EVs), energy storage (ES), and renewable energy sources (RESs) into the existing power systems are important design objectives of the future smart grid. In general, effective demand side management (DSM) would benefit both system operators (e.g., peak demand reduction) and electricity customers (e.g., cost saving). For building and home energy scheduling design, heating, ventilation, and air-conditioning (HVAC) systems play a very important role since HVAC power consumption is very significant and the HVAC load can be scheduled flexibly while still maintaining user comfort requirements. This thesis focuses on energy scheduling design for two different application scenarios where HVAC and various DERs are considered to optimize the benefits of electric users.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
51,424
1312.6834
Face Detection from still and Video Images using Unsupervised Cellular Automata with K means clustering algorithm
Pattern recognition problems rely upon the features inherent in the patterns of images. Face detection and recognition is one of the challenging research areas in the field of computer vision. In this paper, we present a method to identify skin pixels from still and video images using skin color. Face regions are identified from this skin pixel region. Facial features such as eyes, nose and mouth are then located. Faces are recognized from color images using an RBF-based neural network. Unsupervised Cellular Automata with the K-means clustering algorithm is used to locate different facial elements. Orientation is corrected by using the eyes. Parameters like inter-eye distance, nose length, mouth position, Discrete Cosine Transform (DCT) coefficients etc. are computed and used for a Radial Basis Function (RBF) based neural network. This approach reliably works for face sequences with orientation in head, expressions etc.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
29,408
2105.07530
Advances in Multi-Variate Analysis Methods for New Physics Searches at the Large Hadron Collider
Between the years 2015 and 2019, members of the Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, as well as developed entirely new ones. Many of those methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with the evaluation of their performances.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
235,461
2201.13309
Accelerating Laue Depth Reconstruction Algorithm with CUDA
The Laue diffraction microscopy experiment uses the polychromatic Laue micro-diffraction technique to examine the structure of materials with sub-micron spatial resolution in all three dimensions. During this experiment, local crystallographic orientations, orientation gradients and strains are measured as properties which will be recorded in HDF5 image format. The recorded images will be processed with a depth reconstruction algorithm for future data analysis. But the current depth reconstruction algorithm consumes considerable processing time and might take up to 2 weeks for reconstructing data collected from one single experiment. To improve the depth reconstruction computation speed, we propose a scalable GPU program solution on the depth reconstruction problem in this paper. The test result shows that the running time would be 10 to 20 times faster than the prior CPU design for various size of input data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
277,943
2409.00007
Federated Sequence-to-Sequence Learning for Load Disaggregation from Unbalanced Low-Resolution Smart Meter Data
The importance of Non-Intrusive Load Monitoring (NILM) has been increasingly recognized, given that NILM can enhance energy awareness and provide valuable insights for energy program design. Many existing NILM methods often rely on specialized devices to retrieve high-sampling complex signal data and focus on the high consumption appliances, hindering their applicability in real-world applications, especially when smart meters only provide low-resolution active power readings for households. In this paper, we propose a new approach using easily accessible weather data to achieve load disaggregation for a total of 12 appliances, encompassing both high and low consumption, in scenarios with very low sampling rates (hourly). Moreover, we develop a federated learning (FL) model that builds upon a sequence-to-sequence model to fulfil load disaggregation without data sharing. Our experiments demonstrate that the FL framework, L2GD, can effectively handle statistical heterogeneity and avoid overfitting problems. By incorporating weather data, our approach significantly improves the performance of NILM.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
484,712
2202.12847
Building a 3-Player Mahjong AI using Deep Reinforcement Learning
Mahjong is a popular multi-player imperfect-information game developed in China in the late 19th-century, with some very challenging features for AI research. Sanma, being a 3-player variant of the Japanese Riichi Mahjong, possesses unique characteristics including fewer tiles and, consequently, a more aggressive playing style. It is thus challenging and of great research interest in its own right, but has not yet been explored. In this paper, we present Meowjong, an AI for Sanma using deep reinforcement learning. We define an informative and compact 2-dimensional data structure for encoding the observable information in a Sanma game. We pre-train 5 convolutional neural networks (CNNs) for Sanma's 5 actions -- discard, Pon, Kan, Kita and Riichi, and enhance the major action's model, namely the discard model, via self-play reinforcement learning using the Monte Carlo policy gradient method. Meowjong's models achieve test accuracies comparable with AIs for 4-player Mahjong through supervised learning, and gain a significant further enhancement from reinforcement learning. Being the first ever AI in Sanma, we claim that Meowjong stands as a state-of-the-art in this game.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
282,383
2012.01699
Content-Adaptive Pixel Discretization to Improve Model Robustness
Preprocessing defenses such as pixel discretization are appealing to remove adversarial attacks due to their simplicity. However, they have been shown to be ineffective except on simple datasets like MNIST. We hypothesize that existing discretization approaches failed because using a fixed codebook for the entire dataset limits their ability to balance image representation and codeword separability. We first formally prove that adaptive codebooks can provide stronger robustness guarantees than fixed codebooks as a preprocessing defense on some datasets. Based on that insight, we propose a content-adaptive pixel discretization defense called Essential Features, which discretizes the image to a per-image adaptive codebook to reduce the color space. We then find that Essential Features can be further optimized by applying adaptive blurring before the discretization to push perturbed pixel values back to their original value before determining the codebook. Against adaptive attacks, we show that content-adaptive pixel discretization extends the range of datasets that benefit in terms of both L_2 and L_infinity robustness where previously fixed codebooks were found to have failed. Our findings suggest that content-adaptive pixel discretization should be part of the repertoire for making models robust.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
209,498
1109.6841
Learning Dependency-Based Compositional Semantics
Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to learn a semantic parser from question-answer pairs instead, where the logical form is modeled as a latent variable. Motivated by this challenging learning problem, we develop a new semantic formalism, dependency-based compositional semantics (DCS), which has favorable linguistic, statistical, and computational properties. We define a log-linear distribution over DCS logical forms and estimate the parameters using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, our system outperforms all existing state-of-the-art systems, despite using no annotated logical forms.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
12,415
1601.08190
On MBR codes with replication
An early paper by Rashmi et al. presented the construction of an $(n,k,d=n-1)$ MBR regenerating code featuring the inherent double replication of all code symbols and repair-by-transfer (RBT), both of which are important in practice. We first show that no MBR code can contain even a single code symbol that is replicated more than twice. We then go on to present two new families of MBR codes which feature double replication of all systematic message symbols. The codes also possess a set of $d$ nodes whose contents include the message symbols and which can be repaired through help-by-transfer (HBT). As a corollary, we obtain systematic RBT codes for the case $d=(n-1)$ that possess inherent double replication of all code symbols and have a field size of $O(n)$ in comparison with the general, $O(n^2)$ field size requirement of the earlier construction by Rashmi et al. For the cases $(k=d=n-2)$ or $(k+1=d=n-2)$, the field size can be reduced to $q=2$, and hence the codes can be binary. We also give a necessary and sufficient condition for the existence of MBR codes having double replication of all code symbols and also suggest techniques which will enable an arbitrary MBR code to be converted to one with double replication of all code symbols.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,509
2102.10958
Bilingual Language Modeling, A transfer learning technique for Roman Urdu
Pretrained language models are now of widespread use in Natural Language Processing. Despite their success, applying them to Low Resource languages is still a huge challenge. Although Multilingual models hold great promise, applying them to specific low-resource languages e.g. Roman Urdu can be excessive. In this paper, we show how the code-switching property of languages may be used to perform cross-lingual transfer learning from a corresponding high resource language. We also show how this transfer learning technique termed Bilingual Language Modeling can be used to produce better performing models for Roman Urdu. To enable training and experimentation, we also present a collection of novel corpora for Roman Urdu extracted from various sources and social networking sites, e.g. Twitter. We train Monolingual, Multilingual, and Bilingual models of Roman Urdu - the proposed bilingual model achieves 23% accuracy compared to the 2% and 11% of the monolingual and multilingual models respectively in the Masked Language Modeling (MLM) task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
221,273
2410.07389
MIMO MAC Empowered by Reconfigurable Intelligent Surfaces: Capacity Region and Large System Analysis
Smart wireless environments enabled by multiple distributed Reconfigurable Intelligent Surfaces (RISs) have recently attracted significant research interest as a wireless connectivity paradigm for sixth Generation (6G) networks. In this paper, using random matrix theory methods, we calculate the mean of the sum Mutual Information (MI) for the correlated Multiple-Input Multiple-Output (MIMO) Multiple Access Channel (MAC) in the presence of multiple RISs, in the large-antenna number limit. We thus obtain the capacity region boundaries, after optimizing over the tunable RISs' phase configurations. Furthermore, we obtain a closed-form expression for the variance of the sum-MI metric, which together with the mean provides a tight Gaussian approximation for the outage probability. The derived results become relevant in the presence of fast-fading, when channel estimation is extremely challenging. Our numerical investigations showcased that, when the angle-spread in the neighborhood of each RIS is small, which is expected for higher carrier frequencies, the communication link benefits strongly from optimizing the ergodic MI of the multiple RISs. We also found that increasing the number of transmitting users in such MIMO-MAC-RIS systems results in rapidly diminishing sum-MI gains, hence providing limits on the number of users that can be efficiently served by a given RIS.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
496,599
2303.14392
On Improved Commutation for Moving-Magnet Planar Actuators
The demand for high-precision and high-throughput motion control systems has increased significantly in recent years. The use of moving-magnet planar actuators (MMPAs) is gaining popularity due to their advantageous characteristics, such as complete environmental decoupling and reduction of stage mass. Nonetheless, model-based commutation techniques for MMPAs are compromised by misalignment between the mover and coil array and mismatch between the ideal electromagnetic model and the physical system, often leading to decreased system performance. To address this issue, a novel improved commutation approach is proposed in this paper, which is applicable to general planar motor applications, by means of dynamic regulation of the position dependence of the ideal model-based commutation algorithm, which allows for attenuation of magnetic misalignment, manufacturing inaccuracies and other unmodelled phenomena. The effectiveness of the proposed approach is validated through experiments using a state-of-the-art moving-magnet planar actuator prototype.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
354,070
2311.13809
Responsive Hydrogel-based Modular Microrobots for Multi-functional Micromanipulation
Microrobots show great potential in biomedical applications such as drug delivery and cell manipulations. However, current microrobots are mostly fabricated as a single entity and type and the tasks they can perform are limited. In this paper, modular microrobots, with an overall size of 120 $\mu$m $\times$ 200 $\mu$m, are proposed with responsive mating components, made from stimuli-responsive hydrogels, and application specific end-effectors for microassembly tasks. The modular microrobots are fabricated based on photolithography and two-photon polymerization together or separately. Two types of modular microrobots are created based on the location of the responsive mating component. The first type of modular microrobot has a mating component that can shrink upon stimulation while the second type has a double bilayer structure that can realize an open and close motion. The exchange of end-effectors with an identical actuation base is demonstrated for both types of microrobots. Finally, different manipulation tasks are performed with different types of end-effectors.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
409,879
2110.07882
PolyNet: Polynomial Neural Network for 3D Shape Recognition with PolyShape Representation
3D shape representation and its processing have substantial effects on 3D shape recognition. The polygon mesh as a 3D shape representation has many advantages in computer graphics and geometry processing. However, there are still some challenges for the existing deep neural network (DNN)-based methods on polygon mesh representation, such as handling the variations in the degree and permutations of the vertices and their pairwise distances. To overcome these challenges, we propose a DNN-based method (PolyNet) and a specific polygon mesh representation (PolyShape) with a multi-resolution structure. PolyNet contains two operations: (1) a polynomial convolution (PolyConv) operation with learnable coefficients, which learns continuous distributions as the convolutional filters to share the weights across different vertices, and (2) a polygonal pooling (PolyPool) procedure by utilizing the multi-resolution structure of PolyShape to aggregate the features in a much lower dimension. Our experiments demonstrate the strength and the advantages of PolyNet on both 3D shape classification and retrieval tasks compared to existing polygon mesh-based methods and its superiority in classifying graph representations of images. The code is publicly available from https://myavartanoo.github.io/polynet/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
261,169
1812.02802
End-to-End Streaming Keyword Spotting
We present a system for keyword spotting that, except for a frontend component for feature generation, is entirely contained in a deep neural network (DNN) model trained "end-to-end" to predict the presence of the keyword in a stream of audio. The main contributions of this work are, first, an efficient memoized neural network topology that aims at making better use of the parameters and associated computations in the DNN by holding a memory of previous activations distributed over the depth of the DNN. The second contribution is a method to train the DNN, end-to-end, to produce the keyword spotting score. This system significantly outperforms previous approaches both in terms of quality of detection as well as size and computation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
115,853
2407.10703
Towards Robust Event-based Networks for Nighttime via Unpaired Day-to-Night Event Translation
Event cameras with high dynamic range ensure scene capture even in low-light conditions. However, night events exhibit patterns different from those captured during the day. This difference causes performance degradation when applying night events to a model trained solely on day events. This limitation persists due to a lack of annotated night events. To overcome the limitation, we aim to alleviate data imbalance by translating annotated day data into night events. However, generating events from different modalities challenges reproducing their unique properties. Accordingly, we propose an unpaired event-to-event day-to-night translation model that effectively learns to map from one domain to another using Diffusion GAN. The proposed translation model analyzes events in spatio-temporal dimension with wavelet decomposition and disentangled convolution layers. We also propose a new temporal contrastive learning with a novel shuffling and sampling strategy to regularize temporal continuity. To validate the efficacy of the proposed methodology, we redesign metrics for evaluating events translated in an unpaired setting, aligning them with the event modality for the first time. Our framework shows the successful day-to-night event translation while preserving the characteristics of events. In addition, through our translation method, we facilitate event-based modes to learn about night events by translating annotated day events into night events. Our approach effectively mitigates the performance degradation of applying real night events to downstream tasks. The code is available at https://github.com/jeongyh98/UDNET.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,089
1701.02900
Optimal Compression for Two-Field Entries in Fixed-Width Memories
Data compression is a well-studied (and well-solved) problem in the setup of long coding blocks. But important emerging applications need to compress data to memory words of small fixed widths. This new setup is the subject of this paper. In the problem we consider we have two sources with known discrete distributions, and we wish to find codes that maximize the success probability that the two source outputs are represented in $L$ bits or less. A good practical use for this problem is a table with two-field entries that is stored in a memory of a fixed width $L$. Such tables of very large sizes drive the core functionalities of network switches and routers. After defining the problem formally, we solve it optimally with an efficient code-design algorithm. We also solve the problem in the more constrained case where a single code is used in both fields (to save space for storing code dictionaries). For both code-design problems we find decompositions that yield efficient dynamic-programming algorithms. With an empirical study we show the success probabilities of the optimal codes for different distributions and memory widths. In particular, the study demonstrates the superiority of the new codes over existing compression algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
66,615
2206.08600
On Integrating Prior Knowledge into Gaussian Processes for Prognostic Health Monitoring
Gaussian process regression is a powerful method for predicting states based on given data. It has been successfully applied for probabilistic predictions of structural systems to quantify, for example, the crack growth in mechanical structures. Typically, predefined mean and covariance functions are employed to construct the Gaussian process model. Then, the model is updated using current data during operation while prior information based on previous data is ignored. However, predefined mean and covariance functions without prior information reduce the potential of Gaussian processes. This paper proposes a method to improve the predictive capabilities of Gaussian processes. We integrate prior knowledge by deriving the mean and covariance functions from previous data. More specifically, we first approximate previous data by a weighted sum of basis functions and then derive the mean and covariance functions directly from the estimated weight coefficients. Basis functions may be either estimated or derived from problem-specific governing equations to incorporate physical information. The applicability and effectiveness of this approach are demonstrated for fatigue crack growth, laser degradation, and milling machine wear data. We show that well-chosen mean and covariance functions, like those based on previous data, significantly increase look-ahead time and accuracy. Using physical basis functions further improves accuracy. In addition, computation effort for training is significantly reduced.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
303,215
2410.03264
Enriching Music Descriptions with a Finetuned-LLM and Metadata for Text-to-Music Retrieval
Text-to-Music Retrieval, finding music based on a given natural language query, plays a pivotal role in content discovery within extensive music databases. To address this challenge, prior research has predominantly focused on a joint embedding of music audio and text, utilizing it to retrieve music tracks that exactly match descriptive queries related to musical attributes (i.e. genre, instrument) and contextual elements (i.e. mood, theme). However, users also articulate a need to explore music that shares similarities with their favorite tracks or artists, such as \textit{I need a similar track to Superstition by Stevie Wonder}. To address these concerns, this paper proposes an improved Text-to-Music Retrieval model, denoted as TTMR++, which utilizes rich text descriptions generated with a finetuned large language model and metadata. To accomplish this, we obtained various types of seed text from several existing music tag and caption datasets and a knowledge graph dataset of artists and tracks. The experimental results show the effectiveness of TTMR++ in comparison to state-of-the-art music-text joint embedding models through a comprehensive evaluation involving various musical text queries.
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
494,706
2205.01663
Adversarial Training for High-Stakes Reliability
In the future, powerful AI systems may be deployed in high-stakes settings, where a single failure could be catastrophic. One technique for improving AI safety in high-stakes settings is adversarial training, which uses an adversary to generate examples to train on in order to achieve better worst-case performance. In this work, we used a safe language generation task (``avoid injuries'') as a testbed for achieving high reliability through adversarial training. We created a series of adversarial training techniques -- including a tool that assists human adversaries -- to find and eliminate failures in a classifier that filters text completions suggested by a generator. In our task, we determined that we can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs. We found that adversarial training increased robustness to the adversarial attacks that we trained on -- doubling the time for our contractors to find adversarial examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44 minutes) -- without affecting in-distribution performance. We hope to see further work in the high-stakes reliability setting, including more powerful tools for enhancing human adversaries and better ways to measure high levels of reliability, until we can confidently rule out the possibility of catastrophic deployment-time failures of powerful models.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
294,667
2104.10685
Locomotor transitions in the potential energy landscape-dominated regime
To traverse complex three-dimensional terrain with large obstacles, animals and robots must transition across different modes. However, the most mechanistic understanding of terrestrial locomotion concerns how to generate and stabilize near-steady-state, single-mode locomotion (e.g. walk, run). We know little about how to use physical interaction to make robust locomotor transitions. Here, we review our progress towards filling this gap by discovering terradynamic principles of multi-legged locomotor transitions, using simplified model systems representing distinct challenges in complex three-dimensional terrain. Remarkably, general physical principles emerge across diverse model systems, by modelling locomotor-terrain interaction using a potential energy landscape approach. The animal's and robots' stereotyped locomotor modes are constrained by physical interaction. Locomotor transitions are stochastic, destabilizing, barrier-crossing transitions on the landscape. They can be induced by feed-forward self-propulsion and are facilitated by feedback-controlled active adjustment. General physical principles and strategies from our systematic studies have already advanced robot performance in simple model systems. Efforts remain to better understand the intelligence aspect of locomotor transitions and how to compose larger-scale potential energy landscapes of complex three-dimensional terrains from simple landscapes of abstracted challenges. This will elucidate how the neuromechanical control system mediates physical interaction to generate multi-pathway locomotor transitions and lead to advancements in biology, physics, robotics and dynamical systems theory.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
231,667
1710.06832
The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications
The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence via the method of reconstructing attractor geometry from measurement time series and in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results from theoretical physics and nonlinear mathematics to diverse applications---from closed-form analysis of Markov and non-Markov stochastic processes that are ergodic or nonergodic and their measures of information and intrinsic computation to complex materials and deterministic chaos and intelligence in Maxwellian demons to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
82,835
1712.03866
Using a single RGB frame for real time 3D hand pose estimation in the wild
We present a method for the real-time estimation of the full 3D pose of one or more human hands using a single commodity RGB camera. Recent work in the area has displayed impressive progress using RGBD input. However, since the introduction of RGBD sensors, there has been little progress for the case of monocular color input. We capitalize on the latest advancements of deep learning, combining them with the power of generative hand pose estimation techniques to achieve real-time monocular 3D hand pose estimation in unrestricted scenarios. More specifically, given an RGB image and the relevant camera calibration information, we employ a state-of-the-art detector to localize hands. Given a crop of a hand in the image, we run the pretrained network of OpenPose for hands to estimate the 2D location of hand joints. Finally, non-linear least-squares minimization fits a 3D model of the hand to the estimated 2D joint positions, recovering the 3D hand pose. Extensive experimental results provide comparison to the state of the art as well as qualitative assessment of the method in the wild.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
86,512
1304.7211
Algorithmic Optimisations for Iterative Deconvolution Methods
We investigate possibilities to speed up iterative algorithms for non-blind image deconvolution. We focus on algorithms in which convolution with the point-spread function to be deconvolved is used in each iteration, and aim at accelerating these convolution operations as they are typically the most expensive part of the computation. We follow two approaches: First, for some practically important specific point-spread functions, algorithmically efficient sliding window or list processing techniques can be used. In some constellations this allows faster computation than via the Fourier domain. Second, as iterations progress, computation of convolutions can be restricted to subsets of pixels. For moderate thinning rates this can be done with almost no impact on the reconstruction quality. Both approaches are demonstrated in the context of Richardson-Lucy deconvolution but are not restricted to this method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
24,231
2104.07184
Gyrator-Capacitor Modeling of a Continuously Variable Series Reactor in Different Operating Modes
A Continuously Variable Series Reactor (CVSR) can regulate its reactance on the ac side using a nonlinear ferromagnetic core, shared by an ac and a dc winding, for applications like power flow control, oscillation damping, load balancing and others. In order to use a CVSR in the power grid, it is essential to know its operating characteristics. The Gyrator-Capacitor approach has been applied to model the magnetic circuit coupled with the electrical circuit. In the paper, we investigate the CVSR behavior in terms of induced voltages across the dc winding, flux densities (B) through the core legs, and power exchange between the ac controlled and the dc control circuit.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
230,320
2310.00413
SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution
Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models. Neural Implicit Functions partially overcome the spatial resolution challenge by representing an image in a resolution-independent way. However, they still operate at fixed, pre-defined spectral resolutions. To address this challenge, we propose Spatial-Spectral Implicit Function (SSIF), a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain. We empirically demonstrate the effectiveness of SSIF on two challenging spatio-spectral super-resolution benchmarks. We observe that SSIF consistently outperforms state-of-the-art baselines even when the baselines are allowed to train separate models at each spectral resolution. We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions. Moreover, SSIF can generate high-resolution images that improve the performance of downstream tasks (e.g., land use classification) by 1.7%-7%.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
395,970
2411.07396
Toward Optimal Search and Retrieval for RAG
Retrieval-augmented generation (RAG) is a promising method for addressing some of the memory-related challenges associated with Large Language Models (LLMs). Two separate systems form the RAG pipeline, the retriever and the reader, and the impact of each on downstream task performance is not well-understood. Here, we work towards the goal of understanding how retrievers can be optimized for RAG pipelines for common tasks such as Question Answering (QA). We conduct experiments focused on the relationship between retrieval and RAG performance on QA and attributed QA and unveil a number of insights useful to practitioners developing high-performance RAG pipelines. For example, lowering search accuracy has minor implications for RAG performance while potentially increasing retrieval speed and memory efficiency.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
507,505
1910.12121
Robust GNSS Denied Localization for UAV Using Particle Filter and Visual Odometry
Conventional autonomous Unmanned Air Vehicle (UAV) autopilot systems use Global Navigation Satellite System (GNSS) signals for navigation. However, autopilot systems fail to navigate due to lost or jammed GNSS signals. To solve this problem, information from other sensors, such as optical sensors, is used. Monocular Simultaneous Localization and Mapping algorithms have been developed over the last few years and achieved state-of-the-art accuracy. Also, map matching localization approaches are used for UAV localization relative to imagery from static maps such as Google Maps. Unfortunately, the accuracy and robustness of these algorithms are very dependent on up-to-date maps. The purpose of this research is to improve the accuracy and robustness of map relative Particle Filter based localization using a downward-facing optical camera mounted on an autonomous aircraft. This research shows how the image similarity to likelihood conversion function impacts the results of the Particle Filter localization algorithm. Two parametric image similarity to likelihood conversion functions (logistic and rectifying) are proposed. A dataset of simulated aerial imagery is used for experiments. The experimental results show that the Particle Filter localization algorithm using the logistic function was able to surpass the accuracy of the state-of-the-art ORB-SLAM2 algorithm by 2.6 times. The algorithm is shown to navigate more accurately using up-to-date maps, with an average 30% decrease in precision when using out-of-date maps.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
150,974
2302.05323
On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence
Deep Neural Network (DNN) Inference in Edge Computing, often called Edge Intelligence, requires solutions to ensure that sensitive data confidentiality and intellectual property are not revealed in the process. Privacy-preserving Edge Intelligence is only emerging, despite the growing prevalence of Edge Computing as a context of Machine-Learning-as-a-Service. Solutions are yet to be applied, and possibly adapted, to state-of-the-art DNNs. This position paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN Inference with the characteristics of an Edge Computing setup, highlighting the appropriateness of secret sharing in this context. We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
true
345,004
1910.09275
Text Matters but Speech Influences: A Computational Analysis of Syntactic Ambiguity Resolution
Analyzing how human beings resolve syntactic ambiguity has long been an issue of interest in the field of linguistics. It is, at the same time, one of the most challenging issues for spoken language understanding (SLU) systems as well. As syntactic ambiguity is intertwined with issues regarding prosody and semantics, the computational approach toward speech intention identification is expected to benefit from the observations of the human language processing mechanism. In this regard, we address the task with attentive recurrent neural networks that exploit acoustic and textual features simultaneously and reveal how the modalities interact with each other to derive sentence meaning. Utilizing a speech corpus recorded on Korean scripts of syntactically ambiguous utterances, we revealed that co-attention frameworks, namely multi-hop attention and cross-attention, show significantly superior performance in disambiguating speech intention. With further analysis, we demonstrate that the computational models reflect the internal relationship between auditory and linguistic processes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
150,131
2410.12961
Super-resolving Real-world Image Illumination Enhancement: A New Dataset and A Conditional Diffusion Model
Most existing super-resolution methods and datasets have been developed to improve the image quality in well-lighted conditions. However, these methods do not work well in real-world low-light conditions, as the images captured in such conditions lose most important information and contain significant unknown noises. To solve this problem, we propose a SRRIIE dataset with an efficient conditional diffusion probabilistic models-based method. The proposed dataset contains 4800 paired low-high quality images. To ensure that the dataset is able to model the real-world image degradation in low-illumination environments, we capture images using an ILDC camera and an optical zoom lens with exposure levels ranging from -6 EV to 0 EV and ISO levels ranging from 50 to 12800. We comprehensively evaluate with various reconstruction and perceptual metrics and demonstrate the practicability of the SRRIIE dataset for deep learning-based methods. We show that most existing methods are less effective in preserving the structures and sharpness of restored images from complicated noises. To overcome this problem, we revise the condition for Raw sensor data and propose a novel time-melding condition for diffusion probabilistic model. Comprehensive quantitative and qualitative experimental results on the real-world benchmark datasets demonstrate the feasibility and effectiveness of the proposed conditional diffusion probabilistic model on Raw sensor data. Code and dataset will be available at https://github.com/Yaofang-Liu/Super-Resolving
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
499,300
1808.00737
Binary Weighted Memristive Analog Deep Neural Network for Near-Sensor Edge Processing
The memristive crossbar aims to implement an analog weighted neural network; however, a realistic implementation of such crossbar arrays is not possible due to the limited switching states of memristive devices. In this work, we propose the design of an analog deep neural network with binary weight update through the backpropagation algorithm using binary state memristive devices. We show that such networks can be successfully used for image processing tasks and have the advantage of lower power consumption and a small on-chip area in comparison with digital counterparts. The proposed network was benchmarked for MNIST handwritten digits recognition, achieving an accuracy of approximately 90%.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
104,449
2404.19063
SuperCLUE-Fin: Graded Fine-Grained Analysis of Chinese LLMs on Diverse Financial Tasks and Applications
The SuperCLUE-Fin (SC-Fin) benchmark is a pioneering evaluation framework tailored for Chinese-native financial large language models (FLMs). It assesses FLMs across six financial application domains and twenty-five specialized tasks, encompassing theoretical knowledge and practical applications such as compliance, risk management, and investment analysis. Using multi-turn, open-ended conversations that mimic real-life scenarios, SC-Fin measures models on a range of criteria, including accurate financial understanding, logical reasoning, clarity, computational efficiency, business acumen, risk perception, and compliance with Chinese regulations. In a rigorous evaluation involving over a thousand questions, SC-Fin identifies a performance hierarchy where domestic models like GLM-4 and MoonShot-v1-128k outperform others with an A-grade, highlighting the potential for further development in transforming theoretical knowledge into pragmatic financial solutions. This benchmark serves as a critical tool for refining FLMs in the Chinese context, directing improvements in financial knowledge databases, standardizing financial interpretations, and promoting models that prioritize compliance, risk management, and secure practices. We create a contextually relevant and comprehensive benchmark that drives the development of AI in the Chinese financial sector. SC-Fin facilitates the advancement and responsible deployment of FLMs, offering valuable insights for enhancing model performance and usability for both individual and institutional users in the Chinese market.~\footnote{Our benchmark can be found at \url{https://www.CLUEbenchmarks.com}}
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
450,487
2303.04040
Uncertainty Quantification of Spatiotemporal Travel Demand with Probabilistic Graph Neural Networks
Recent studies have significantly improved the prediction accuracy of travel demand using graph neural networks. However, these studies largely ignored uncertainty that inevitably exists in travel demand prediction. To fill this gap, this study proposes a framework of probabilistic graph neural networks (Prob-GNN) to quantify the spatiotemporal uncertainty of travel demand. This Prob-GNN framework is substantiated by deterministic and probabilistic assumptions, and empirically applied to the task of predicting the transit and ridesharing demand in Chicago. We found that the probabilistic assumptions (e.g. distribution tail, support) have a greater impact on uncertainty prediction than the deterministic ones (e.g. deep modules, depth). Among the family of Prob-GNNs, the GNNs with truncated Gaussian and Laplace distributions achieve the highest performance in transit and ridesharing data. Even under significant domain shifts, Prob-GNNs can predict the ridership uncertainty in a stable manner, when the models are trained on pre-COVID data and tested across multiple periods during and after the COVID-19 pandemic. Prob-GNNs also reveal the spatiotemporal pattern of uncertainty, which is concentrated on the afternoon peak hours and the areas with large travel volumes. Overall, our findings highlight the importance of incorporating randomness into deep learning for spatiotemporal ridership prediction. Future research should continue to investigate versatile probabilistic assumptions to capture behavioral randomness, and further develop methods to quantify uncertainty to build resilient cities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
349,945
1903.06593
PifPaf: Composite Fields for Human Pose Estimation
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,423
2303.03338
Optimizing L1 cache for embedded systems through grammatical evolution
Nowadays, embedded systems are provided with cache memories that are large enough to influence both performance and energy consumption as never before in this kind of system. In addition, the cache memory system has been identified as a component that improves those metrics by adapting its configuration according to the memory access patterns of the applications being run. However, given that cache memories have many parameters which may be set to a high number of different values, designers face a wide and time-consuming exploration space. In this paper we propose an optimization framework based on Grammatical Evolution (GE) which is able to efficiently find the best cache configurations for a given set of benchmark applications. This metaheuristic allows an important reduction of the optimization runtime, obtaining good results in a low number of generations. Besides, this reduction is also increased due to the efficient storage of evaluated caches. Moreover, we selected GE because the plasticity of the grammar eases the creation of phenotypes that form the call to the cache simulator required for the evaluation of the different configurations. Experimental results for the Mediabench suite show that our proposal is able to find cache configurations that obtain an average improvement of $62\%$ versus a real world baseline configuration.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
349,688
2402.07860
On the Detection of Reviewer-Author Collusion Rings From Paper Bidding
A major threat to the peer-review systems of computer science conferences is the existence of "collusion rings" between reviewers. In such collusion rings, reviewers who have also submitted their own papers to the conference work together to manipulate the conference's paper assignment, with the aim of being assigned to review each other's papers. The most straightforward way that colluding reviewers can manipulate the paper assignment is by indicating their interest in each other's papers through strategic paper bidding. One potential approach to solve this important problem would be to detect the colluding reviewers from their manipulated bids, after which the conference can take appropriate action. While prior work has developed effective techniques to detect other kinds of fraud, no research has yet established that detecting collusion rings is even possible. In this work, we tackle the question of whether it is feasible to detect collusion rings from the paper bidding. To answer this question, we conduct empirical analysis of two realistic conference bidding datasets, including evaluations of existing algorithms for fraud detection in other applications. We find that collusion rings can achieve considerable success at manipulating the paper assignment while remaining hidden from detection: for example, in one dataset, undetected colluders are able to achieve assignment to up to 30% of the papers authored by other colluders. In addition, when 10 colluders bid on all of each other's papers, no detection algorithm outputs a group of reviewers with more than 31% overlap with the true colluders. These results suggest that collusion cannot be effectively detected from the bidding using popular existing tools, demonstrating the need to develop more complex detection algorithms as well as those that leverage additional metadata (e.g., reviewer-paper text-similarity scores).
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
true
428,871
2112.10485
Scale-Net: Learning to Reduce Scale Differences for Large-Scale Invariant Image Matching
Most image matching methods perform poorly when encountering large scale changes in images. To solve this problem, firstly, we propose a scale-difference-aware image matching method (SDAIM) that reduces image scale differences before local feature extraction, via resizing both images of an image pair according to an estimated scale ratio. Secondly, in order to accurately estimate the scale ratio, we propose a covisibility-attention-reinforced matching module (CVARM) and then design a novel neural network, termed as Scale-Net, based on CVARM. The proposed CVARM can lay more stress on covisible areas within the image pair and suppress the distraction from those areas visible in only one image. Quantitative and qualitative experiments confirm that the proposed Scale-Net has higher scale ratio estimation accuracy and much better generalization ability compared with all the existing scale ratio estimation methods. Further experiments on image matching and relative pose estimation tasks demonstrate that our SDAIM and Scale-Net are able to greatly boost the performance of representative local features and state-of-the-art local feature matching methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
272,446
1808.09809
An Efficient Matheuristic for the Minimum-Weight Dominating Set Problem
A minimum dominating set in a graph is a minimum set of vertices such that every vertex of the graph either belongs to it, or is adjacent to one vertex of this set. This mathematical object is of high relevance in a number of applications related to social networks analysis, design of wireless networks, coding theory, and data mining, among many others. When vertex weights are given, minimizing the total weight of the dominating set gives rise to a problem variant known as the minimum weight dominating set problem. To solve this problem, we introduce a hybrid matheuristic combining a tabu search with an integer programming solver. The latter is used to solve subproblems in which only a fraction of the decision variables, selected relatively to the search history, are left free while the others are fixed. Moreover, we introduce an adaptive penalty to promote the exploration of intermediate infeasible solutions during the search, enhance the algorithm with perturbations and node elimination procedures, and exploit richer neighborhood classes. Extensive experimental analyses on a variety of instance classes demonstrate the good performance of the algorithm, and the contribution of each component in the success of the search is analyzed.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
106,271
2112.15278
Data-Free Knowledge Transfer: A Survey
In the last decade, many deep learning models have been well trained and made a great success in various fields of machine intelligence, especially for computer vision and natural language processing. To better leverage the potential of these well-trained models in intra-domain or cross-domain transfer learning situations, knowledge distillation (KD) and domain adaptation (DA) are proposed and become research highlights. They both aim to transfer useful information from a well-trained model with original training data. However, the original data is not always available in many cases due to privacy, copyright or confidentiality. Recently, the data-free knowledge transfer paradigm has attracted appealing attention as it deals with distilling valuable knowledge from well-trained models without requiring to access to the training data. In particular, it mainly consists of the data-free knowledge distillation (DFKD) and source data-free domain adaptation (SFDA). On the one hand, DFKD aims to transfer the intra-domain knowledge of original data from a cumbersome teacher network to a compact student network for model compression and efficient inference. On the other hand, the goal of SFDA is to reuse the cross-domain knowledge stored in a well-trained source model and adapt it to a target domain. In this paper, we provide a comprehensive survey on data-free knowledge transfer from the perspectives of knowledge distillation and unsupervised domain adaptation, to help readers have a better understanding of the current research status and ideas. Applications and challenges of the two areas are briefly reviewed, respectively. Furthermore, we provide some insights to the subject of future research.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
273,728
1704.01851
Fixed versus Dynamic Co-Occurrence Windows in TextRank Term Weights for Information Retrieval
TextRank is a variant of PageRank typically used in graphs that represent documents, and where vertices denote terms and edges denote relations between terms. Quite often the relation between terms is simple term co-occurrence within a fixed window of k terms. The output of TextRank when applied iteratively is a score for each vertex, i.e. a term weight, that can be used for information retrieval (IR) just like conventional term frequency based term weights. So far, when computing TextRank term weights over co-occurrence graphs, the window of term co-occurrence is always fixed. This work departs from this, and considers dynamically adjusted windows of term co-occurrence that follow the document structure on a sentence- and paragraph-level. The resulting TextRank term weights are used in a ranking function that re-ranks 1000 initially returned search results in order to improve the precision of the ranking. Experiments with two IR collections show that adjusting the vicinity of term co-occurrence when computing TextRank term weights can lead to gains in early precision.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
71,333
1606.04778
The Learning and Prediction of Application-level Traffic Data in Cellular Networks
Traffic learning and prediction is at the heart of the evaluation of the performance of telecommunications networks and attracts a lot of attention in wired broadband networks. Now, benefiting from the big data in cellular networks, it becomes possible to take the analyses one step further, into the application level. In this paper, we first collect a significant amount of application-level traffic data from cellular network operators. Afterwards, with the aid of the traffic "big data", we make a comprehensive study over the modeling and prediction framework of cellular network traffic. Our results solidly demonstrate that there universally exist some traffic statistical modeling characteristics, including ALPHA-stable modeled property in the temporal domain and the sparsity in the spatial domain. Meanwhile, the results also demonstrate the distinctions originated from the uniqueness of different service types of applications. Furthermore, we propose a new traffic prediction framework to encompass and explore these aforementioned characteristics and then develop a dictionary learning-based alternating direction method to solve it. Besides, we validate the prediction accuracy improvement and the robustness of the proposed framework through extensive simulation results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
57,313
2005.11580
Evolution of Cooperative Hunting in Artificial Multi-layered Societies
The complexity of cooperative behavior is a crucial issue in multiagent-based social simulation. In this paper, an agent-based model is proposed to study the evolution of cooperative hunting behaviors in an artificial society. In this model, the standard hunting game of stag is modified into a new situation with social hierarchy and penalty. The agent society is divided into multiple layers with supervisors and subordinates. In each layer, the society is divided into multiple clusters. A supervisor controls all subordinates in a cluster locally. Subordinates interact with rivals through reinforcement learning, and report learning information to their corresponding supervisor. Supervisors process the reported information through repeated affiliation-based aggregation and by information exchange with other supervisors, then pass down the reprocessed information to subordinates as guidance. Subordinates, in turn, update learning information according to guidance, following the "win stay, lose shift" strategy. Experiments are carried out to test the evolution of cooperation in this closed-loop semi-supervised emergent system with different parameters. We also study the variations and phase transitions in this game setting.
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
178,505
2209.08776
NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scenes
Neural volumetric representations have shown the potential that Multi-layer Perceptrons (MLPs) can be optimized with multi-view calibrated images to represent scene geometry and appearance, without explicit 3D supervision. Object segmentation can enrich many downstream applications based on the learned radiance field. However, introducing hand-crafted segmentation to define regions of interest in a complex real-world scene is non-trivial and expensive as it acquires per view annotation. This paper carries out the exploration of self-supervised learning for object segmentation using NeRF for complex real-world scenes. Our framework, called NeRF with Self-supervised Object Segmentation NeRF-SOS, couples object segmentation and neural radiance field to segment objects in any view within a scene. By proposing a novel collaborative contrastive loss in both appearance and geometry levels, NeRF-SOS encourages NeRF models to distill compact geometry-aware segmentation clusters from their density fields and the self-supervised pre-trained 2D visual features. The self-supervised object segmentation framework can be applied to various NeRF models that both lead to photo-realistic rendering results and convincing segmentation maps for both indoor and outdoor scenarios. Extensive results on the LLFF, Tank & Temple, and BlendedMVS datasets validate the effectiveness of NeRF-SOS. It consistently surpasses other 2D-based self-supervised baselines and predicts finer semantics masks than existing supervised counterparts. Please refer to the video on our project page for more details:https://zhiwenfan.github.io/NeRF-SOS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
318,266
2205.03798
Fast and Structured Block-Term Tensor Decomposition For Hyperspectral Unmixing
The block-term tensor decomposition model with multilinear rank-$(L_r,L_r,1)$ terms (or, the "LL1 tensor decomposition" in short) offers a valuable alternative for hyperspectral unmixing (HU) under the linear mixture model. Particularly, the LL1 decomposition ensures the endmember/abundance identifiability in scenarios where such guarantees are not supported by the classic matrix factorization (MF) approaches. However, existing LL1-based HU algorithms use a three-factor parameterization of the tensor (i.e., the hyperspectral image cube), which leads to a number of challenges including high per-iteration complexity, slow convergence, and difficulties in incorporating structural prior information. This work puts forth an LL1 tensor decomposition-based HU algorithm that uses a constrained two-factor re-parameterization of the tensor data. As a consequence, a two-block alternating gradient projection (GP)-based LL1 algorithm is proposed for HU. With carefully designed projection solvers, the GP algorithm enjoys a relatively low per-iteration complexity. Like in MF-based HU, the factors under our parameterization correspond to the endmembers and abundances. Thus, the proposed framework is natural to incorporate physics-motivated priors that arise in HU. The proposed algorithm often attains orders-of-magnitude speedup and substantial HU performance gains compared to the existing three-factor parameterization-based HU algorithms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
295,426
1907.07600
Fast Distributed Coordination of Distributed Energy Resources Over Time-Varying Communication Networks
In this paper, we consider the problem of optimally coordinating the response of a group of distributed energy resources (DERs) so they collectively meet the electric power demanded by a collection of loads, while minimizing the total generation cost and respecting the DER capacity limits. This problem can be cast as a convex optimization problem, where the global objective is to minimize a sum of convex functions corresponding to individual DER generation cost, while satisfying (i) linear inequality constraints corresponding to the DER capacity limits and (ii) a linear equality constraint corresponding to the total power generated by the DERs being equal to the total power demand. We develop distributed algorithms to solve the DER coordination problem over time-varying communication networks with either bidirectional or unidirectional communication links. The proposed algorithms can be seen as distributed versions of a centralized primal-dual algorithm. One of the algorithms proposed for directed communication graphs has geometric convergence rate even when communication out-degrees are unknown to agents. We showcase the proposed algorithms using the standard IEEE 39-bus test system, and compare their performance against other ones proposed in the literature.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
138,918
2104.08735
Learning with Instance Bundles for Reading Comprehension
When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other. However, closely related questions and their corresponding answers are not independent, and leveraging these relationships could provide a strong supervision signal to a model. Drawing on ideas from contrastive estimation, we introduce several new supervision techniques that compare question-answer scores across multiple related instances. Specifically, we normalize these scores across various neighborhoods of closely contrasting questions and/or answers, adding another cross entropy loss term that is used in addition to traditional maximum likelihood estimation. Our techniques require bundles of related question-answer pairs, which we can either mine from within existing data or create using various automated heuristics. We empirically demonstrate the effectiveness of training with instance bundles on two datasets -- HotpotQA and ROPES -- showing up to 11% absolute gains in accuracy.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
230,946
2011.06985
Robotic self-representation improves manipulation skills and transfer learning
Cognitive science suggests that the self-representation is critical for learning and problem-solving. However, there is a lack of computational methods that relate this claim to cognitively plausible robots and reinforcement learning. In this paper, we bridge this gap by developing a model that learns bidirectional action-effect associations to encode the representations of body schema and the peripersonal space from multisensory information, which is named multimodal BidAL. Through three different robotic experiments, we demonstrate that this approach significantly stabilizes the learning-based problem-solving under noisy conditions and that it improves transfer learning of robotic manipulation skills.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
206,405
2305.15055
Iteratively Improving Speech Recognition and Voice Conversion
Many existing works on voice conversion (VC) tasks use automatic speech recognition (ASR) models for ensuring linguistic consistency between source and converted samples. However, for low-data resource domains, training a high-quality ASR remains a challenging task. In this work, we propose a novel iterative way of improving both the ASR and VC models. We first train an ASR model which is used to ensure content preservation while training a VC model. In the next iteration, the VC model is used as a data augmentation method to further fine-tune the ASR model and generalize it to diverse speakers. By iteratively leveraging the improved ASR model to train the VC model and vice-versa, we experimentally show improvement in both the models. Our proposed framework outperforms the ASR and one-shot VC baseline models on English singing and Hindi speech domains in subjective and objective evaluations in low-data resource settings.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
367,424
2006.11446
MALOnt: An Ontology for Malware Threat Intelligence
Malware threat intelligence uncovers deep information about malware, threat actors, and their tactics, Indicators of Compromise (IoC), and vulnerabilities in different platforms from scattered threat sources. This collective information can guide decision making in cyber defense applications utilized by security operation centers (SoCs). In this paper, we introduce an open-source malware ontology - MALOnt that allows the structured extraction of information and knowledge graph generation, especially for threat intelligence. The knowledge graph that uses MALOnt is instantiated from a corpus comprising hundreds of annotated malware threat reports. The knowledge graph enables the analysis, detection, classification, and attribution of cyber threats caused by malware. We also demonstrate the annotation process using MALOnt on exemplar threat intelligence reports. A work in progress, this research is part of a larger effort towards auto-generation of knowledge graphs (KGs) for gathering malware threat intelligence from heterogeneous online resources.
false
false
false
false
true
true
false
false
false
false
false
false
true
false
false
false
false
false
183,236
1608.01972
Bridging the Gap: Incorporating a Semantic Similarity Measure for Effectively Mapping PubMed Queries to Documents
The main approach of traditional information retrieval (IR) is to examine how many words from a query appear in a document. A drawback of this approach, however, is that it may fail to detect relevant documents where no or only few words from a query are found. The semantic analysis methods such as LSA (latent semantic analysis) and LDA (latent Dirichlet allocation) have been proposed to address the issue, but their performance is not superior compared to common IR approaches. Here we present a query-document similarity measure motivated by the Word Mover's Distance. Unlike other similarity measures, the proposed method relies on neural word embeddings to compute the distance between words. This process helps identify related words when no direct matches are found between a query and a document. Our method is efficient and straightforward to implement. The experimental results on TREC Genomics data show that our approach outperforms the BM25 ranking function by an average of 12% in mean average precision. Furthermore, for a real-world dataset collected from the PubMed search logs, we combine the semantic measure with BM25 using a learning to rank method, which leads to improved ranking scores by up to 25%. This experiment demonstrates that the proposed approach and BM25 nicely complement each other and together produce superior performance.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
59,490
2212.11685
Text Generation with Diffusion Language Models: A Pre-training Approach with Continuous Paragraph Denoise
In this paper, we introduce a novel dIffusion language modEl pre-training framework for text generation, which we call GENIE. GENIE is a large-scale pretrained diffusion language model that consists of an encoder and a diffusion-based decoder, which can generate text by gradually transforming a random noise sequence into a coherent text sequence. To pre-train GENIE on a large-scale language corpus, we design a new continuous paragraph denoise objective, which encourages the diffusion-decoder to reconstruct a clean text paragraph from a corrupted version, while preserving the semantic and syntactic coherence. We evaluate GENIE on four downstream text generation benchmarks, namely XSum, CNN/DailyMail, Gigaword, and CommonGen. Our experimental results show that GENIE achieves comparable performance with the state-of-the-art autoregressive models on these benchmarks, and generates more diverse text samples. The code and models of GENIE are available at https://github.com/microsoft/ProphetNet/tree/master/GENIE.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
337,852
2106.01981
ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics
Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a baseline based on Transformer, both in terms of accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
238,686
2202.03192
Reward is not enough: can we liberate AI from the reinforcement learning paradigm?
I present arguments against the hypothesis put forward by Silver, Singh, Precup, and Sutton (https://www.sciencedirect.com/science/article/pii/S0004370221000862): reward maximization is not enough to explain many activities associated with natural and artificial intelligence, including knowledge, learning, perception, social intelligence, evolution, language, generalisation and imitation. I show that such reductio ad lucrum has its intellectual origins in the political economy of Homo economicus and substantially overlaps with the radical version of behaviourism. I show why the reinforcement learning paradigm, despite its demonstrable usefulness in some practical applications, is an incomplete framework for intelligence -- natural and artificial. Complexities of intelligent behaviour are not simply second-order complications on top of reward maximisation. This fact has profound implications for the development of practically usable, smart, safe and robust artificially intelligent agents.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
279,111
2207.01131
Capacity Bounds for the Two-User IM/DD Interference Channel
This paper studies the capacity of the two-user intensity-modulation/direct-detection (IM/DD) interference channel (IC), which is relevant in the context of multi-user optical wireless communications. Despite some known single-letter capacity characterizations for general discrete-memoryless ICs, a computable capacity expression for the IM/DD IC is missing. In this paper, we provide tight and easily computable inner and outer bounds for a general two-user IM/DD IC under peak and average optical intensity constraints. The bounds enable characterizing the asymptotic sum-rate capacity in the strong and weak interference regimes, as well as the generalized degrees of freedom (GDoF) in the symmetric case. Using the obtained bounds, the GDoF of the IM/DD IC is shown to have a `W' shape similar to the Gaussian IC with power constraints. The obtained bounds are also evaluated numerically in different interference regimes to show their tightness, and used to study the performance of on-chip and indoor OWC systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
306,046
1602.04207
Fundamental Limits of Cache-Aided Interference Management
We consider a system comprising a library of $N$ files (e.g., movies) and a wireless network with $K_T$ transmitters, each equipped with a local cache of size $M_T$ files, and $K_R$ receivers, each equipped with a local cache of size $M_R$ files. Each receiver will ask for one of the $N$ files in the library, which needs to be delivered. The objective is to design the cache placement (without prior knowledge of receivers' future requests) and the communication scheme to maximize the throughput of the delivery. In this setting, we show that the sum degrees-of-freedom (sum-DoF) of $\min\left\{\frac{K_T M_T+K_R M_R}{N},K_R\right\}$ is achievable, and this is within a factor of 2 of the optimum, under one-shot linear schemes. This result shows that (i) the one-shot sum-DoF scales linearly with the aggregate cache size in the network (i.e., the cumulative memory available at all nodes), (ii) the transmitters' and receivers' caches contribute equally in the one-shot sum-DoF, and (iii) caching can offer a throughput gain that scales linearly with the size of the network. To prove the result, we propose an achievable scheme that exploits the redundancy of the content at transmitters' caches to cooperatively zero-force some outgoing interference and availability of the unintended content at receivers' caches to cancel (subtract) some of the incoming interference. We develop a particular pattern for cache placement that maximizes the overall gains of cache-aided transmit and receive interference cancellations. For the converse, we present an integer optimization problem which minimizes the number of communication blocks needed to deliver any set of requested files to the receivers. We then provide a lower bound on the value of this optimization problem, hence leading to an upper bound on the linear one-shot sum-DoF of the network, which is within a factor of 2 of the achievable sum-DoF.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
52,091
2303.09269
ELFIS: Expert Learning for Fine-grained Image Recognition Using Subsets
Fine-Grained Visual Recognition (FGVR) tackles the problem of distinguishing highly similar categories. One of the main approaches to FGVR, namely subset learning, tries to leverage information from existing class taxonomies to improve the performance of deep neural networks. However, these methods rely on the existence of handcrafted hierarchies that are not necessarily optimal for the models. In this paper, we propose ELFIS, an expert learning framework for FGVR that clusters categories of the dataset into meta-categories using both dataset-inherent lexical and model-specific information. A set of neural network-based experts are trained focusing on the meta-categories and are integrated into a multi-task framework. Extensive experimentation shows improvements in the SoTA FGVR benchmarks of up to +1.3% of accuracy using both CNNs and transformer-based networks. Overall, the obtained results evidence that ELFIS can be applied on top of any classification model, enabling SoTA results to be obtained. The source code will be made public soon.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
351,976
2004.08607
Accumulator Bet Selection Through Stochastic Diffusion Search
An accumulator is a bet that presents a rather unique payout structure, in that it combines multiple bets into a wager that can generate a total payout given by the multiplication of the individual odds of its parts. These potentially large returns come, however, at an increased risk of a loss. Indeed, the presence of a single incorrect bet in this selection would make the whole accumulator lose. The complexity of selecting a set of matches to place an accumulator bet on, as well as the number of opportunities to identify winning combinations, have both dramatically increased with the easier access to online and offline bookmakers that bettors have nowadays. We address this relatively under-studied combinatorial aspect of sports betting, and propose a binary optimization model for the problem of selecting the most promising combinations of matches, in terms of their total potential payout and probability of a win, to form an accumulator bet. The results of an ongoing computational experiment, in which our model is applied to real data pertaining to the four main football leagues in the world over a complete season, are presented and compared to those of single bet selection methods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
173,117