Column schema (from the dataset viewer):
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
2105.05650
Unbiased Monte Carlo Cluster Updates with Autoregressive Neural Networks
Efficient sampling of complex high-dimensional probability distributions is a central task in computational science. Machine learning methods like autoregressive neural networks, used with Markov chain Monte Carlo sampling, provide good approximations to such distributions, but suffer from either intrinsic bias or high variance. In this Letter, we propose a way to make this approximation unbiased and with low variance. Our method uses physical symmetries and variable-size cluster updates which utilize the structure of autoregressive factorization. We test our method for first- and second-order phase transitions of classical spin systems, showing its viability for critical systems and in the presence of metastable states.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
234,886
1912.01810
Learning with Multiplicative Perturbations
Adversarial Training (AT) and Virtual Adversarial Training (VAT) are regularization techniques that train Deep Neural Networks (DNNs) with adversarial examples generated by adding small but worst-case perturbations to input examples. In this paper, we propose xAT and xVAT, new adversarial training algorithms that generate \textbf{multiplicative} perturbations to input examples for robust training of DNNs. Such perturbations are much more perceptible and interpretable than their \textbf{additive} counterparts exploited by AT and VAT. Furthermore, the multiplicative perturbations can be generated transductively or inductively, while standard AT and VAT only support a transductive implementation. We conduct a series of experiments that analyze the behavior of the multiplicative perturbations and demonstrate that xAT and xVAT match or outperform state-of-the-art classification accuracies across multiple established benchmarks while being about 30\% faster than their additive counterparts. Furthermore, the resulting DNNs also demonstrate distinct weight distributions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
156,179
1902.00120
Learning to Make Analogies by Contrasting Abstract Relational Structure
Analogical reasoning has been a principal focus of various waves of AI research. Analogy is particularly challenging for machines because it requires relational structures to be represented such that they can be flexibly applied across diverse domains of experience. Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
120,321
2401.11605
Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers
We present the Hourglass Diffusion Transformer (HDiT), an image generative model that exhibits linear scaling with pixel count, supporting training at high-resolution (e.g. $1024 \times 1024$) directly in pixel-space. Building on the Transformer architecture, which is known to scale to billions of parameters, it bridges the gap between the efficiency of convolutional U-Nets and the scalability of Transformers. HDiT trains successfully without typical high-resolution training techniques such as multiscale architectures, latent autoencoders or self-conditioning. We demonstrate that HDiT performs competitively with existing models on ImageNet $256^2$, and sets a new state-of-the-art for diffusion models on FFHQ-$1024^2$.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
423,065
2412.16186
Formal Modeling and Verification of Publisher-Subscriber Paradigm in ROS 2
The Robot Operating System (ROS) is one of the most popular middleware frameworks for developing robot applications, but it is subject to major shortcomings when applied to real-time robotic systems in safety-critical environments. For this reason, ROS 2 was released in 2017 to implement real-time capabilities in distributed robotic systems while supporting the most prominent aspects of the original ROS. Little work has yet been done to provide formal guarantees of the correctness of a ROS program. In this paper, we propose a framework to address the challenging problem of guaranteeing the correct behaviour of robotic systems. We formally model a ROS 2 program and describe it as a network of timed automata. We then prove that the sets of executions of a ROS program in the model and in the network of timed automata are the same. Thus, to analyze a publisher-subscriber scenario of a ROS 2 program, our algorithm first converts the program into the model, and then into the network of timed automata. The applicability and validity of our approach are verified through several experiments on a simplified system and an actual robotic system, and the results and limitations are discussed.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
519,397
2210.04261
Noise-Robust De-Duplication at Scale
Identifying near duplicates within large, noisy text corpora has a myriad of applications that range from de-duplicating training datasets, reducing privacy risk, and evaluating test set leakage, to identifying reproduced news articles and literature within large corpora. Across these diverse applications, the overwhelming majority of work relies on N-grams. Limited efforts have been made to evaluate how well N-gram methods perform, in part because it is unclear how one could create an unbiased evaluation dataset for a massive corpus. This study uses the unique timeliness of historical news wires to create a 27,210 document dataset, with 122,876 positive duplicate pairs, for studying noise-robust de-duplication. The time-sensitivity of news makes comprehensive hand labelling feasible - despite the massive overall size of the corpus - as duplicates occur within a narrow date range. The study then develops and evaluates a range of de-duplication methods: hashing and N-gram overlap (which predominate in the literature), a contrastively trained bi-encoder, and a re-rank style approach combining a bi- and cross-encoder. The neural approaches significantly outperform hashing and N-gram overlap. We show that the bi-encoder scales well, de-duplicating a 10 million article corpus on a single GPU card in a matter of hours. We also apply our pre-trained model to the RealNews and patent portions of C4 (Colossal Clean Crawled Corpus), illustrating that a neural approach can identify many near duplicates missed by hashing, in the presence of various types of noise. The public release of our NEWS-COPY de-duplication dataset, codebase, and the pre-trained models will facilitate further research and applications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
322,387
1803.00719
RankDCG: Rank-Ordering Evaluation Measure
Ranking is used for a wide array of problems, most notably information retrieval (search). There are a number of popular approaches to the evaluation of ranking such as Kendall's $\tau$, Average Precision, and nDCG. When dealing with problems such as user ranking or recommendation systems, all these measures suffer from various problems, including an inability to deal with elements of the same rank, inconsistent and ambiguous lower bound scores, and an inappropriate cost function. We propose a new measure, rankDCG, that addresses these problems. This is a modification of the popular nDCG algorithm. We provide a number of criteria for any effective ranking algorithm and show that only rankDCG satisfies all of them. Results are presented on constructed and real data sets. We release a publicly available rankDCG evaluation package.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
91,733
2110.05177
Learning Division with Neural Arithmetic Logic Modules
To achieve systematic generalisation, it first makes sense to master simple tasks such as arithmetic. Of the four fundamental arithmetic operations (+,-,$\times$,$\div$), division is considered the most difficult for both humans and computers. In this paper we show that robustly learning division in a systematic manner remains a challenge even at the simplest level of dividing two numbers. We propose two novel approaches for division, which we call the Neural Reciprocal Unit (NRU) and the Neural Multiplicative Reciprocal Unit (NMRU), and present improvements for an existing division module, the Real Neural Power Unit (Real NPU). Experiments in learning division with input redundancy on 225 different training sets find that our proposed modifications to the Real NPU obtain an average success rate of 85.3$\%$, improving over the original by 15.1$\%$. Our NMRU approach can further improve the success rate to 91.6$\%$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
260,191
1006.0054
Anti-measurement Matrix Uncertainty Sparse Signal Recovery for Compressive Sensing
Compressive sensing (CS) is a technique for estimating a sparse signal from random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade severely under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed L2 and L1 norm is deduced from the sparse signal model with MMU. We then combine the sparse constraint with the anti-uncertainty constraint to obtain an anti-uncertainty sparse signal recovery operator. Numerical simulations demonstrate that the proposed operator reconstructs signals better under MMU than traditional methods.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,627
2311.01454
NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities
We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing for NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. Project website: https://noir-corl.github.io/.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
405,049
2412.04671
Fully Distributed, Flexible Compositional Visual Representations via Soft Tensor Products
Since the inception of the classicalist vs. connectionist debate, it has been argued that the ability to systematically combine symbol-like entities into compositional representations is crucial for human intelligence. In connectionist systems, the field of disentanglement has gained prominence for its ability to produce explicitly compositional representations; however, it relies on a fundamentally symbolic, concatenative representation of compositional structure that clashes with the continuous, distributed foundations of deep learning. To resolve this tension, we extend Smolensky's Tensor Product Representation (TPR) and introduce Soft TPR, a representational form that encodes compositional structure in an inherently distributed, flexible manner, along with Soft TPR Autoencoder, a theoretically-principled architecture designed specifically to learn Soft TPRs. Comprehensive evaluations in the visual representation learning domain demonstrate that the Soft TPR framework consistently outperforms conventional disentanglement alternatives -- achieving state-of-the-art disentanglement, boosting representation learner convergence, and delivering superior sample efficiency and low-sample regime performance in downstream tasks. These findings highlight the promise of a distributed and flexible approach to representing compositional structure by potentially enhancing alignment with the core principles of deep learning over the conventional symbolic approach.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
514,510
1905.12022
Bayesian Nonparametric Federated Learning of Neural Networks
In federated learning problems, data is scattered across different servers and exchanging or pooling it is often impractical or prohibited. We develop a Bayesian nonparametric framework for federated learning with neural networks. Each data server is assumed to provide local neural network weights, which are modeled through our framework. We then develop an inference approach that allows us to synthesize a more expressive global network without additional supervision or data pooling, and with as few as a single communication round. We demonstrate the efficacy of our approach on federated learning problems simulated from two popular image classification datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,623
1406.5143
The Sample Complexity of Learning Linear Predictors with the Squared Loss
In this short note, we provide a sample complexity lower bound for learning linear predictors with respect to the squared loss. Our focus is on an agnostic setting, where no assumptions are made on the data distribution. This contrasts with standard results in the literature, which either make distributional assumptions, refer to specific parameter settings, or use other performance measures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
34,002
2402.11306
Linear and Non-Linear Models for Master Scheduling of Dynamic Resources Product Mix
Master production scheduling for product-mix problems under the Theory of Constraints (TOC) has been considered by many previous studies, most of which assume static resource availability. In this study, the raw-material supply to the manufacturer is treated as dynamic, depending on the solution of the problem. An integer linear heuristic, an integer non-linear optimization model, and a basic non-linear model are developed to find good solutions to the problem. The results of the three models are compared in terms of profit, raw-material costs, inventory costs, and raw-material utilization. Recent studies in the field are reviewed and conclusions are drawn.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
430,331
2409.16872
Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications
The popularisation of applying AI in businesses poses significant challenges relating to ethical principles, governance, and legal compliance. Although businesses have embedded AI into their day-to-day processes, they lack a unified approach for mitigating its potential risks. This paper introduces a framework to ensure that AI is ethical, controllable, viable, and desirable. Balancing these factors requires addressing their trade-offs, such as performance against explainability. A successful framework provides practical advice for businesses to meet regulatory requirements in sectors such as finance and healthcare, where it is critical to comply with standards like the GDPR and the EU AI Act. Different case studies validate this framework by integrating AI in both academic and practical environments. For instance, large language models are cost-effective alternatives for generating synthetic opinions that emulate attitudes to environmental issues. These case studies demonstrate how a structured framework can enhance transparency and maintain performance, as shown by the alignment between synthetic and expected distributions. This alignment is quantified using metrics like chi-squared test scores, normalized mutual information, and Jaccard indexes. Future research should further explore the framework's empirical validation in diverse industrial settings, ensuring the model's scalability and adaptability.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
491,556
1812.09207
Solution Dominance over Constraint Satisfaction Problems
Constraint Satisfaction Problems (CSPs) typically have many solutions that satisfy all constraints. Often though, some solutions are preferred over others, that is, some solutions dominate other solutions. We present solution dominance as a formal framework to reason about such settings. We define Constraint Dominance Problems (CDPs) as CSPs with a dominance relation, that is, a preorder over the solutions of the CSP. This framework captures many well-known variants of constraint satisfaction, including optimization, multi-objective optimization, Max-CSP, minimal models, minimum correction subsets as well as optimization over CP-nets and arbitrary dominance relations. We extend MiniZinc, a declarative language for modeling CSPs, to CDPs by introducing dominance nogoods; these can be derived from dominance relations in a principled way. A generic method for solving arbitrary CDPs incrementally calls a CSP solver and is compatible with any existing solver that supports MiniZinc. This encourages experimenting with different solution dominance relations for a problem, as well as comparing different solvers without having to modify their implementations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
117,116
1503.03660
Capturing, Documenting and Visualizing Search Contexts for building Multimedia Corpora
In Social Science research, multimedia documents are often collected to answer particular research questions like: "Which of the aesthetic properties of a photo are considered important on the web" or "How has Street Art developed over the past 50 years". To do so, a researcher generally issues multiple queries to a number of search engines. This activity may span long time intervals and results in a collection which can be further analyzed. Documenting the collection-building process, which includes the context of the carried-out searches, is imperative for social scientists to reproduce their research. Such context documentation consists of several user actions and search attributes, such as: the issued queries; the results clicked and saved; the duration a particular result was viewed for; the set of results that was displayed but neither clicked nor saved; as well as user annotations like comments or tags. In this work we describe a search-process tracking module and a search-history visualization module. These modules can be integrated into keyword-based search systems through a REST API, which was developed to help capture, document, and revisit past search contexts while building a web corpus. Finally, we detail how the module was integrated into the LearnWeb2.0 platform - a multimedia web2.0 search and sharing application which can obtain resources from various web2.0 tools such as YouTube, Bing, Flickr, etc. using keyword search.
true
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
true
41,082
2501.16370
Advanced Physics-Informed Neural Network with Residuals for Solving Complex Integral Equations
In this paper, we present the Residual Integral Solver Network (RISN), a novel neural network architecture designed to solve a wide range of integral and integro-differential equations, including one-dimensional, multi-dimensional, ordinary and partial integro-differential, systems, and fractional types. RISN integrates residual connections with highly accurate numerical methods such as Gaussian quadrature and fractional derivative operational matrices, enabling it to achieve higher accuracy and stability than traditional Physics-Informed Neural Networks (PINN). The residual connections help mitigate vanishing gradient issues, allowing RISN to handle deeper networks and more complex kernels, particularly in multi-dimensional problems. Through extensive experiments, we demonstrate that RISN consistently outperforms PINN, achieving significantly lower Mean Absolute Errors (MAE) across various types of equations. The results highlight RISN's robustness and efficiency in solving challenging integral and integro-differential problems, making it a valuable tool for real-world applications where traditional methods often struggle.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
true
527,935
2110.14227
Emoji-based Co-attention Network for Microblog Sentiment Analysis
Emojis are widely used in online social networks to express emotions, attitudes, and opinions. As emotion-oriented characters, emojis can be modeled as important features of the emotion towards the recipient or subject for sentiment analysis. However, existing methods mainly take emojis as heuristic information and fail to resolve the problem of ambiguity noise. Recent research has utilized emojis as an independent input to classify text sentiment, but ignores the emotional impact of the interaction between text and emojis, so the emotional semantics of emojis cannot be fully explored. In this paper, we propose an emoji-based co-attention network that learns the mutual emotional semantics between text and emojis on microblogs. Our model adopts a co-attention mechanism based on bidirectional long short-term memory incorporating the text and emojis, and integrates a squeeze-and-excitation block in a convolutional neural network classifier to increase its sensitivity to emotional semantic features. Experimental results show that the proposed method significantly outperforms several baselines for sentiment analysis on short social media texts.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
263,472
2103.05104
New Methods for Detecting Concentric Objects With High Accuracy
Fitting concentric geometric objects to digitized data is an important problem in many areas such as iris detection, autonomous navigation, and industrial robotics operations. There are two common approaches to fitting geometric shapes to data: the geometric (iterative) approach and the algebraic (non-iterative) approach. The geometric approach is a nonlinear iterative method that minimizes the sum of the squared Euclidean distances of the observed points to the ellipses and is regarded as the most accurate method, but it needs a good initial guess to improve the convergence rate. The algebraic approach is based on minimizing algebraic distances with some constraints imposed on the parameter space. Each algebraic method depends on the imposed constraint and can be solved with the aid of the generalized eigenvalue problem. Only a few methods in the literature have been developed to solve the problem of concentric ellipses. Here we study the statistical properties of existing methods by first establishing a general mathematical and statistical framework for this problem. Using rigorous perturbation analysis, we derive the variance and bias of each method under the small-sigma model. We also develop new estimators, which can be used as reliable initial guesses for other iterative methods. We then compare the performance of each method according to its theoretical accuracy. Not only do our methods outperform other existing non-iterative methods, they are also quite robust against large noise. These methods and their practical performance are assessed by a series of numerical experiments on both synthetic and real data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
223,854
2412.21171
Quantum Error Correction near the Coding Theoretical Bound
Recent advancements in quantum computing have led to the realization of systems comprising tens of reliable logical qubits, constructed from thousands of noisy physical qubits. However, many of the critical applications that quantum computers aim to solve require quantum computations involving millions or more logical qubits. This necessitates highly efficient quantum error correction capable of handling large numbers of logical qubits. Classical error correction theory is well-developed, with low-density parity-check (LDPC) codes achieving performance limits by encoding large classical bits. Despite more than two decades of effort, no efficiently decodable quantum error-correcting code that approaches the hashing bound, which is a fundamental lower bound on quantum capacity, had been discovered. Here, we present quantum error-correcting codes constructed from classical LDPC codes that approach the hashing bound while maintaining linear computational complexity in the number of physical qubits. This result establishes a pathway toward realizing large-scale, fault-tolerant quantum computers. By integrating our quantum error correction scheme with devices capable of managing vast numbers of qubits, the prospect of solving critical real-world problems through quantum computation is brought significantly closer.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
521,473
2302.05397
A Practical Mixed Precision Algorithm for Post-Training Quantization
Neural network quantization is frequently used to optimize model size, latency, and power consumption for on-device deployment of neural networks. In many cases, a target bit-width is set for an entire network, meaning every layer gets quantized to the same number of bits. However, for many networks some layers are significantly more robust to quantization noise than others, leaving an important axis of improvement unused. As many hardware solutions provide multiple different bit-width settings, mixed-precision quantization has emerged as a promising solution to find a better performance-efficiency trade-off than homogeneous quantization. However, most existing mixed-precision algorithms are rather difficult for practitioners to use, as they require access to the training data, have many hyper-parameters to tune, or even depend on end-to-end retraining of the entire model. In this work, we present a simple post-training mixed-precision algorithm that only requires a small unlabeled calibration dataset to automatically select suitable bit-widths for each layer for desirable on-device performance. Our algorithm requires no hyper-parameter tuning, is robust to data variation, and takes into account practical hardware deployment constraints, making it a great candidate for practical use. We experimentally validate our proposed method on several computer vision tasks, natural language processing tasks, and many different networks, and show that we can find mixed-precision networks that provide a better trade-off between accuracy and efficiency than their homogeneous bit-width equivalents.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
345,027
1709.06934
REACT to Cyber Attacks on Power Grids
Motivated by the recent cyber attack on the Ukrainian power grid, we study cyber attacks on power grids that affect both the physical infrastructure and the data at the control center. In particular, we assume that an adversary attacks an area by: (i) remotely disconnecting some lines within the attacked area, and (ii) modifying the information received from the attacked area to mask the line failures and hide the attacked area from the control center. For the latter, we consider two types of attacks: (i) data distortion: which distorts the data by adding powerful noise to the actual data, and (ii) data replay: which replays a locally consistent old data instead of the actual data. We use the DC power flow model and prove that the problem of finding the set of line failures given the phase angles of the nodes outside of the attacked area is strongly NP-hard, even when the attacked area is known. However, we introduce the polynomial time REcurrent Attack Containment and deTection (REACT) Algorithm to approximately detect the attacked area and line failures after a cyber attack. We numerically show that it performs very well in detecting the attacked area, and detecting single, double, and triple line failures in small and large attacked areas.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
81,202
2104.09138
Cures, Treatments and Vaccines for Covid-19: International differences in interest on Twitter
Since the Covid-19 pandemic is a global threat to health that few can fully escape, it has given a unique opportunity to study international reactions to a common problem. Such reactions can be partly obtained from public posts to Twitter, allowing investigations of changes in interest over time. This study analysed English-language Covid-19 tweets mentioning cures, treatments, or vaccines from 1 January 2020 to 8 April 2021, seeking trends and international differences. The results have methodological limitations but show a tendency for countries with a lower human development index score to tweet more about cures, although cures were a minor topic for all countries. Vaccines were discussed about as much as treatments until July 2020, when they generated more interest because of developments in Russia. The November 2020 Pfizer-BioNTech preliminary Phase 3 trial results then generated an immediate and sustained sharp increase, followed by a continuing, roughly linear increase in interest in vaccines until at least April 2021. Against this background, national deviations from the average were triggered by country-specific news about cures, treatments, or vaccines. Nevertheless, interest in vaccines in all countries increased in parallel to some extent, despite substantial international differences in national regulatory approval and availability. The results also highlight that unsubstantiated claims about alternative-medicine remedies gained traction in several countries, apparently posing a threat to public health.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
true
231,132
1912.05909
MAGSAC++, a fast, reliable and accurate robust estimator
A new method for robust estimation, MAGSAC++, is proposed. It introduces a new model quality (scoring) function that does not require the inlier-outlier decision, and a novel marginalization procedure formulated as an iteratively re-weighted least-squares approach. We also propose a new sampler, Progressive NAPSAC, for RANSAC-like robust estimators. Exploiting the fact that nearby points often originate from the same model in real-world data, it finds local structures earlier than global samplers. The progressive transition from local to global sampling does not suffer from the weaknesses of purely localized samplers. On six publicly available real-world datasets for homography and fundamental matrix fitting, MAGSAC++ produces results superior to state-of-the-art robust methods. It is faster, more geometrically accurate and fails less often.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,222
1502.06732
Convergence Analysis using the Edge Laplacian: Robust Consensus of Nonlinear Multi-agent Systems via ISS Method
This study develops an original and innovative matrix representation with respect to the information flow for networked multi-agent systems. To begin with, the general concepts of the edge Laplacian of a digraph are proposed along with its algebraic properties. Benefiting from this novel graph-theoretic tool, we can build a bridge between the consensus problem and the edge agreement problem; we also show that the edge Laplacian sheds new light on solving the leaderless consensus problem. Based on the edge agreement framework, the technical challenges caused by unknown but bounded disturbances and inherently nonlinear dynamics can be well handled. In particular, we design an integrated procedure for a new robust consensus protocol that is based on a blend of algebraic graph theory and the newly developed cyclic-small-gain theorem. Besides, to highlight the intricate relationship between the original graph and the cyclic-small-gain theorem, the concept of the edge-interconnection graph is introduced for the first time. Finally, simulation results are provided to verify the theoretical analysis.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
40,518
2004.11536
Leveraging inter-firm influence in the diffusion of energy efficiency technologies: An agent-based model
Energy efficiency technologies (EETs) are crucial for saving energy and reducing carbon dioxide emissions. However, the diffusion of EETs in small and medium-sized enterprises is rather slow. The literature shows that interactions between innovation adopters and potential adopters have significant impacts on innovation diffusion. Enterprises lack the motivation to share information, and EETs usually lack observability, which suppress the inter-firm influence. Therefore, an information platform, together with proper policies encouraging or forcing enterprises to disclose EET-related information, should help harness inter-firm influence to accelerate EETs' diffusion. To explore whether and how such an information platform affects EETs' diffusion in small and medium-sized enterprises, this study builds an agent-based model to mimic EET diffusion processes. Based on a series of controlled numerical experiments, some counter-intuitive phenomena are discovered and explained. The results show that the information platform is a double-edged sword that notably accelerates EETs' diffusion by approximately 47% but may also boost negative information to diffuse even faster and delay massive adoption of EETs. Increasing network density and the intensity of inter-firm influence are effective in speeding EET diffusion, but their impacts diminish drastically after reaching certain critical values (0.05 and 0.15, respectively) and eventually harm the stability of the system. Hence, the findings imply that EET suppliers should carefully launch their promising but immature products; policies that can reduce the risk perceived by enterprises and efforts to maintain an informative rather than judgmental information platform can prominently mitigate the negative side effects brought by the high fluidity of information.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
173,941
1707.05955
Session-aware Information Embedding for E-commerce Product Recommendation
Most existing recommender systems assume that a user's visiting history can be constantly recorded. However, in recent online services, the user's identity is often unknown and only limited online user behaviors can be used. It is of great importance to model temporal online user behaviors and conduct recommendation for anonymous users. In this paper, we propose a list-wise deep neural network based architecture to model the limited user behaviors within each session. To train the model efficiently, we first design a session embedding method to pre-train a session representation, which incorporates different kinds of user search behaviors such as clicks and views. Based on the learnt session representation, we further propose a list-wise ranking model to generate the recommendation result for each anonymous user session. We conduct quantitative experiments on a recently published dataset from an e-commerce company. The evaluation results validate the effectiveness of the proposed method, which can outperform the state-of-the-art significantly.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
77,326
2406.06195
2D Moore CA with new boundary conditions and its reversibility
In this paper, under certain conditions we consider two-dimensional cellular automata with the Moore neighborhood. Namely, the characterization of 2D linear cellular automata defined by the Moore neighborhood with some mixed boundary conditions over the field $\mathbb{Z}_{p}$ is studied. Furthermore, we investigate the rule matrices of 2D Moore CA under some mixed boundary conditions by applying rotation. Finally, we give the conditions under which the obtained rule matrices for 2D finite CAs are reversible.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
462,474
1903.10157
Down-Scaling with Learned Kernels in Multi-Scale Deep Neural Networks for Non-Uniform Single Image Deblurring
The multi-scale approach has been used in blind image/video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for the multi-scale approach to reduce the spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring, such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while the learned kernels with multiple channels may well preserve necessary details for deblurring tasks. For each scale, we adopt RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on the GoPro dataset by a large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance, since the performance of our network without it decreased by 1.98dB. The same networks trained with the GoPro set were also evaluated on the large-scale Su dataset, and our proposed method yielded 1.15dB better PSNR than Tao's method. Qualitative comparisons on the Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
125,217
2402.12303
UncertaintyTrack: Exploiting Detection and Localization Uncertainty in Multi-Object Tracking
Multi-object tracking (MOT) methods have seen a significant boost in performance recently, due to strong interest from the research community and steadily improving object detection methods. The majority of tracking methods follow the tracking-by-detection (TBD) paradigm, blindly trusting the incoming detections with no sense of their associated localization uncertainty. This lack of uncertainty awareness poses a problem in safety-critical tasks such as autonomous driving, where passengers could be put at risk due to erroneous detections that have propagated to downstream tasks, including MOT. While there are existing works in probabilistic object detection that predict the localization uncertainty around the boxes, no work in 2D MOT for autonomous driving has studied whether these estimates are meaningful enough to be leveraged effectively in object tracking. We introduce UncertaintyTrack, a collection of extensions that can be applied to multiple TBD trackers to account for localization uncertainty estimates from probabilistic object detectors. Experiments on the Berkeley Deep Drive MOT dataset show that the combination of our method and informative uncertainty estimates reduces the number of ID switches by around 19% and improves mMOTA by 2-3%. The source code is available at https://github.com/TRAILab/UncertaintyTrack
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
430,795
cs/0304035
Exploiting Sublanguage and Domain Characteristics in a Bootstrapping Approach to Lexicon and Ontology Creation
It is very costly to build up lexical resources and domain ontologies. Especially when confronted with a new application domain, lexical gaps and poor coverage of domain concepts are a problem for the successful exploitation of natural language document analysis systems that need and exploit such knowledge sources. In this paper we report on ongoing experiments with `bootstrapping techniques' for lexicon and ontology creation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
537,825
1911.01040
Online Debiasing for Adaptively Collected High-dimensional Data with Applications to Time Series Analysis
Adaptive collection of data is commonplace in applications throughout science and engineering. From the point of view of statistical inference, however, adaptive data collection induces memory and correlation in the samples and poses significant challenges. We consider high-dimensional linear regression, where the samples are collected adaptively and the sample size $n$ can be smaller than $p$, the number of covariates. In this setting, there are two distinct sources of bias: the first due to regularization imposed for consistent estimation, e.g. using the LASSO, and the second due to adaptivity in collecting the samples. We propose "online debiasing", a general procedure for estimators such as the LASSO, which addresses both sources of bias. In two concrete contexts $(i)$ time series analysis and $(ii)$ batched data collection, we demonstrate that online debiasing optimally debiases the LASSO estimate when the underlying parameter $\theta_0$ has sparsity of order $o(\sqrt{n}/\log p)$. In this regime, the debiased estimator can be used to compute $p$-values and confidence intervals of optimal size.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
151,992
2302.10391
Low-Complexity Three-Dimensional AOA-Cross Geometric Center Localization Methods via Multi-UAV network
Angle of arrival (AOA) is widely used to locate a wireless signal emitter in unmanned aerial vehicle (UAV) localization. Compared with received signal strength (RSS) and time of arrival (TOA), it has higher accuracy and is not sensitive to the time synchronization of the distributed sensors. However, few works have focused on the three-dimensional (3-D) scenario. Furthermore, although the maximum likelihood estimator (MLE) has relatively high performance, its computational complexity is ultra high, making it hard to employ in practical applications. This paper proposes two multiplane geometric center based methods for 3-D AOA in UAV positioning. The first method, called CIS, can estimate the source position and the angle measurement noise at the same time by seeking the center of an inscribed sphere. First, every sensor measures two angles, the azimuth angle and the elevation angle. Based on these, two planes are constructed. Then, the estimates of the source position and the angle noise are obtained by seeking the center and radius of the corresponding inscribed sphere. Dropping the estimation of the radius yields the second algorithm, called MSD-LS, which cannot estimate the angle noise but has lower computational complexity. Theoretical analysis and simulation results show that the proposed methods approach the Cramer-Rao lower bound (CRLB) and have lower complexity than the MLE.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
346,783
2403.07822
Fusing Climate Data Products using a Spatially Varying Autoencoder
Autoencoders are powerful machine learning models used to compress information from multiple data sources. However, autoencoders, like all artificial neural networks, are often unidentifiable and uninterpretable. This research focuses on creating an identifiable and interpretable autoencoder that can be used to meld and combine climate data products. The proposed autoencoder utilizes a Bayesian statistical framework, allowing for probabilistic interpretations while also varying spatially to capture useful spatial patterns across the various data products. Constraints are placed on the autoencoder as it learns patterns in the data, creating an interpretable consensus that includes the important features from each input. We demonstrate the utility of the autoencoder by combining information from multiple precipitation products in High Mountain Asia.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
437,040
2406.12687
Using LLMs to Aid Annotation and Collection of Clinically-Enriched Data in Bipolar Disorder and Schizophrenia
NLP in mental health has been primarily social media focused. Real-world practitioners also have high caseloads and often work with domain-specific variables for which modern LLMs lack context. We take a dataset made by recruiting 644 participants, including individuals diagnosed with Bipolar Disorder (BD), Schizophrenia (SZ), and Healthy Controls (HC). Participants undertook tasks derived from a standardized mental health instrument, and the resulting data were transcribed and annotated by experts across five clinical variables. This paper demonstrates the application of contemporary language models in sequence-to-sequence tasks to enhance mental health research. Specifically, we illustrate how these models can facilitate the deployment of mental health instruments, data collection, and data annotation with high accuracy and scalability. We show that small models are capable of annotation for domain-specific clinical variables and data collection for mental-health instruments, and perform better than commercial large models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
465,530
1805.00652
MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses
Recent approaches to trajectory forecasting use tracklets to predict the future positions of pedestrians, exploiting Long Short Term Memory (LSTM) architectures. This paper shows that adding vislets, that is, short sequences of head pose estimations, significantly increases trajectory forecasting performance. We then propose to use vislets in a novel framework called MX-LSTM, capturing the interplay between tracklets and vislets thanks to a joint unconstrained optimization of full covariance matrices during the LSTM backpropagation. At the same time, MX-LSTM predicts the future head poses, increasing the standard capabilities of long-term trajectory forecasting approaches. With standard head pose estimators and attention-based social pooling, MX-LSTM sets the new trajectory forecasting state of the art on all the considered datasets (Zara01, Zara02, UCY, and TownCentre), with a dramatic margin when the pedestrians slow down, a case where most forecasting approaches struggle to provide an accurate solution.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
96,481
2208.12986
6D Robotic Assembly Based on RGB-only Object Pose Estimation
Vision-based robotic assembly is a crucial yet challenging task as the interaction with multiple objects requires high levels of precision. In this paper, we propose an integrated 6D robotic system to perceive, grasp, manipulate and assemble blocks with tight tolerances. Aiming to provide an off-the-shelf RGB-only solution, our system is built upon a monocular 6D object pose estimation network trained solely with synthetic images leveraging physically-based rendering. Subsequently, pose-guided 6D transformation along with collision-free assembly is proposed to construct any designed structure with arbitrary initial poses. Our novel 3-axis calibration operation further enhances the precision and robustness by disentangling 6D pose estimation and robotic assembly. Both quantitative and qualitative results demonstrate the effectiveness of our proposed 6D robotic assembly system.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
314,911
1608.05001
An image compression and encryption scheme based on deep learning
The Stacked Auto-Encoder (SAE) is a deep learning algorithm for unsupervised learning. It has multiple layers that project the vector representation of the input data into a lower-dimensional vector space. These projection vectors are dense representations of the input data. As a result, SAE can be used for image compression. Using a chaotic logistic map, the compressed data can further be encrypted. In this study, an application of image compression and encryption is suggested using SAE and the chaotic logistic map. Experiments show that this application is feasible and effective. It can be used for image transmission and image protection on the internet simultaneously.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
59,910
1910.04749
Defending Neural Backdoors via Generative Distribution Modeling
The neural backdoor attack is emerging as a severe security threat to deep learning, while the capability of existing defense methods is limited, especially for complex backdoor triggers. In this work, we explore the space formed by the pixel values of all possible backdoor triggers. An original trigger used by an attacker to build the backdoored model represents only a point in the space. It can then be generalized into a distribution of valid triggers, all of which can influence the backdoored model. Thus, previous methods that model only one point of the trigger distribution are not sufficient. Obtaining the entire trigger distribution, e.g., via generative modeling, is key to an effective defense. However, existing generative modeling techniques for image generation are not applicable to the backdoor scenario, as the trigger distribution is completely unknown. In this work, we propose the max-entropy staircase approximator (MESA), an algorithm for high-dimensional sampling-free generative modeling, and use it to recover the trigger distribution. We also develop a defense technique to remove the triggers from the backdoored model. Our experiments on the CIFAR-10/100 datasets demonstrate the effectiveness of MESA in modeling the trigger distribution and the robustness of the proposed defense method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,846
2010.03658
How Out-of-Distribution Data Hurts Semi-Supervised Learning
Recent semi-supervised learning algorithms have demonstrated greater success with higher overall performance due to better unlabeled data representations. Nonetheless, recent research suggests that the performance of an SSL algorithm can be degraded when the unlabeled set contains out-of-distribution examples (OODs). This work addresses the following question: How do out-of-distribution (OOD) data adversely affect semi-supervised learning algorithms? To answer this question, we investigate the critical causes of OOD's negative effect on SSL algorithms. In particular, we found that 1) certain kinds of OOD data instances that are close to the decision boundary have a more significant impact on performance than those that are further away, and 2) Batch Normalization (BN), a popular module, may degrade rather than improve performance when the unlabeled set contains OODs. In this context, we developed a unified weighted robust SSL framework that can be easily extended to many existing SSL algorithms and improve their robustness against OODs. More specifically, we developed an efficient bi-level optimization algorithm that could accommodate high-order approximations of the objective and scale to multiple inner optimization steps to learn a massive number of weight parameters while outperforming existing low-order approximations of bi-level optimization. Further, we conduct a theoretical study of the impact of faraway OODs in the BN step and propose a weighted batch normalization (WBN) procedure for improved performance. Finally, we discuss the connection between our approach and low-order approximation techniques. Our experiments on synthetic and real-world datasets demonstrate that our proposed approach significantly enhances the robustness of four representative SSL algorithms against OODs, compared to four state-of-the-art robust SSL strategies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
199,475
2401.14636
Efficient Constraint Generation for Stochastic Shortest Path Problems
Current methods for solving Stochastic Shortest Path Problems (SSPs) find states' costs-to-go by applying Bellman backups, where state-of-the-art methods employ heuristics to select states to back up and prune. A fundamental limitation of these algorithms is their need to compute the cost-to-go for every applicable action during each state backup, leading to unnecessary computation for actions identified as sub-optimal. We present new connections between planning and operations research and, using this framework, we address this issue of unnecessary computation by introducing an efficient version of constraint generation for SSPs. This technique allows algorithms to ignore sub-optimal actions and avoid computing their costs-to-go. We also apply our novel technique to iLAO* resulting in a new algorithm, CG-iLAO*. Our experiments show that CG-iLAO* ignores up to 57% of iLAO*'s actions and it solves problems up to 8x and 3x faster than LRTDP and iLAO*.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
424,168
2412.12588
PerSphere: A Comprehensive Framework for Multi-Faceted Perspective Retrieval and Summarization
As online platforms and recommendation algorithms evolve, people are increasingly trapped in echo chambers, leading to biased understandings of various issues. To combat this issue, we have introduced PerSphere, a benchmark designed to facilitate multi-faceted perspective retrieval and summarization, thus breaking free from these information silos. For each query within PerSphere, there are two opposing claims, each supported by distinct, non-overlapping perspectives drawn from one or more documents. Our goal is to accurately summarize these documents, aligning the summaries with the respective claims and their underlying perspectives. This task is structured as a two-step end-to-end pipeline that includes comprehensive document retrieval and multi-faceted summarization. Furthermore, we propose a set of metrics to evaluate the comprehensiveness of the retrieval and summarization content. Experimental results on various counterparts for the pipeline show that recent models struggle with such a complex task. Analysis shows that the main challenge lies in long context and perspective extraction, and we propose a simple but effective multi-agent summarization system, offering a promising solution to enhance performance on PerSphere.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
517,935
2302.10406
Time to Embrace Natural Language Processing (NLP)-based Digital Pathology: Benchmarking NLP- and Convolutional Neural Network-based Deep Learning Pipelines
NLP-based computer vision models, particularly vision transformers, have been shown to outperform CNN models in many imaging tasks. However, most digital pathology artificial-intelligence models are based on CNN architectures, probably owing to a lack of data regarding NLP models for pathology images. In this study, we developed digital pathology pipelines to benchmark the five most recently proposed NLP models (vision transformer (ViT), Swin Transformer, MobileViT, CMT, and Sequencer2D) and four popular CNN models (ResNet18, ResNet50, MobileNetV2, and EfficientNet) to predict biomarkers in colorectal cancer (microsatellite instability, CpG island methylator phenotype, and BRAF mutation). Hematoxylin and eosin-stained whole-slide images from Molecular and Cellular Oncology and The Cancer Genome Atlas were used as training and external validation datasets, respectively. Cross-study external validations revealed that the NLP-based models significantly outperformed the CNN-based models in biomarker prediction tasks, improving the overall prediction and precision up to approximately 10% and 26%, respectively. Notably, compared with existing models in the current literature using large training datasets, our NLP models achieved state-of-the-art predictions for all three biomarkers using a relatively small training dataset, suggesting that large training datasets are not a prerequisite for NLP models or transformers, and NLP may be more suitable for clinical studies in which small training datasets are commonly collected. The superior performance of Sequencer2D suggests that further research and innovation on both transformer and bidirectional long short-term memory architectures are warranted in the field of digital pathology. NLP models can replace classic CNN architectures and become the new workhorse backbone in the field of digital pathology.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
346,789
1804.10418
Exploiting the Superposition Property of Wireless Communication For Average Consensus Problems in Multi-Agent Systems
This paper studies the system stability and performance of multi-agent systems in the context of consensus problems over wireless multiple-access channels (MAC). We propose a consensus algorithm that exploits the broadcast property of the wireless channel. Therefore, the algorithm is expected to exhibit fast convergence and high efficiency in terms of the usage of scarce wireless resources. The designed algorithm shows robustness against variations in the channel, and consensus is always reached. However, the consensus value will depend on these variations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
96,157
2303.06380
Semi-supervised Hand Appearance Recovery via Structure Disentanglement and Dual Adversarial Discrimination
Enormous numbers of hand images with reliable annotations are collected through marker-based MoCap. Unfortunately, degradations caused by markers limit their application in hand appearance reconstruction. A clear insight for appearance recovery is an image-to-image translation trained with unpaired data. However, most frameworks fail because there exists structure inconsistency from a degraded hand to a bare one. The core of our approach is to first disentangle the bare hand structure from those degraded images and then wrap the appearance onto this structure with a dual adversarial discrimination (DAD) scheme. Both modules take full advantage of the semi-supervised learning paradigm: the structure disentanglement benefits from the modeling ability of ViT, and the translator is enhanced by the dual discrimination on both translation processes and translation results. Comprehensive evaluations have been conducted to prove that our framework can robustly recover photo-realistic hand appearance from diverse marker-contained and even object-occluded datasets. It provides a novel avenue to acquire bare hand appearance data for other downstream learning problems. The code will be publicly available at https://www.yangangwang.com
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
350,825
2207.11228
Classifying Crop Types using Gaussian Bayesian Models and Neural Networks on GHISACONUS USGS data from NASA Hyperspectral Satellite Imagery
Hyperspectral Imaging is a type of digital imaging in which each pixel typically contains hundreds of wavelengths of light, providing spectroscopic information about the materials present in the pixel. In this paper we provide classification methods for determining crop type in the USGS GHISACONUS data, which contains around 7,000 pixel spectra from the five major U.S. agricultural crops (winter wheat, rice, corn, soybeans, and cotton) collected by the NASA Hyperion satellite, and includes the spectrum, geolocation, crop type, and stage of growth for each pixel. We apply standard LDA and QDA as well as custom Bayesian versions that compute the joint probability of crop type and stage, and then the marginal probability for crop type, outperforming the non-Bayesian methods. We also test a single layer neural network with dropout on the data, which performs comparably to LDA and QDA but not as well as the Bayesian methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
309,556
1106.5122
Clustering with Prototype Extraction for Census Data Analysis
Not long ago, primary census data became available to the public. This opened qualitatively new perspectives not only for researchers in demography and sociology, but also for anyone who engages with processes occurring in society. In this paper the authors propose using Data Mining methods to search for hidden patterns in census data. A novel clustering-based technique is described as well. It allows determining the factors which influence people's behavior, in particular the decision-making process (for example, a decision whether to have a baby or not). The proposed technique is based on clustering a set of respondents for whom a certain event has already happened (for instance, a baby was born), and discovering clusters' prototypes from a set of respondents for whom this event hasn't occurred yet. By analyzing the characteristics of the clusters and their prototypes, it is possible to identify which factors influence the decision-making process. The authors also provide an experimental example of the described approach.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
false
10,991
2312.13096
In Generative AI we Trust: Can Chatbots Effectively Verify Political Information?
This article presents a comparative analysis of the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat, recently rebranded to Microsoft Copilot, to detect veracity of political information. We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+ related debates. We compare how the chatbots perform in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore the ability of chatbots to evaluate statements according to political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source bias which we model by attributing specific claims to various political and social actors. The results show high performance of ChatGPT for the baseline veracity evaluation task, with 72 percent of the cases evaluated correctly on average across languages without pre-training. Bing Chat performed worse with a 67 percent accuracy. We observe significant disparities in how chatbots evaluate prompts in high- and low-resource languages and how they adapt their evaluations to political communication concepts with ChatGPT providing more nuanced outputs than Bing Chat. Finally, we find that for some veracity detection-related tasks, the performance of chatbots varied depending on the topic of the statement or the source to which it is attributed. These findings highlight the potential of LLM-based chatbots in tackling different forms of false information in online environments, but also point to the substantial variation in terms of how such potential is realized due to specific factors, such as language of the prompt or the topic.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
417,205
2302.10578
Don't guess what's true: choose what's optimal. A probability transducer for machine-learning classifiers
In fields such as medicine and drug discovery, the ultimate goal of a classification is not to guess a class, but to choose the optimal course of action among a set of possible ones, usually not in one-one correspondence with the set of classes. This decision-theoretic problem requires sensible probabilities for the classes. Probabilities conditional on the features are computationally almost impossible to find in many important cases. The main idea of the present work is to calculate probabilities conditional not on the features, but on the trained classifier's output. This calculation is cheap, needs to be made only once, and provides an output-to-probability "transducer" that can be applied to all future outputs of the classifier. In conjunction with problem-dependent utilities, the probabilities of the transducer allow us to find the optimal choice among the classes or among a set of more general decisions, by means of expected-utility maximization. This idea is demonstrated in a simplified drug-discovery problem with a highly imbalanced dataset. The transducer and utility maximization together always lead to improved results, sometimes close to theoretical maximum, for all sets of problem-dependent utilities. The one-time-only calculation of the transducer also provides, automatically: (i) a quantification of the uncertainty about the transducer itself; (ii) the expected utility of the augmented algorithm (including its uncertainty), which can be used for algorithm selection; (iii) the possibility of using the algorithm in a "generative mode", useful if the training dataset is biased.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
346,857
2305.15102
Analysis of modular CMA-ES on strict box-constrained problems in the SBOX-COST benchmarking suite
Box-constraints limit the domain of decision variables and are common in real-world optimization problems, for example, due to physical, natural or spatial limitations. Consequently, solutions violating a box-constraint may not be evaluable. This assumption is often ignored in the literature, e.g., existing benchmark suites, such as COCO/BBOB, allow the optimizer to evaluate infeasible solutions. This paper presents an initial study on the strict-box-constrained benchmarking suite (SBOX-COST), which is a variant of the well-known BBOB benchmark suite that enforces box-constraints by returning an invalid evaluation value for infeasible solutions. Specifically, we want to understand the performance difference between BBOB and SBOX-COST as a function of two initialization methods and six constraint-handling strategies all tested with modular CMA-ES. We find that, contrary to what may be expected, handling box-constraints by saturation is not always better than not handling them at all. However, across all BBOB functions, saturation is better than not handling, and the difference increases with the number of dimensions. Strictly enforcing box-constraints also has a clear negative effect on the performance of classical CMA-ES (with uniform random initialization and no constraint handling), especially as problem dimensionality increases.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
367,457
2201.08831
Reliable Detection of Doppelg\"angers based on Deep Face Representations
Doppelg\"angers (or lookalikes) usually yield an increased probability of false matches in a facial recognition system, as opposed to random face image pairs selected for non-mated comparison trials. In this work, we assess the impact of doppelg\"angers on the HDA Doppelg\"anger and Disguised Faces in The Wild databases using a state-of-the-art face recognition system. It is found that doppelg\"anger image pairs yield very high similarity scores resulting in a significant increase of false match rates. Further, we propose a doppelg\"anger detection method which distinguishes doppelg\"angers from mated comparison trials by analysing differences in deep representations obtained from face image pairs. The proposed detection system employs a machine learning-based classifier, which is trained with generated doppelg\"anger image pairs utilising face morphing techniques. Experimental evaluations conducted on the HDA Doppelg\"anger and Look-Alike Face databases reveal a detection equal error rate of approximately 2.7% for the task of separating mated authentication attempts from doppelg\"angers.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
276,462
1711.09708
Classifier Selection with Permutation Tests
This work presents a content-based recommender system for machine learning classifier algorithms. Given a new data set, a recommendation of what classifier is likely to perform best is made based on classifier performance over similar known data sets. This similarity is measured according to a data set characterization that includes several state-of-the-art metrics taking into account physical structure, statistics, and information theory. A novelty with respect to prior work is the use of a robust approach based on permutation tests to directly assess whether a given learning algorithm is able to exploit the attributes in a data set to predict class labels, and compare it to the more commonly used F-score metric for evaluating classifier performance. To evaluate our approach, we have conducted an extensive experimentation including 8 of the main machine learning classification methods with varying configurations and 65 binary data sets, leading to over 2331 experiments. Our results show that using the information from the permutation test clearly improves the quality of the recommendations.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
85,462
1610.03807
Question Generation from a Knowledge Base with Web Exploration
Question generation from a knowledge base (KB) is the task of generating questions related to the domain of the input KB. We propose a system for generating fluent and natural questions from a KB, which significantly reduces the human effort by leveraging massive web resources. In more detail, a seed question set is first generated by applying a small number of hand-crafted templates on the input KB, then more questions are retrieved by iteratively forming already obtained questions as search queries into a standard search engine, before finally questions are selected by estimating their fluency and domain relevance. Evaluated by human graders on 500 randomly selected triples from Freebase, questions generated by our system are judged to be more fluent than those of \newcite{serban-EtAl:2016:P16-1} by human graders.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
62,303
2003.00070
Inexpensive surface electromyography sleeve with consistent electrode placement enables dexterous and stable prosthetic control through deep learning
The dexterity of conventional myoelectric prostheses is limited in part by the small datasets used to train the control algorithms. Variations in surface electrode positioning make it difficult to collect consistent data and to estimate motor intent reliably over time. To address these challenges, we developed an inexpensive, easy-to-don sleeve that can record robust and repeatable surface electromyography from 32 embedded monopolar electrodes. Embedded grommets are used to consistently align the sleeve with natural skin markings (e.g., moles, freckles, scars). The sleeve can be manufactured in a few hours for less than $60. Data from seven intact participants show the sleeve provides a signal-to-noise ratio of 14, a don-time under 11 seconds, and sub-centimeter precision for electrode placement. Furthermore, in a case study with one intact participant, we use the sleeve to demonstrate that neural networks can provide simultaneous and proportional control of six degrees of freedom, even 263 days after initial algorithm training. We also highlight that consistent recordings, accumulated over time to establish a large dataset, significantly improve dexterity. These results suggest that deep learning with a 74-layer neural network can substantially improve the dexterity and stability of myoelectric prosthetic control, and that deep-learning techniques can be readily instantiated and further validated through inexpensive sleeves/sockets with consistent recording locations.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
166,185
2110.06596
Logic Constraints to Feature Importances
In recent years, Artificial Intelligence (AI) algorithms have been proven to outperform traditional statistical methods in terms of predictivity, especially when a large amount of data was available. Nevertheless, the "black box" nature of AI models is often a limit for a reliable application in high-stakes fields like diagnostic techniques, autonomous guide, etc. Recent works have shown that an adequate level of interpretability could enforce the more general concept of model trustworthiness. The basic idea of this paper is to exploit the human prior knowledge of the features' importance for a specific task, in order to coherently aid the phase of the model's fitting. This sort of "weighted" AI is obtained by extending the empirical loss with a regularization term encouraging the importance of the features to follow predetermined constraints. This procedure relies on local methods for the feature importance computation, e.g. LRP, LIME, etc. that are the link between the model weights to be optimized and the user-defined constraints on feature importance. In the fairness area, promising experimental results have been obtained for the Adult dataset. Many other possible applications of this model agnostic theoretical framework are described.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,678
1908.06334
Energy-Efficient Proactive Caching for Fog Computing with Correlated Task Arrivals
With the proliferation of latency-critical applications, fog-radio network (FRAN) has been envisioned as a paradigm shift enabling distributed deployment of cloud-clone facilities at the network edge. In this paper, we consider proactive caching for a one-user one-access point (AP) fog computing system over a finite time horizon, in which consecutive tasks of the same type of application are temporally correlated. Under the assumption of predictable length of the task-input bits, we formulate a long-term weighted-sum energy minimization problem with three-slot correlation to jointly optimize computation offloading policies and caching decisions subject to stringent per-slot deadline constraints. The formulated problem is hard to solve due to the mixed-integer non-convexity. To tackle this challenge, first, we assume that task-related information are perfectly known {\em a priori}, and provide an offline solution leveraging the technique of semi-definite relaxation (SDR), thereby serving as a theoretical upper bound. Next, based on the offline solution, we propose a sliding-window based online algorithm under arbitrarily distributed prediction error. Finally, the advantage of computation caching as well as of the proposed algorithm is verified by numerical examples in comparison with several benchmarks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
141,982
2301.00088
A Transient Electrical-Thermal Co-Simulation Method with LTS for Multiscale Structures
In this article, an efficient transient electrical-thermal co-simulation method based on the finite element method (FEM) and the discontinuous Galerkin time-domain (DGTD) method is developed for electrical-thermal coupling analysis of multiscale structures. Two independent meshes are adopted by the steady electrical analysis and the transient thermal simulation to avoid redundant overhead. In order to enhance the feasibility and efficiency of solving multiscale and sophisticated structures, a local time stepping (LTS) technique coupled with an interpolation method is incorporated into the co-simulation method. Several numerical examples from simple structures to complex multiscale PDN structures are carried out to demonstrate the accuracy and efficiency of the proposed method by comparison with COMSOL. Finally, two practical numerical examples are considered to confirm the performance of the proposed method for complex and multiscale structures.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
338,793
2406.12266
Towards a Client-Centered Assessment of LLM Therapists by Client Simulation
Although there is a growing belief that LLMs can be used as therapists, exploring LLMs' capabilities and inefficacy, particularly from the client's perspective, is limited. This work focuses on a client-centered assessment of LLM therapists with the involvement of simulated clients, a standard approach in clinical medical education. However, there are two challenges when applying the approach to assess LLM therapists at scale. Ethically, asking humans to frequently mimic clients and exposing them to potentially harmful LLM outputs can be risky and unsafe. Technically, it can be difficult to consistently compare the performances of different LLM therapists interacting with the same client. To this end, we adopt LLMs to simulate clients and propose ClientCAST, a client-centered approach to assessing LLM therapists by client simulation. Specifically, the simulated client is utilized to interact with LLM therapists and complete questionnaires related to the interaction. Based on the questionnaire results, we assess LLM therapists from three client-centered aspects: session outcome, therapeutic alliance, and self-reported feelings. We conduct experiments to examine the reliability of ClientCAST and use it to evaluate LLM therapists implemented by Claude-3, GPT-3.5, LLaMA3-70B, and Mixtral 8*7B. Codes are released at https://github.com/wangjs9/ClientCAST.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
465,309
2210.02829
Melody Infilling with User-Provided Structural Context
This paper proposes a novel Transformer-based model for music score infilling, to generate a music passage that fills in the gap between given past and future contexts. While existing infilling approaches can generate a passage that connects smoothly locally with the given contexts, they do not take into account the musical form or structure of the music and may therefore generate overly smooth results. To address this issue, we propose a structure-aware conditioning approach that employs a novel attention-selecting module to supply user-provided structure-related information to the Transformer for infilling. With both objective and subjective evaluations, we show that the proposed model can harness the structural information effectively and generate melodies in the style of pop of higher quality than the two existing structure-agnostic infilling models.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,790
2209.08810
LMBAO: A Landmark Map for Bundle Adjustment Odometry in LiDAR SLAM
LiDAR odometry is one of the essential parts of LiDAR simultaneous localization and mapping (SLAM). However, existing LiDAR odometry tends to match a new scan simply iteratively with previous fixed-pose scans, gradually accumulating errors. Furthermore, as an effective joint optimization mechanism, bundle adjustment (BA) cannot be directly introduced into real-time odometry due to the intensive computation of large-scale global landmarks. Therefore, this letter designs a new strategy named a landmark map for bundle adjustment odometry (LMBAO) in LiDAR SLAM to solve these problems. First, BA-based odometry is further developed with an active landmark maintenance strategy for a more accurate local registration and avoiding cumulative errors. Specifically, this paper keeps entire stable landmarks on the map instead of just their feature points in the sliding window and deletes the landmarks according to their active grade. Next, the sliding window length is reduced, and marginalization is performed to retain the scans outside the window but corresponding to active landmarks on the map, greatly simplifying the computation and improving the real-time properties. In addition, experiments on three challenging datasets show that our algorithm achieves real-time performance in outdoor driving and outperforms state-of-the-art LiDAR SLAM algorithms, including Lego-LOAM and VLOM.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
318,279
2208.03792
Domain Randomization-Enhanced Depth Simulation and Restoration for Perceiving and Grasping Specular and Transparent Objects
Commercial depth sensors usually generate noisy and missing depths, especially on specular and transparent objects, which poses critical issues to downstream depth or point cloud-based tasks. To mitigate this problem, we propose a powerful RGBD fusion network, SwinDRNet, for depth restoration. We further propose Domain Randomization-Enhanced Depth Simulation (DREDS) approach to simulate an active stereo depth system using physically based rendering and generate a large-scale synthetic dataset that contains 130K photorealistic RGB images along with their simulated depths carrying realistic sensor noises. To evaluate depth restoration methods, we also curate a real-world dataset, namely STD, that captures 30 cluttered scenes composed of 50 objects with different materials from specular, transparent, to diffuse. Experiments demonstrate that the proposed DREDS dataset bridges the sim-to-real domain gap such that, trained on DREDS, our SwinDRNet can seamlessly generalize to other real depth datasets, e.g. ClearGrasp, and outperform the competing methods on depth restoration with a real-time speed. We further show that our depth restoration effectively boosts the performance of downstream tasks, including category-level pose estimation and grasping tasks. Our data and code are available at https://github.com/PKU-EPIC/DREDS
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
311,900
1011.0187
A Distributed AI Aided 3D Domino Game
In this article a turn-based game played on four computers connected via a network is investigated. There are three computers with natural intelligence and one with artificial intelligence. The game table is seen from each player's own viewpoint on all players' monitors. Domino pieces are three dimensional. For the distributed system the TCP/IP protocol is used. In order to render the 3D image, Microsoft XNA technology is applied. Domino 101 is a nondeterministic game, that is, the result of the game depends on the initial random distribution of the pieces. The number of distributions is equal to the multiplication of the following combinations: . Moreover, in this game, which is played by four people, the players are divided into two pairs. Accordingly, we cannot predict how a player uses the dominoes, that is, whether according to the dominoes of his/her partner or according to his/her own dominoes. The fact that the natural intelligence can be a player of any level affects the outcome. These reasons make it difficult to develop an AI. In the article four levels of AI are developed. The AI at the first level is equivalent to the intelligence of a child who knows the rules of the game and recognizes the numbers. The AI at this level plays if it has any domino suitable to play, or says pass. In most of the games which can be played on the internet, the AI does the same. But the AI at the last level is a master player, and it can develop itself according to its competitors' levels.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
8,082
0803.2460
Upper Bound on Error Exponent of Regular LDPC Codes Transmitted over the BEC
The error performance of the ensemble of typical LDPC codes transmitted over the binary erasure channel (BEC) is analyzed. In the past, lower bounds on the error exponents were derived. In this paper a probabilistic upper bound on this error exponent is derived. This bound holds with some confidence level.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,451
1402.7340
Hierarchical community structure in complex (social) networks
The investigation of community structure in networks is a task of great importance in many disciplines, namely physics, sociology, biology and computer science where systems are often represented as graphs. One of the challenges is to find local communities from a local viewpoint in a graph without global information in order to reproduce the subjective hierarchical vision for each vertex. In this paper we present the improvement of an information dynamics algorithm in which the label propagation of nodes is based on the Markovian flow of information in the network under cognitive-inspired constraints \cite{Massaro2012}. In this framework we have introduced two more complex heuristics that allow the algorithm to detect the multi-resolution hierarchical community structure of networks from a source vertex or communities adopting fixed values of model's parameters. Experimental results show that the proposed methods are efficient and well-behaved in both real-world and synthetic networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,248
2104.07666
A Note on Data Simulations for Voting by Evaluation
Voting rules based on evaluation inputs rather than preference orders have been recently proposed, like majority judgement, range voting or approval voting. Traditionally, probabilistic analysis of voting rules supposes the use of simulation models to generate preference data, like the Impartial Culture (IC) or Impartial and Anonymous Culture (IAC) models. But these simulation models are not suitable for the analysis of evaluation-based voting rules as they generate preference orders instead of the needed evaluations. We propose in this paper several simulation models for generating evaluation-based voting inputs. These models, inspired by classical ones, are defined, tested and compared for recommendation purposes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
230,506
1708.03615
Unsupervised Incremental Learning of Deep Descriptors From Video Streams
We present a novel unsupervised method for face identity learning from video sequences. The method exploits the ResNet deep network for face detection and VGGface fc7 face descriptors together with a smart learning mechanism that exploits the temporal coherence of visual data in video streams. We present a novel feature matching solution based on Reverse Nearest Neighbour and a feature forgetting strategy that supports incremental learning with memory size control, while time progresses. It is shown that the proposed learning procedure is asymptotically stable and can be effectively applied to relevant applications like multiple face tracking.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
78,795
2210.07802
Object-Category Aware Reinforcement Learning
Object-oriented reinforcement learning (OORL) is a promising way to improve the sample efficiency and generalization ability over standard RL. Recent works that try to solve OORL tasks without additional feature engineering mainly focus on learning the object representations and then solving tasks via reasoning based on these object representations. However, none of these works tries to explicitly model the inherent similarity between different object instances of the same category. Objects of the same category should share similar functionalities; therefore, the category is the most critical property of an object. Following this insight, we propose a novel framework named Object-Category Aware Reinforcement Learning (OCARL), which utilizes the category information of objects to facilitate both perception and reasoning. OCARL consists of three parts: (1) Category-Aware Unsupervised Object Discovery (UOD), which discovers the objects as well as their corresponding categories; (2) Object-Category Aware Perception, which encodes the category information and is also robust to the incompleteness of (1) at the same time; (3) Object-Centric Modular Reasoning, which adopts multiple independent and object-category-specific networks when reasoning based on objects. Our experiments show that OCARL can improve both the sample efficiency and generalization in the OORL domain.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
323,874
2101.03609
Neurocognitive Informatics Manifesto
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper examples of neurocognitive inspirations and promising directions in this area are given.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
214,950
2502.02997
Assessing Research Impact in Indian Conference Proceedings: Insights from Collaboration and Citations
Conferences serve as a crucial avenue for scientific communication. However, the increase in conferences and the subsequent publication of proceedings have prompted inquiries regarding the research quality being showcased at such events. This investigation delves into the conference publications indexed by Springer's Lecture Notes in Networks and Systems Series. Among the 570 international conferences held worldwide in this series, 177 were exclusively hosted in India. These 177 conferences collectively published 11,066 papers as conference proceedings. All these publications, along with conference details, were sourced from the Scopus database. The study aims to evaluate the research impact of these conference proceedings and identify the primary contributors. The results reveal a downward trend in the average number of citations per year. The collective average citation for all publications is 1.01. Papers co-authored by Indian and international authors (5.6%) exhibit a higher average impact of 1.44, in contrast to those authored solely by Indian authors (84.9%), which have an average impact of 0.97. Notably, Indian-collaborated papers, among the largest contributors, predominantly originate from private colleges and universities. Only 19% of papers exhibit collaboration with institutes of different prestige, yet their impact is considerably higher as compared to collaboration with institutes of similar prestige. This study highlights the importance of improving research quality in academic forums.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
530,557
2408.09615
The First Competition on Resource-Limited Infrared Small Target Detection Challenge: Methods and Results
In this paper, we briefly summarize the first competition on resource-limited infrared small target detection (namely, LimitIRSTD). This competition has two tracks, including weakly-supervised infrared small target detection (Track 1) and lightweight infrared small target detection (Track 2). 46 and 60 teams successfully registered and took part in Track 1 and Track 2, respectively. The top-performing methods and their results in each track are described in detail. This competition inspires the community to explore the tough problems in the application of infrared small target detection, and ultimately promotes the deployment of this technology under limited resources.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
481,511
1912.07582
A Nonlinear Regression Method for Composite Protection Modeling of Induction Motor Loads
Protection equipment is used to prevent damage to induction motor loads by isolating them from power systems in the event of severe faults. Modeling the response of induction motor loads and their protection is vital for power system planning and operation, especially in understanding system's dynamic performance and stability after a fault occurs. Induction motors are usually equipped with several types of protection with different operation mechanisms, making it challenging to develop adequate yet not overly complex protection models and determine their parameters for aggregate induction motor models. This paper proposes an optimization-based nonlinear regression framework to determine protection model parameters for aggregate induction motor loads in commercial buildings. Using a mathematical abstraction, the task of determining a suitable set of parameters for the protection model in composite load models is formulated as a nonlinear regression problem. Numerical examples are provided to illustrate the application of the framework. Sensitivity studies are presented to demonstrate the impact of lack of available motor load information on the accuracy of the protection models.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
157,640
2404.12786
Unlocking the Potential of Local CSI in Cell-Free Networks with Channel Aging and Fronthaul Delays
It is generally believed that downlink cell-free networks perform best under centralized implementations where the local channel state information (CSI) acquired by the access-points (AP) is forwarded to one or more central processing units (CPU) for the computation of the joint precoders based on global CSI. However, mostly due to limited fronthaul capabilities, this procedure incurs some delay that may lead to partially outdated precoding decisions and hence performance degradation. In some scenarios, this may even lead to worse performance than distributed implementations where the precoders are locally computed by the APs based on partial yet timely local CSI. To address this issue, this study considers the problem of robust precoding design merging the benefits of timely local CSI and delayed global CSI. As main result, we provide a novel distributed precoding design based on the recently proposed team minimum mean-square error method. As a byproduct, we also obtain novel insights related to the AP-CPU functional split problem. Our main conclusion, corroborated by simulations, is that the opportunity of performing some local precoding computations at the APs should not be neglected, even in centralized implementations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
448,034
1908.03443
Tracking Temporal Evolution of Network Activity for Botnet Detection
Botnets are becoming increasingly prevalent as the primary enabling technology in a variety of malicious campaigns such as email spam, click fraud, distributed denial-of-service (DDoS) attacks, and cryptocurrency mining. Botnet technology has continued to evolve rapidly making detection a very challenging problem. There is a fundamental need for robust detection methods that are insensitive to characteristics of a specific botnet and are generalizable across different botnet types. We propose a novel supervised approach to detect malicious botnet hosts by tracking a host's network activity over time using a Long Short-Term Memory (LSTM) based neural network architecture. We build a prototype to demonstrate the feasibility of our approach, evaluate it on the CTU-13 dataset, and compare our performance against existing detection methods. We show that our approach results in a more generalizable, botnet-agnostic detection methodology, is amenable to real-time implementation, and performs well compared to existing approaches, with an overall accuracy score of 96.2%.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
141,239
2306.00009
Graph Exploration Matters: Improving both individual-level and system-level diversity in WeChat Feed Recommender
There are roughly three stages in real industrial recommendation systems: candidate generation (retrieval), ranking, and reranking. Individual-level diversity and system-level diversity are both important for industrial recommender systems. The former focuses on each single user's experience, while the latter focuses on the difference among users. Graph-based retrieval strategies are inevitably hijacked by heavy users and popular items, leading to the convergence of candidates for users and the lack of system-level diversity. Meanwhile, in the reranking phase, Determinantal Point Process (DPP) is deployed to increase individual-level diversity. Heavily relying on the semantic information of items, DPP suffers from clickbait and inaccurate attributes. Besides, most studies only focus on one of the two levels of diversity, and ignore the mutual influence among different stages in real recommender systems. We argue that individual-level diversity and system-level diversity should be viewed as an integrated problem, and we provide an efficient and deployable solution for web-scale recommenders. Generally, we propose to employ the retrieval graph information in diversity-based reranking, by which to weaken the hidden similarity of items exposed to users, and consequently gain more graph explorations to improve the system-level diversity. Besides, we argue that users' propensity for diversity changes over time in content feed recommendation. Therefore, with the explored graph, we also propose to capture the user's real-time personalized propensity to diversity. We implement and deploy the combined system in WeChat App's Top Stories used by hundreds of millions of users. Offline simulations and online A/B tests show our solution can effectively improve both user engagement and system revenue.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
369,825
2305.02960
Majorizing Measures, Codes, and Information
The majorizing measure theorem of Fernique and Talagrand is a fundamental result in the theory of random processes. It relates the boundedness of random processes indexed by elements of a metric space to complexity measures arising from certain multiscale combinatorial structures, such as packing and covering trees. This paper builds on the ideas first outlined in a little-noticed preprint of Andreas Maurer to present an information-theoretic perspective on the majorizing measure theorem, according to which the boundedness of random processes is phrased in terms of the existence of efficient variable-length codes for the elements of the indexing metric space.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
362,225
2109.13105
Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution
It is often posited that more predictable parts of a speaker's meaning tend to be made less explicit, for instance using shorter, less informative words. Studying these dynamics in the domain of referring expressions has proven difficult, with existing studies, both psycholinguistic and corpus-based, providing contradictory results. We test the hypothesis that speakers produce less informative referring expressions (e.g., pronouns vs. full noun phrases) when the context is more informative about the referent, using novel computational estimates of referent predictability. We obtain these estimates training an existing coreference resolution system for English on a new task, masked coreference resolution, giving us a probability distribution over referents that is conditioned on the context but not the referring expression. The resulting system retains standard coreference resolution performance while yielding a better estimate of human-derived referent predictability than previous attempts. A statistical analysis of the relationship between model output and mention form supports the hypothesis that predictability affects the form of a mention, both its morphosyntactic type and its length.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
257,531
1205.5863
Construction of LDGM lattices
Low density generator matrix (LDGM) codes have an acceptable performance under iterative decoding algorithms. This idea is used to construct a class of lattices with relatively good performance and low encoding and decoding complexity. To construct such lattices, Construction D is applied to a set of generator vectors of a class of LDGM codes. Bounds on the minimum distance and the coding gain of the corresponding lattices and a corollary for the cross sections and projections of these lattices are provided. The progressive edge growth (PEG) algorithm is used to construct a class of binary codes to generate the corresponding lattice. Simulation results confirm the acceptable performance of this class of lattices.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
16,185
2305.12647
Reflective Linguistic Programming (RLP): A Stepping Stone in Socially-Aware AGI (SocialAGI)
This paper presents Reflective Linguistic Programming (RLP), a unique approach to conversational AI that emphasizes self-awareness and strategic planning. RLP encourages models to introspect on their own predefined personality traits, emotional responses to incoming messages, and planned strategies, enabling contextually rich, coherent, and engaging interactions. A striking illustration of RLP's potential involves a toy example, an AI persona with an adversarial orientation, a demon named `Bogus' inspired by the children's fairy tale Hansel & Gretel. Bogus exhibits sophisticated behaviors, such as strategic deception and sensitivity to user discomfort, that spontaneously arise from the model's introspection and strategic planning. These behaviors are not pre-programmed or prompted, but emerge as a result of the model's advanced cognitive modeling. The potential applications of RLP in socially-aware AGI (Social AGI) are vast, from nuanced negotiations and mental health support systems to the creation of diverse and dynamic AI personas. Our exploration of deception serves as a stepping stone towards a new frontier in AGI, one filled with opportunities for advanced cognitive modeling and the creation of truly human `digital souls'.
true
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
366,107
2107.05966
Secure Short-Packet Communications at the Physical Layer for 5G and Beyond
Short-packet communication is a key technology to support two emerging application scenarios in 5G and beyond 5G, massive machine type communication (mMTC) and ultra-reliable low latency communication (uRLLC), which are introduced to satisfy the broader communication requirements of potential applications such as the internet of vehicles and industrial internet of things (IoT). The sharp increase in privacy data in various IoT applications has made security issues more prominent. The typical upper-layer encryption mechanism cannot fully address the security challenge considering the resource restriction of IoT terminals. In this article, we investigate secure short-packet communication from the perspective of physical layer security (PLS), which can be regarded as a promising security solution in 6G. Specifically, the state-of-the-art development of the fundamental information theory of secure short-packet communications and the corresponding performance evaluation criteria in fading channels are summarized. Then we review recent works that investigate short-packet communication systems (CSs) in different communication scenarios or with different security strategies from the perspective of PLS. Finally, we give future research directions and challenges.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
245,942
2411.07072
An Interpretable X-ray Style Transfer via Trainable Local Laplacian Filter
Radiologists have preferred visual impressions or 'styles' of X-ray images that are manually adjusted to their needs to support their diagnostic performance. In this work, we propose an automatic and interpretable X-ray style transfer by introducing a trainable version of the Local Laplacian Filter (LLF). From the shape of the LLF's optimized remap function, the characteristics of the style transfer can be inferred and reliability of the algorithm can be ensured. Moreover, we enable the LLF to capture complex X-ray style features by replacing the remap function with a Multi-Layer Perceptron (MLP) and adding a trainable normalization layer. We demonstrate the effectiveness of the proposed method by transforming unprocessed mammographic X-ray images into images that match the style of target mammograms and achieve a Structural Similarity Index (SSIM) of 0.94 compared to 0.82 of the baseline LLF style transfer method from Aubry et al.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
507,379
2309.10513
Uncertainty Estimation in Instance Segmentation with Star-convex Shapes
Instance segmentation has witnessed promising advancements through deep neural network-based algorithms. However, these models often exhibit incorrect predictions with unwarranted confidence levels. Consequently, evaluating prediction uncertainty becomes critical for informed decision-making. Existing methods primarily focus on quantifying uncertainty in classification or regression tasks, lacking emphasis on instance segmentation. Our research addresses the challenge of estimating spatial certainty associated with the location of instances with star-convex shapes. Two distinct clustering approaches are evaluated which compute spatial and fractional certainty per instance employing samples by the Monte-Carlo Dropout or Deep Ensemble technique. Our study demonstrates that combining spatial and fractional certainty scores yields improved calibrated estimation over individual certainty scores. Notably, our experimental results show that the Deep Ensemble technique alongside our novel radial clustering approach proves to be an effective strategy. Our findings emphasize the significance of evaluating the calibration of estimated certainties for model reliability and decision-making.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
393,040
2208.09778
The Development of a Labelled te reo M\=aori-English Bilingual Database for Language Technology
Te reo M\=aori (referred to as M\=aori), New Zealand's indigenous language, is under-resourced in language technology. M\=aori speakers are bilingual, where M\=aori is code-switched with English. Unfortunately, there are minimal resources available for M\=aori language technology, language detection and code-switch detection between M\=aori-English pair. Both English and M\=aori use Roman-derived orthography making rule-based systems for detecting language and code-switching restrictive. Most M\=aori language detection is done manually by language experts. This research builds a M\=aori-English bilingual database of 66,016,807 words with word-level language annotation. The New Zealand Parliament Hansard debates reports were used to build the database. The language labels are assigned using language-specific rules and expert manual annotations. Words with the same spelling, but different meanings, exist for M\=aori and English. These words could not be categorised as M\=aori or English based on word-level language rules. Hence, manual annotations were necessary. An analysis reporting the various aspects of the database such as metadata, year-wise analysis, frequently occurring words, sentence length and N-grams is also reported. The database developed here is a valuable tool for future language and speech technology development for Aotearoa New Zealand. The methodology followed to label the database can also be followed by other low-resourced language pairs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
313,828
2310.04612
A Topological Perspective on Demystifying GNN-Based Link Prediction Performance
Graph Neural Networks (GNNs) have shown great promise in learning node embeddings for link prediction (LP). While numerous studies aim to improve the overall LP performance of GNNs, none have explored its varying performance across different nodes and its underlying reasons. To this end, we aim to demystify which nodes will perform better from the perspective of their local topology. Despite the widespread belief that low-degree nodes exhibit poorer LP performance, our empirical findings provide nuances to this viewpoint and prompt us to propose a better metric, Topological Concentration (TC), based on the intersection of the local subgraph of each node with the ones of its neighbors. We empirically demonstrate that TC has a higher correlation with LP performance than other node-level topological metrics like degree and subgraph density, offering a better way to identify low-performing nodes than using cold-start. With TC, we discover a novel topological distribution shift issue in which newly joined neighbors of a node tend to become less interactive with that node's existing neighbors, compromising the generalizability of node embeddings for LP at testing time. To make the computation of TC scalable, we further propose Approximated Topological Concentration (ATC) and theoretically/empirically justify its efficacy in approximating TC and reducing the computation complexity. Given the positive correlation between node TC and its LP performance, we explore the potential of boosting LP performance via enhancing TC by re-weighting edges in the message-passing and discuss its effectiveness with limitations. Our code is publicly available at https://github.com/YuWVandy/Topo_LP_GNN.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
397,734
2412.17011
Robustness of Large Language Models Against Adversarial Attacks
The increasing deployment of Large Language Models (LLMs) in various applications necessitates a rigorous evaluation of their robustness against adversarial attacks. In this paper, we present a comprehensive study on the robustness of the GPT LLM family. We employ two distinct evaluation methods to assess their resilience. The first method introduces character-level text attacks in input prompts, testing the models on three sentiment classification datasets: StanfordNLP/IMDB, Yelp Reviews, and SST-2. The second method involves using jailbreak prompts to challenge the safety mechanisms of the LLMs. Our experiments reveal significant variations in the robustness of these models, demonstrating their varying degrees of vulnerability to both character-level and semantic-level adversarial attacks. These findings underscore the necessity for improved adversarial training and enhanced safety mechanisms to bolster the robustness of LLMs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
519,795
1706.06954
Web-STAR: Towards a Visual Web-Based IDE for a Story Comprehension System
In this work, we present Web-STAR, an online platform for story understanding built on top of the STAR (STory comprehension through ARgumentation) reasoning engine. This platform includes a web-based IDE, integration with the STAR system and a web service infrastructure to support integration with other systems that rely on story understanding functionality to complete their tasks. The platform also delivers a number of "social" features like public story sharing with a built-in commenting system, a public repository for sharing stories with the community and collaboration tools that can be used from both project team members for development and educators for teaching. Moreover, we discuss the ongoing work on adding new features and functionality to this platform.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
75,770
1502.03379
Locating a Tree in a Phylogenetic Network in Quadratic Time
A fundamental problem in the study of phylogenetic networks is to determine whether or not a given phylogenetic network contains a given phylogenetic tree. We develop a quadratic-time algorithm for this problem for binary nearly-stable phylogenetic networks. We also show that the number of reticulations in a reticulation visible or nearly stable phylogenetic network is bounded from above by a function linear in the number of taxa.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
40,140
2406.14043
Taxonomy-Guided Zero-Shot Recommendations with LLMs
With the emergence of large language models (LLMs) and their ability to perform a variety of tasks, their application in recommender systems (RecSys) has shown promise. However, we are facing significant challenges when deploying LLMs into RecSys, such as limited prompt length, unstructured item information, and unconstrained generation of recommendations, leading to sub-optimal performance. To address these issues, we propose a novel method using a taxonomy dictionary. This method provides a systematic framework for categorizing and organizing items, improving the clarity and structure of item information. By incorporating the taxonomy dictionary into LLM prompts, we achieve efficient token utilization and controlled feature generation, leading to more accurate and contextually relevant recommendations. Our Taxonomy-guided Recommendation (TaxRec) approach features a two-step process: one-time taxonomy categorization and LLM-based recommendation, enabling zero-shot recommendations without the need for domain-specific fine-tuning. Experimental results demonstrate TaxRec significantly enhances recommendation quality compared to traditional zero-shot approaches, showcasing its efficacy as a personal recommender with LLMs. Code is available at https://github.com/yueqingliang1/TaxRec.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
466,124
2207.07361
Registration based Few-Shot Anomaly Detection
This paper considers few-shot anomaly detection (FSAD), a practical yet under-studied setting for anomaly detection (AD), where only a limited number of normal images are provided for each category at training. So far, existing FSAD studies follow the one-model-per-category learning paradigm used for standard AD, and the inter-category commonality has not been explored. Inspired by how humans detect anomalies, i.e., comparing an image in question to normal images, we here leverage registration, an image alignment task that is inherently generalizable across categories, as the proxy task, to train a category-agnostic anomaly detection model. During testing, the anomalies are identified by comparing the registered features of the test image and its corresponding support (normal) images. As far as we know, this is the first FSAD method that trains a single generalizable model and requires no re-training or parameter fine-tuning for new categories. Experimental results have shown that the proposed method outperforms the state-of-the-art FSAD methods by 3%-8% in AUC on the MVTec and MPDD benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
308,187
2405.17073
Soft Two-degree-of-freedom Dielectric Elastomer Position Sensor Exhibiting Linear Behavior
Soft robots could bring robotic systems to new horizons, by enabling safe human-machine interaction. For precise control, these soft structures require high level position feedback that is not easily achieved through conventional one-degree-of-freedom (DOF) sensing apparatus. In this paper, a soft two-DOF dielectric elastomer (DE) sensor is specifically designed to provide accurate position feedback for a soft polymer robotic manipulator. The technology is exemplified on a soft robot intended for MRI-guided prostate interventions. DEs are chosen for their major advantages of softness, high strains, low cost and embedded multiple-DOF sensing capability, providing excellent system integration. A geometrical model of the proposed DE sensor is developed and compared to experimental results in order to understand sensor mechanics. Using a differential measurement approach, a handmade prototype provided linear sensory behavior and 0.2 mm accuracy on two-DOF. This correlates to a 0.7\% error over the sensor's 30 mm x 30 mm planar range, demonstrating the outstanding potential of DE technology for accurate multi-DOF position sensing.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
457,752
2103.13253
Learning Versatile Neural Architectures by Propagating Network Codes
This work explores how to design a single neural network capable of adapting to multiple heterogeneous vision tasks, such as image segmentation, 3D detection, and video recognition. This goal is challenging because both network architecture search (NAS) spaces and methods in different tasks are inconsistent. We solve this challenge from both sides. We first introduce a unified design space for multiple tasks and build a multitask NAS benchmark (NAS-Bench-MR) on many widely used datasets, including ImageNet, Cityscapes, KITTI, and HMDB51. We further propose Network Coding Propagation (NCP), which back-propagates gradients of neural predictors to directly update architecture codes along the desired gradient directions to solve various tasks. In this way, optimal architecture configurations can be found by NCP in our large search space in seconds. Unlike prior arts of NAS that typically focus on a single task, NCP has several unique benefits. (1) NCP transforms architecture optimization from data-driven to architecture-driven, enabling joint search an architecture among multitasks with different data distributions. (2) NCP learns from network codes but not original data, enabling it to update the architecture efficiently across datasets. (3) In addition to our NAS-Bench-MR, NCP performs well on other NAS benchmarks, such as NAS-Bench-201. (4) Thorough studies of NCP on inter-, cross-, and intra-tasks highlight the importance of cross-task neural architecture design, i.e., multitask neural architectures and architecture transferring between different tasks. Code is available at https://github.com/dingmyu/NCP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
226,433
1704.04723
Computational Models for Attitude and Actions Prediction
In this paper, we present computational models to predict Twitter users' attitude towards a specific brand through their personal and social characteristics. We also predict their likelihood to take different actions based on their attitudes. In order to operationalize our research on users' attitude and actions, we collected ground-truth data through surveys of Twitter users. We have conducted experiments using two real world datasets to validate the effectiveness of our attitude and action prediction framework. Finally, we show how our models can be integrated with a visual analytics system for customer intervention.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
71,876
1802.04931
Energy Spatio-Temporal Pattern Prediction for Electric Vehicle Networks
Information about the spatio-temporal pattern of electricity energy carried by EVs, instead of EVs themselves, is crucial for EVs to establish more effective and intelligent interactions with the smart grid. In this paper, we propose a framework for predicting the amount of the electricity energy stored by a large number of EVs aggregated within different city-scale regions, based on the spatio-temporal pattern of the electricity energy. The spatial pattern is modeled using a neural network based spatial predictor, while the temporal pattern is captured using a linear-chain conditional random field (CRF) based temporal predictor. The two predictors are fed with spatial and temporal features respectively, which are extracted based on real trajectory data recorded in Beijing. Furthermore, we combine both predictors to build the spatio-temporal predictor, by using an optimal combination coefficient which minimizes the normalized mean square error (NMSE) of the predictions. The prediction performance is evaluated based on extensive experiments covering both spatial and temporal predictions, and the improvement achieved by the combined spatio-temporal predictor. The experimental results show that the NMSE of the spatio-temporal predictor is maintained below 0.1 for all investigated regions of Beijing. We further visualize the predictions and discuss the potential benefits that can be brought to smart grid scheduling and EV charging by utilizing the proposed framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
90,342
2009.10524
Early detection of the advanced persistent threat attack using performance analysis of deep learning
One of the most common and important destructive attacks on the victim system is the Advanced Persistent Threat (APT) attack. The APT attacker can achieve his hostile goals by obtaining information and gaining financial benefits regarding the infrastructure of a network. One of the solutions to detect a secret APT attack is using network traffic. Due to the nature of the APT attack in terms of being on the network for a long time and the fact that the network may crash because of high traffic, it is difficult to detect this type of attack. Hence, in this study, machine learning methods such as the C5.0 decision tree, Bayesian network and deep neural network are used for timely detection and classification of APT attacks on the NSL-KDD dataset. Moreover, the 10-fold cross-validation method is used to evaluate these models. As a result, the accuracy (ACC) of the C5.0 decision tree, Bayesian network and 6-layer deep learning models is obtained as 95.64%, 88.37% and 98.85%, respectively, and also, in terms of the important criterion of the false positive rate (FPR), the FPR value for the C5.0 decision tree, Bayesian network and 6-layer deep learning models is obtained as 2.56, 10.47 and 1.13, respectively. Other criteria such as sensitivity, specificity, accuracy, false negative rate and F-measure are also investigated for the models, and the experimental results show that the deep learning model with automatic multi-layered extraction of features has the best performance for timely detection of an APT attack compared to other classification models.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
196,911
1705.00430
Sub-Pixel Registration of Wavelet-Encoded Images
Sub-pixel registration is a crucial step for applications such as super-resolution in remote sensing, motion compensation in magnetic resonance imaging, and non-destructive testing in manufacturing, to name a few. Recently, these technologies have been trending towards wavelet encoded imaging and sparse/compressive sensing. The former plays a crucial role in reducing imaging artifacts, while the latter significantly increases the acquisition speed. In view of these new emerging needs for applications of wavelet encoded imaging, we propose a sub-pixel registration method that can achieve direct wavelet domain registration from a sparse set of coefficients. We make the following contributions: (i) We devise a method of decoupling scale, rotation, and translation parameters in the Haar wavelet domain, (ii) We derive explicit mathematical expressions that define in-band sub-pixel registration in terms of wavelet coefficients, (iii) Using the derived expressions, we propose an approach to achieve in-band subpixel registration, avoiding back and forth transformations. (iv) Our solution remains highly accurate even when a sparse set of coefficients are used, which is due to localization of signals in a sparse set of wavelet coefficients. We demonstrate the accuracy of our method, and show that it outperforms the state-of-the-art on simulated and real data, even when the data is sparse.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
72,678
1511.04685
Semi-Inner-Products for Convex Functionals and Their Use in Image Decomposition
Semi-inner-products in the sense of Lumer are extended to convex functionals. This yields a Hilbert-space like structure to convex functionals in Banach spaces. In particular, a general expression for semi-inner-products with respect to one homogeneous functionals is given. Thus one can use the new operator for the analysis of total variation and higher order functionals like total-generalized-variation (TGV). Having a semi-inner-product, an angle between functions can be defined in a straightforward manner. It is shown that in the one homogeneous case the Bregman distance can be expressed in terms of this newly defined angle. In addition, properties of the semi-inner-product of nonlinear eigenfunctions induced by the functional are derived. We use this construction to state a sufficient condition for a perfect decomposition of two signals and suggest numerical measures which indicate when those conditions are approximately met.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
48,931
2501.05735
ELENA: Epigenetic Learning through Evolved Neural Adaptation
Despite the success of metaheuristic algorithms in solving complex network optimization problems, they often struggle with adaptation, especially in dynamic or high-dimensional search spaces. Traditional approaches can become stuck in local optima, leading to inefficient exploration and suboptimal solutions. Most widely accepted advanced algorithms perform well either on highly complex or on smaller search spaces, owing to this lack of adaptation. To address these limitations, we present ELENA (Epigenetic Learning through Evolved Neural Adaptation), a new evolutionary framework that incorporates epigenetic mechanisms to enhance the adaptability of the core evolutionary approach. ELENA leverages a compressed representation of learning parameters that is improved dynamically through epigenetic tags serving as adaptive memory. Three epigenetic tags (mutation resistance, crossover affinity, and stability score) guide the solution-space search, facilitating more intelligent exploration of the hypothesis landscape. To assess the framework's performance, we conduct experiments on three critical network optimization problems: the Traveling Salesman Problem (TSP), the Vehicle Routing Problem (VRP), and the Maximum Clique Problem (MCP). Experiments indicate that ELENA achieves competitive results, often surpassing state-of-the-art methods on network optimization tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
523,704
1608.04080
Dynamic Hand Gesture Recognition for Wearable Devices with Low Complexity Recurrent Neural Networks
Gesture recognition is an essential technology for many wearable devices. While previous algorithms are mostly based on statistical methods such as the hidden Markov model, we develop two dynamic hand gesture recognition techniques using low-complexity recurrent neural network (RNN) algorithms. One is based on video signals and employs a combined structure of a convolutional neural network (CNN) and an RNN. The other uses accelerometer data and requires only an RNN. Fixed-point optimization that quantizes most of the weights to two bits is applied to reduce the memory required for weight storage and to lower power consumption in hardware- and software-based implementations.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
59,775
1809.06709
Document Informed Neural Autoregressive Topic Models with Distributional Prior
We address two challenges in topic models: (1) Context information around words helps in determining their actual meaning, e.g., "networks" used in the contexts "artificial neural networks" vs. "biological neuron networks". Generative topic models infer topic-word distributions, taking little or no context into account. Here, we extend a neural autoregressive topic model to exploit the full context information around words in a document in a language modeling fashion. The proposed model is named iDocNADE. (2) Due to the small number of word occurrences (i.e., lack of context) in short texts and data sparsity in a corpus of few documents, applying topic models to such texts is challenging. Therefore, we propose a simple and efficient way of incorporating external knowledge into neural autoregressive topic models: we use embeddings as a distributional prior. The proposed variants are named DocNADEe and iDocNADEe. We present novel neural autoregressive topic model variants that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability (topic coherence) and applicability (retrieval and classification) over 7 long-text and 8 short-text datasets from diverse domains.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
108,128
1703.05623
Semantic-level Decentralized Multi-Robot Decision-Making using Probabilistic Macro-Observations
Robust environment perception is essential for decision-making on robots operating in complex domains. Intelligent task execution requires principled treatment of uncertainty sources in a robot's observation model. This is important not only for low-level observations (e.g., accelerometer data), but also for high-level observations such as semantic object labels. This paper formalizes the concept of macro-observations in Decentralized Partially Observable Semi-Markov Decision Processes (Dec-POSMDPs), allowing scalable semantic-level multi-robot decision-making. A hierarchical Bayesian approach is used to model the noise statistics of low-level classifier outputs, while simultaneously allowing sharing of domain noise characteristics between classes. The classification accuracy of the proposed macro-observation scheme, called Hierarchical Bayesian Noise Inference (HBNI), is shown to exceed that of existing methods. The macro-observation scheme is then integrated into a Dec-POSMDP planner, with hardware experiments running onboard a team of dynamic quadrotors in a challenging domain where noise-agnostic filtering fails. To the best of our knowledge, this is the first demonstration of a real-time, convolutional neural net-based classification framework running fully onboard a team of quadrotors in a multi-robot decision-making domain.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
70,112