Dataset schema (column name: type, value range):

- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
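The schema above describes one record per paper: an arXiv id, title, abstract, 18 one-hot boolean category columns, and an integer index. A minimal sketch of collapsing those boolean columns back into a list of category labels — plain Python with a hypothetical record, no specific dataset library assumed:

```python
# Column names taken from the schema above, in order.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def labels_for(record: dict) -> list[str]:
    """Return the category names whose boolean flag is set in the record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Hypothetical record: a paper labeled cs.LG and cs.CL, all other flags False.
example = {col: False for col in CATEGORY_COLUMNS}
example.update({"cs.LG": True, "cs.CL": True})
print(labels_for(example))  # ['cs.LG', 'cs.CL']
```

The same collapse works row-wise on a dataframe or a loaded dataset split, since each record is just a mapping from column name to value.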
2111.09930
Learning To Estimate Regions Of Attraction Of Autonomous Dynamical Systems Using Physics-Informed Neural Networks
When learning to perform motor tasks in a simulated environment, neural networks must be allowed to explore their action space to discover new potentially viable solutions. However, in an online learning scenario with physical hardware, this exploration must be constrained by relevant safety considerations in order to avoid damage to the agent's hardware and environment. We aim to address this problem by training a neural network, which we will refer to as a "safety network", to estimate the region of attraction (ROA) of a controlled autonomous dynamical system. This safety network can thereby be used to quantify the relative safety of proposed control actions and prevent the selection of damaging actions. Here we present our development of the safety network by training an artificial neural network (ANN) to represent the ROA of several autonomous dynamical system benchmark problems. The training of this network is predicated upon both Lyapunov theory and neural solutions to partial differential equations (PDEs). By learning to approximate the viscosity solution to a specially chosen PDE that contains the dynamics of the system of interest, the safety network learns to approximate a particular function, similar to a Lyapunov function, whose zero level set is the boundary of the ROA. We train our safety network to solve these PDEs in a semi-supervised manner following a modified version of the Physics-Informed Neural Network (PINN) approach, utilizing a loss function that penalizes disagreement with the PDE's initial and boundary conditions, as well as non-zero residual and variational terms. In future work we intend to apply this technique to reinforcement learning agents during motor learning tasks.
labels: cs.LG
index: 267,150

2210.11113
PAC-Bayesian Learning of Optimization Algorithms
We apply the PAC-Bayes theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework to learn optimization algorithms with provable generalization guarantees (PAC-bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed. Even in the limit case, where convergence is guaranteed, our learned optimization algorithms provably outperform related algorithms based on a (deterministic) worst-case analysis. Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families. By generalizing existing ideas, we reformulate the learning procedure into a one-dimensional minimization problem and study the possibility of finding a global minimum, which enables the algorithmic realization of the learning procedure. As a proof of concept, we learn hyperparameters of standard optimization algorithms to empirically underline our theory.
labels: cs.LG
index: 325,187

2311.00474
Diffusion models for probabilistic programming
We propose Diffusion Model Variational Inference (DMVI), a novel method for automated approximate inference in probabilistic programming languages (PPLs). DMVI utilizes diffusion models as variational approximations to the true posterior distribution by deriving a novel bound to the marginal likelihood objective used in Bayesian modelling. DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model. We evaluate DMVI on a set of common Bayesian models and show that its posterior inferences are in general more accurate than those of contemporary methods used in PPLs while having a similar computational cost and requiring less manual tuning.
labels: cs.LG
index: 404,668

1906.03741
BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization
Most existing text summarization datasets are compiled from the news domain, where summaries have a flattened discourse structure. In such datasets, summary-worthy content often appears at the beginning of input articles. Moreover, large segments from input articles are present verbatim in their respective summaries. These issues impede the learning and evaluation of systems that can understand an article's global content structure as well as produce abstractive summaries with a high compression ratio. In this work, we present a novel dataset, BIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human-written abstractive summaries. Compared to existing summarization datasets, BIGPATENT has the following properties: i) summaries contain a richer discourse structure with more recurring entities, ii) salient content is evenly distributed in the input, and iii) fewer and shorter extractive fragments are present in the summaries. Finally, we train and evaluate baselines and popular learning models on BIGPATENT to shed light on new challenges and motivate future directions for summarization research.
labels: cs.LG, cs.CL
index: 134,478

2210.11947
Generalizing over Long Tail Concepts for Medical Term Normalization
Medical term normalization consists of mapping a piece of text to a large number of output classes. Given the small size of the annotated datasets and the extremely long-tail distribution of the concepts, it is of utmost importance to develop models that are capable of generalizing to scarce or unseen concepts. An important attribute of most target ontologies is their hierarchical structure. In this paper we introduce a simple and effective learning strategy that leverages such information to enhance the generalizability of both discriminative and generative models. The evaluation shows that the proposed strategy produces state-of-the-art performance on seen concepts and consistent improvements on unseen ones, also allowing for efficient zero-shot knowledge transfer across text typologies and datasets.
labels: cs.AI, cs.IR, cs.CL
index: 325,520

2309.13457
Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data
Analysis of compressible turbulent flows is essential for applications related to propulsion, energy generation, and the environment. Here, we present BLASTNet 2.0, a 2.2 TB network-of-datasets containing 744 full-domain samples from 34 high-fidelity direct numerical simulations, which addresses the current limited availability of 3D high-fidelity reacting and non-reacting compressible turbulent flow simulation data. With this data, we benchmark a total of 49 variations of five deep learning approaches for 3D super-resolution - which can be applied for improving scientific imaging, simulations, turbulence models, as well as in computer vision applications. We perform neural scaling analysis on these models to examine the performance of different machine learning (ML) approaches, including two scientific ML techniques. We demonstrate that (i) predictive performance can scale with model size and cost, (ii) architecture matters significantly, especially for smaller models, and (iii) the benefits of physics-based losses can persist with increasing model size. The outcomes of this benchmark study are anticipated to offer insights that can aid the design of 3D super-resolution models, especially for turbulence models, while this data is expected to foster ML methods for a broad range of flow physics applications. This data is publicly available with download links and browsing tools consolidated at https://blastnet.github.io.
labels: cs.LG, cs.CV
index: 394,215

2407.21553
CXSimulator: A User Behavior Simulation using LLM Embeddings for Web-Marketing Campaign Assessment
This paper presents the Customer Experience (CX) Simulator, a novel framework designed to assess the effects of untested web-marketing campaigns through user behavior simulations. The proposed framework leverages large language models (LLMs) to represent various events in a user's behavioral history, such as viewing an item, applying a coupon, or purchasing an item, as semantic embedding vectors. We train a model to predict transitions between events from their LLM embeddings, which can even generalize to unseen events by learning from diverse training data. In web-marketing applications, we leverage this transition prediction model to simulate how users might react differently when new campaigns or products are presented to them. This allows us to eliminate the need for costly online testing and enhance the marketers' abilities to reveal insights. Our numerical evaluation and user study, utilizing BigQuery Public Datasets from the Google Merchandise Store, demonstrate the effectiveness of our framework.
labels: cs.LG, cs.SY
index: 477,587

2304.04579
Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
Early detection of melanoma is crucial for preventing severe complications and increasing the chances of successful treatment. Existing deep learning approaches for melanoma skin lesion diagnosis are deemed black-box models, as they omit the rationale behind the model prediction, compromising the trustworthiness and acceptability of these diagnostic methods. Attempts to provide concept-based explanations are based on post-hoc approaches, which depend on an additional model to derive interpretations. In this paper, we propose an inherently interpretable framework to improve the interpretability of concept-based models by incorporating a hard attention mechanism and a coherence loss term to ensure the visual coherence of concept activations by the concept encoder, without requiring the supervision of additional annotations. The proposed framework explains its decision in terms of human-interpretable concepts and their respective contribution to the final prediction, as well as a visual interpretation of the locations where the concept is present in the image. Experiments on skin image datasets demonstrate that our method outperforms existing black-box and concept-based models for skin lesion classification.
labels: cs.CV
index: 357,271

2306.03235
Information Flow Control in Machine Learning through Modular Model Architecture
In today's machine learning (ML) models, any part of the training data can affect the model output. This lack of control over information flow from training data to model output is a major obstacle in training models on sensitive data when access control only allows individual users to access a subset of the data. To enable secure machine learning for access-controlled data, we propose the notion of information flow control (IFC) for machine learning, and develop an extension to the Transformer language model architecture that strictly adheres to the IFC definition we propose. Our architecture controls information flow by limiting the influence of training data from each security domain to a single expert module, and only enables a subset of experts at inference time based on the access control policy. The evaluation using large text and code datasets shows that our proposed parametric IFC architecture has minimal (1.9%) performance overhead and can significantly improve model accuracy (by 38% for the text dataset, and between 44%--62% for the code datasets) by enabling training on access-controlled data.
labels: cs.LG, cs.CR
index: 371,239

2101.08490
Estimating Average Treatment Effects via Orthogonal Regularization
Decision-making often requires accurate estimation of treatment effects from observational data. This is challenging as outcomes of alternative decisions are not observed and have to be estimated. Previous methods estimate outcomes based on unconfoundedness but neglect any constraints that unconfoundedness imposes on the outcomes. In this paper, we propose a novel regularization framework for estimating average treatment effects that exploits unconfoundedness. To this end, we formalize unconfoundedness as an orthogonality constraint, which ensures that the outcomes are orthogonal to the treatment assignment. This orthogonality constraint is then included in the loss function via a regularization. Based on our regularization framework, we develop deep orthogonal networks for unconfounded treatments (DONUT), which learn outcomes that are orthogonal to the treatment assignment. Using a variety of benchmark datasets for estimating average treatment effects, we demonstrate that DONUT outperforms the state-of-the-art substantially.
labels: cs.LG
index: 216,338

2407.00719
A Whole-Process Certifiably Robust Aggregation Method Against Backdoor Attacks in Federated Learning
Federated Learning (FL) has garnered widespread adoption across various domains such as finance, healthcare, and cybersecurity. Nonetheless, FL remains under significant threat from backdoor attacks, wherein malicious actors insert triggers into trained models, enabling them to perform certain tasks while still meeting FL's primary objectives. In response, robust aggregation methods have been proposed, which can be divided into three types: ex-ante, ex-durante, and ex-post methods. Given the complementary nature of these methods, combining all three types is promising yet unexplored. Such a combination is non-trivial because it requires leveraging their advantages while overcoming their disadvantages. Our study proposes a novel whole-process certifiably robust aggregation (WPCRA) method for FL, which enhances robustness against backdoor attacks across three phases: ex-ante, ex-durante, and ex-post. Moreover, since the current geometric median estimation method fails to consider differences among clients, we propose a novel weighted geometric median estimation algorithm (WGME). This algorithm estimates the geometric median of model updates from clients based on each client's weight, further improving the robustness of WPCRA against backdoor attacks. We also theoretically prove that WPCRA offers improved certified robustness guarantees with a larger certified radius. We evaluate the advantages of our methods based on the task of loan status prediction. Comparison with baselines shows that our methods significantly improve FL's robustness against backdoor attacks. This study contributes to the literature with a novel WPCRA method and a novel WGME algorithm. Our code is available at https://github.com/brick-brick/WPCRAM.
labels: cs.LG, cs.CR, Other
index: 468,983

1901.08572
Width Provably Matters in Optimization for Deep Linear Neural Networks
We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde\Omega (L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3 )$, where $r$ and $\kappa$ are the rank and the condition number of the input data, and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(\frac{1}{\epsilon}))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp\left(\Omega\left(L\right)\right)$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
labels: cs.LG
index: 119,518

2405.01553
Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R
Parameter Efficient Fine-Tuning (PEFT) methods have been proposed as an alternative fine-tuning approach for Large Language Models (LLMs) to minimize high training costs. While prior research demonstrates the effectiveness of PEFT methods in knowledge transfer using smaller language models, their application to larger LLMs, particularly in low-resource and unseen programming languages such as R, remains under-explored. In this work, we evaluate PEFT methods, LoRA, Compacter, and IA^3 on LLMs for code summarization and generation, with a particular emphasis on knowledge transfer to R as an unseen under-explored target language. Our experiments reveal that LoRA consistently outperforms Compacter and IA^3 in all settings, while Compacter offers significant resource efficiency with minimal performance trade-offs. Additionally, we find that the number of trainable parameters has a greater influence on the functional accuracy of the generated code than the PEFT architecture. Our study can direct future research in developing code intelligence tasks for unseen languages including R, as well as the choice of PEFT methods for knowledge transfer, especially when balancing the computational cost and performance.
labels: cs.AI, Other
index: 451,394

1906.08804
Derivation of the Variational Bayes Equations
The derivation of key equations for the variational Bayes approach is well-known in certain circles. However, translating the fundamental derivations (e.g., as found in Beal's work) to Friston's notation is somewhat delicate. Further, the notion of using variational Bayes in the context of a system with a Markov blanket requires special attention. This Technical Report presents the derivation in detail. It further illustrates how the variational Bayes method provides a framework for a new computational engine, incorporating the 2-D cluster variation method (CVM), which provides a necessary free energy equation that can be minimized across both the external and representational systems' states, respectively.
labels: cs.NE
index: 135,967

1807.01425
Region Growing Curriculum Generation for Reinforcement Learning
Learning a policy capable of moving an agent between any two states in the environment is important for many robotics problems involving navigation and manipulation. Due to the sparsity of rewards in such tasks, applying reinforcement learning in these scenarios can be challenging. Common approaches for tackling this problem include reward engineering with auxiliary rewards, requiring domain-specific knowledge or changing the objective. In this work, we introduce a method based on region-growing that allows learning in an environment with any pair of initial and goal states. Our algorithm first learns how to move between nearby states and then increases the difficulty of the start-goal transitions as the agent's performance improves. This approach creates an efficient curriculum for learning the objective behavior of reaching any goal from any initial state. In addition, we describe a method to adaptively adjust expansion of the growing region that allows automatic adjustment of the key exploration hyperparameter to environments with different requirements. We evaluate our approach on a set of simulated navigation and manipulation tasks, where we demonstrate that our algorithm can efficiently learn a policy in the presence of sparse rewards.
labels: cs.AI
index: 102,058

2402.05232
Universal Neural Functionals
A challenging problem in many modern machine learning tasks is to process weight-space features, i.e., to transform or extract information from the weights and gradients of a neural network. Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks. However, they are not applicable to general architectures, since the permutation symmetries of a weight space can be complicated by recurrence or residual connections. This work proposes an algorithm that automatically constructs permutation equivariant models, which we refer to as universal neural functionals (UNFs), for any weight space. Among other applications, we demonstrate how UNFs can be substituted into existing learned optimizer designs, and find promising improvements over prior methods when optimizing small image classifiers and language models. Our results suggest that learned optimizers can benefit from considering the (symmetry) structure of the weight space they optimize. We open-source our library for constructing UNFs at https://github.com/AllanYangZhou/universal_neural_functional.
labels: cs.AI, cs.LG
index: 427,781

2308.11501
Four years of multi-modal odometry and mapping on the rail vehicles
Precise, seamless, and efficient train localization and long-term railway environment monitoring are essential for reliability, availability, maintainability, and safety (RAMS) engineering of railroad systems. Simultaneous localization and mapping (SLAM) is right at the core of solving the two problems concurrently. To this end, we propose in this paper a high-performance and versatile multi-modal framework, targeted at the odometry and mapping task for various rail vehicles. Our system is built atop an inertial-centric state estimator that tightly couples light detection and ranging (LiDAR), visual, and optionally satellite-navigation and map-based localization information, while retaining the convenience and extendibility of loosely coupled methods. The inertial sensors (IMU and wheel encoder) are treated as the primary sensors, with observations from the subsystems used to constrain the accelerometer and gyroscope biases. Compared to point-only LiDAR-inertial methods, our approach leverages more geometric information by introducing both the track plane and electric power pillars into state estimation. The visual-inertial subsystem also utilizes environmental structure information by employing both line and point features. Besides, the method is capable of handling sensor failures by automatically reconfiguring to bypass failed modules. Our proposed method has been extensively tested in railway environments over four years, covering general-speed, high-speed, and metro lines, with both passenger and freight traffic investigated. Further, we aim to openly share the experience, problems, and successes of our group with the robotics community, so that those who work in such environments can avoid these errors. In this view, we open-source some of the datasets to benefit the research community.
labels: cs.RO
index: 387,164

2302.07116
Team DETR: Guide Queries as a Professional Team in Detection Transformers
Recently proposed DETR variants have made tremendous progress in various scenarios due to their streamlined processes and remarkable performance. However, the learned queries usually explore the global context to generate the final set prediction, resulting in redundant burdens and unfaithful results. More specifically, a query is commonly responsible for objects of different scales and positions, which is a challenge for the query itself, and will cause spatial resource competition among queries. To alleviate this issue, we propose Team DETR, which leverages query collaboration and position constraints to embrace objects of interest more precisely. We also dynamically cater to each query member's prediction preference, offering the query better scale and spatial priors. In addition, the proposed Team DETR is flexible enough to be adapted to other existing DETR variants without increasing parameters and calculations. Extensive experiments on the COCO dataset show that Team DETR achieves remarkable gains, especially for small and large objects. Code is available at \url{https://github.com/horrible-dong/TeamDETR}.
labels: cs.CV
index: 345,624

cs/0512006
Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity
The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with {\em bounded complexity}, per information bit, of encoding and decoding. It also introduces symmetry properties which play a central role in the construction of capacity-achieving ensembles for the BEC with bounded complexity. The results here improve on the tradeoff between performance and complexity provided by previous constructions of capacity-achieving ensembles of codes defined on graphs. The superiority of ARA codes with moderate to large block length is exemplified by computer simulations which compare their performance with those of previously reported capacity-achieving ensembles of LDPC and IRA codes. The ARA codes also have the advantage of being systematic.
labels: cs.IT
index: 539,119

2311.08152
Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration
Large Language Models (LLMs) have shown remarkable capabilities in general natural language processing tasks but often fall short in complex reasoning tasks. Recent studies have explored human-like problem-solving strategies, such as self-correction, to push the boundary of single-model reasoning ability further. In this work, we let a single model "step outside the box" by engaging multiple models to correct each other. We introduce a multi-agent collaboration strategy that emulates the academic peer review process. Each agent independently constructs its own solution, provides reviews on the solutions of others, and assigns confidence levels to its reviews. Upon receiving peer reviews, agents revise their initial solutions. Extensive experiments on three different types of reasoning tasks show that our collaboration approach delivers superior accuracy across all ten datasets compared to existing methods. Further study underscores the effectiveness of integrating confidence in reviews, demonstrates the superiority of feedback exchange over mere solution sharing, and highlights the role of capability and diversity in fostering successful collaboration.
labels: cs.CL
index: 407,611

0806.0905
Channel Capacity Limits of Cognitive Radio in Asymmetric Fading Environments
Cognitive radio technology is an innovative radio design concept which aims to increase spectrum utilization by exploiting unused spectrum in dynamically changing environments. By extending previous results, we investigate the capacity gains achievable with this dynamic spectrum approach in asymmetric fading channels. More specifically, we allow the secondary-to-primary and secondary-to-secondary user channels to undergo Rayleigh or Rician fading, with arbitrary link power. In order to compute the capacity, we derive the distributions of ratios of Rayleigh and Rician variables. Compared to the symmetric fading scenario, our results indicate several interesting features of the capacity behaviour under both average and peak received power constraints. Finally, the impact of multiple primary users on the capacity under asymmetric fading has also been studied.
labels: cs.IT
index: 1,876

1909.04518
Virtual organelle self-coding for fluorescence imaging via adversarial learning
Fluorescence microscopy plays a vital role in understanding the subcellular structures of living cells. However, it requires considerable effort in sample preparation related to chemical fixation, staining, cost, and time. To reduce those factors, we present a virtual fluorescence staining method based on deep neural networks (VirFluoNet) to transform fluorescence images of molecular labels into other molecular fluorescence labels in the same field-of-view. To achieve this goal, we develop and train a conditional generative adversarial network (cGAN) to perform digital fluorescence imaging, demonstrated on human osteosarcoma U2OS cell fluorescence images captured under the Cell Painting staining protocol. A detailed comparative analysis is also conducted on the performance of the cGAN network between predicting fluorescence channels based on phase contrast or based on another fluorescence channel, using the human breast cancer MDA-MB-231 cell line as a test case. In addition, we implement a deep learning model to perform autofocusing on another human U2OS fluorescence dataset as a preprocessing step to refocus an out-of-focus channel in the U2OS dataset. A quantitative index of image prediction error is introduced, based on signal pixel-wise spatial and intensity differences with ground truth, to evaluate the performance of prediction for high-complexity, high-throughput fluorescence imaging. This index provides a rational way to perform image segmentation on error signals and to understand the likelihood of misinterpreting biology from the predicted image. In total, these findings contribute to the utility of deep learning image regression for fluorescence microscopy datasets of biological cells, balanced against savings of cost, time, and experimental effort. Furthermore, the approach introduced here holds promise for modeling the internal relationships between organelles and biomolecules within living cells.
labels: cs.CV
index: 144,827

2305.15793
Feature space reduction method for ultrahigh-dimensional, multiclass data: Random forest-based multiround screening (RFMS)
In recent years, numerous screening methods have been published for ultrahigh-dimensional data that contain hundreds of thousands of features; however, most of these methods cannot handle data with thousands of classes. Prediction models built to authenticate users based on multichannel biometric data result in this type of problem. In this study, we present a novel method known as random forest-based multiround screening (RFMS) that can be effectively applied under such circumstances. The proposed algorithm divides the feature space into small subsets and executes a series of partial model builds. These partial models are used to implement tournament-based sorting and the selection of features based on their importance. To benchmark RFMS, a synthetic biometric feature space generator known as BiometricBlender is employed. Based on the results, RFMS is on par with industry-standard feature screening methods while simultaneously possessing many advantages over these methods.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
367,787
2407.09032
DRM Revisited: A Complete Error Analysis
In this work, we address a foundational question in the theoretical analysis of the Deep Ritz Method (DRM) under the over-parameterization regime: Given a target precision level, how can one determine the appropriate number of training samples, the key architectural parameters of the neural networks, the step size for the projected gradient descent optimization procedure, and the requisite number of iterations, such that the output of the gradient descent process closely approximates the true solution of the underlying partial differential equation to the specified precision?
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
472,423
2107.05176
Zero-Shot Compositional Concept Learning
In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework. We propose an episode-based cross-attention (EpiCA) network which combines the merits of the cross-attention mechanism and an episode-based training strategy to recognize novel compositional concepts. First, EpiCA builds on cross-attention to correlate concept and visual information and utilizes a gated pooling layer to build contextualized representations for both images and concepts. The updated representations are used for a more in-depth multi-modal relevance calculation for concept recognition. Second, a two-phase episode training strategy, especially the transductive phase, is adopted to utilize unlabeled test examples to alleviate the low-resource learning problem. Experiments on two widely-used zero-shot compositional learning (ZSCL) benchmarks have demonstrated the effectiveness of the model compared with recent approaches in both conventional and generalized ZSCL settings.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
245,695
1807.06214
Knockoffs for the mass: new feature importance statistics with false discovery guarantees
An important problem in machine learning and statistics is to identify features that causally affect the outcome. This is often impossible to do from purely observational data, and a natural relaxation is to identify features that are correlated with the outcome even conditioned on all other observed features. For example, we want to identify that smoking really is correlated with cancer conditioned on demographics. The knockoff procedure is a recent breakthrough in statistics that, in theory, can identify truly correlated features while guaranteeing that the false discovery rate is limited. The idea is to create synthetic data -- knockoffs -- that capture correlations amongst the features. However, there are substantial computational and practical challenges to generating and using knockoffs. This paper makes several key advances that enable knockoff applications to be more efficient and powerful. We develop an efficient algorithm to generate valid knockoffs from Bayesian networks. Then we systematically evaluate knockoff test statistics and develop new statistics with improved power. The paper combines new mathematical guarantees with systematic experiments on real and synthetic data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
103,083
2311.05903
Establishing Performance Baselines in Fine-Tuning, Retrieval-Augmented Generation and Soft-Prompting for Non-Specialist LLM Users
Research into methods for improving the performance of large language models (LLMs) through fine-tuning, retrieval-augmented generation (RAG) and soft-prompting has tended to focus on the use of highly technical or high-cost techniques, making many of the newly discovered approaches comparatively inaccessible to non-technical users. In this paper we tested an unmodified version of GPT 3.5, a fine-tuned version, and the same unmodified model when given access to a vectorised RAG database, both in isolation and in combination with a basic, non-algorithmic soft prompt. In each case we tested the model's ability to answer a set of 100 questions relating primarily to events that occurred after September 2021 (the point at which GPT 3.5's training data set ends). We found that if commercial platforms are used and default settings are applied with no iteration in order to establish a baseline set of outputs, a fine-tuned model outperforms GPT 3.5 Turbo, while the RAG approach outperforms both. The application of a soft prompt significantly improved the performance of each approach.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
406,750
2209.04562
Bayan Algorithm: Detecting Communities in Networks Through Exact and Approximate Optimization of Modularity
Community detection is a classic network problem with extensive applications in various fields. The most common approach uses modularity maximization heuristics, which rarely return an optimal partition or anything close to one. Partitions with globally optimal modularity are difficult to compute, and therefore have been underexplored. Using structurally diverse networks, we compare 30 community detection methods, including our proposed algorithm that offers optimality and approximation guarantees: the Bayan algorithm. Unlike existing methods, Bayan globally maximizes modularity or approximates it within a factor. Our results show the distinctive accuracy and stability of maximum-modularity partitions in retrieving planted partitions at rates higher than most alternatives for a wide range of parameter settings in two standard benchmarks. Compared to the partitions from 29 other algorithms, maximum-modularity partitions have the best medians for description length, coverage, performance, average conductance, and well-clusteredness. These advantages come at the cost of additional computations, which Bayan makes possible for small networks (networks that have up to 3000 edges in their largest connected component). Bayan is several times faster than using open-source and commercial solvers for modularity maximization, making it capable of finding optimal partitions for instances that cannot be optimized by any other existing method. Our results point to a few well-performing algorithms, among which Bayan stands out as the most reliable method for small networks. A Python implementation of the Bayan algorithm (bayanpy) is publicly available through the package installer for Python.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
true
316,813
2203.04961
Sharing Generative Models Instead of Private Data: A Simulation Study on Mammography Patch Classification
Early detection of breast cancer in mammography screening via deep-learning based computer-aided detection systems shows promising potential in improving the curability and mortality rates of breast cancer. However, many clinical centres are restricted in the amount and heterogeneity of available data to train such models to (i) achieve promising performance and to (ii) generalise well across acquisition protocols and domains. As sharing data between centres is restricted due to patient privacy concerns, we propose a potential solution: sharing trained generative models between centres as a substitute for real patient data. In this work, we use three well-known mammography datasets to simulate three different centres, where one centre receives the trained generator of Generative Adversarial Networks (GANs) from the two remaining centres in order to augment the size and heterogeneity of its training dataset. We evaluate the utility of this approach on mammography patch classification on the test set of the GAN-receiving centre using two different classification models, (a) a convolutional neural network and (b) a transformer neural network. Our experiments demonstrate that shared GANs notably increase the performance of both transformer and convolutional classification models and highlight this approach as a viable alternative to inter-centre data sharing.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
284,666
2209.02062
"Dummy Grandpa, do you know anything?": Identifying and Characterizing Ad hominem Fallacy Usage in the Wild
Today, participating in discussions on online forums is extremely commonplace and these discussions have started exerting a strong influence on the overall opinion of online users. Naturally, twisting the flow of the argument can have a strong impact on the minds of naive users, which in the long run might have socio-political ramifications, for example, winning an election or spreading targeted misinformation. Thus, these platforms are potentially highly vulnerable to malicious players who might act individually or as a cohort to breed fallacious arguments with a motive to sway public opinion. Ad hominem arguments are one of the most effective forms of such fallacies. Although a simple fallacy, it is effective enough to sway public debates in the offline world and can be used as a precursor to shutting down the voice of opposition by slander. In this work, we take a first step in shedding light on the usage of ad hominem fallacies in the wild. First, we build a powerful ad hominem detector with high accuracy (F1 more than 83%, showing a significant improvement over prior work), even for datasets for which annotated instances constitute a very small fraction. We then use our detector on 265k arguments collected from the online debate forum CreateDebate. Our crowdsourced surveys validate our in-the-wild predictions on CreateDebate data (94% match with manual annotation). Our analysis reveals that a surprising 31.23% of CreateDebate content contains ad hominem fallacy, and a cohort of highly active users post significantly more ad hominem arguments to suppress opposing views. Our temporal analysis further reveals that ad hominem argument usage has increased significantly since the 2016 US Presidential election, not only for topics like Politics, but also for Science and Law. We conclude by discussing important implications of our work for detecting and defending against ad hominem fallacies.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
316,091
2202.11217
Differentiable and Learnable Robot Models
Building differentiable simulations of physical processes has recently received an increasing amount of attention. Specifically, some efforts develop differentiable robotic physics engines motivated by the computational benefits of merging rigid body simulations with modern differentiable machine learning libraries. Here, we present a library that focuses on the ability to combine data-driven methods with analytical rigid body computations. More concretely, our library \emph{Differentiable Robot Models} implements both \emph{differentiable} and \emph{learnable} models of the kinematics and dynamics of robots in PyTorch. The source code is available at \url{https://github.com/facebookresearch/differentiable-robot-model}
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
281,807
1810.06305
Hyperparameter Learning via Distributional Transfer
Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial exploration even in cases where similar prior tasks have been solved. We propose to transfer information across tasks using learnt representations of the training datasets used in those tasks. This results in a joint Gaussian process model on hyperparameters and data representations. Representations make use of the framework of distribution embeddings into reproducing kernel Hilbert spaces. The developed method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
110,413
2110.05523
UnfairGAN: An Enhanced Generative Adversarial Network for Raindrop Removal from A Single Image
Image deraining is a new and challenging problem in real-world applications, such as autonomous vehicles. In bad weather conditions of heavy rainfall, raindrops, mainly hitting glasses or windshields, can significantly reduce observation ability. Moreover, raindrops spreading over the glass can produce physical refraction effects, which seriously impede the sightline or undermine machine learning systems. In this paper, we propose an enhanced generative adversarial network to deal with the challenging problems of raindrops. UnfairGAN is an enhanced generative adversarial network that can utilize prior high-level information, such as edges and rain estimation, to boost deraining performance. To demonstrate UnfairGAN, we introduce a large dataset for training deep learning models of rain removal. The experimental results show that our proposed method is superior to other state-of-the-art approaches to deraining raindrops regarding quantitative metrics and visual quality.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
260,299
2009.07632
Helping Users Tackle Algorithmic Threats on Social Media: A Multimedia Research Agenda
Participation on social media platforms has many benefits but also poses substantial threats. Users often face an unintended loss of privacy, are bombarded with mis-/disinformation, or are trapped in filter bubbles due to over-personalized content. These threats are further exacerbated by the rise of hidden AI-driven algorithms working behind the scenes to shape users' thoughts, attitudes, and behavior. We investigate how multimedia researchers can help tackle these problems to level the playing field for social media users. We perform a comprehensive survey of algorithmic threats on social media and use it as a lens to set a challenging but important research agenda for effective and real-time user nudging. We further implement a conceptual prototype and evaluate it with experts to supplement our research agenda. This paper calls for solutions that combat the algorithmic threats on social media by utilizing machine learning and multimedia content analysis techniques but in a transparent manner and for the benefit of the users.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
195,999
1809.03734
How much should you ask? On the question structure in QA systems
Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in a natural language manner. However, users are still used to query-like systems where they type in keywords to search for an answer. In this study we validate which parts of questions are essential for obtaining a valid answer. To that end, we take advantage of LIME, a framework that explains predictions by local approximation. We find that grammar and natural language are disregarded by QA systems. A state-of-the-art model can answer properly even if 'asked' with only a few words with high coefficients calculated by LIME. To our knowledge, this is the first time that a QA model has been explained by LIME.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
107,397
2402.07632
Overconfident and Unconfident AI Hinder Human-AI Collaboration
AI transparency is a central pillar of responsible AI deployment and effective human-AI collaboration. A critical approach is communicating uncertainty, such as displaying AI's confidence level, or its correctness likelihood (CL), to users. However, these confidence levels are often uncalibrated, either overestimating or underestimating actual CL, posing risks and harms to human-AI collaboration. This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes. We further examined the impact of increased transparency, achieved through trust calibration support, on these outcomes. Our results reveal that uncalibrated AI confidence leads to both the misuse of overconfident AI and disuse of unconfident AI, thereby hindering outcomes of human-AI collaboration. Deficiency of trust calibration support exacerbates this issue by making it harder to detect uncalibrated confidence, promoting misuse and disuse of AI. Conversely, trust calibration support aids in recognizing uncalibration and reducing misuse, but it also fosters distrust and causes disuse of AI. Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration and suggest directions for AI design and regulation.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
428,790
2309.09658
A Novel Method of Fuzzy Topic Modeling based on Transformer Processing
Topic modeling is a convenient way to monitor market trends. Conventionally, Latent Dirichlet Allocation (LDA) is considered the go-to model for obtaining this type of information. Given LDA's merit of deducing keywords from token conditional probabilities, we can identify the most probable or essential topics. However, the results are not intuitive because the extracted topics cannot fully match human knowledge. LDA offers the first possible relevant keywords, which raises the further problem of whether the connection is reliable, given that it rests on statistical probability. It is also hard to decide the number of topics manually in advance. Following the booming trend of using fuzzy membership for clustering and transformers for word embedding, this work presents fuzzy topic modeling based on soft clustering and document embeddings from a state-of-the-art transformer-based model. In our practical application to press-release monitoring, fuzzy topic modeling gives a more natural result than the traditional output from LDA.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
392,693
2306.00091
A General Framework for Equivariant Neural Networks on Reductive Lie Groups
Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group G. Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general G-equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
369,863
2212.03282
MobilePTX: Sparse Coding for Pneumothorax Detection Given Limited Training Examples
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
335,057
2103.04957
Learning to Represent and Predict Sets with Deep Neural Networks
In this thesis, we develop various techniques for working with sets in machine learning. Each input or output is not an image or a sequence, but a set: an unordered collection of multiple objects, each object described by a feature vector. Their unordered nature makes them suitable for modeling a wide variety of data, ranging from objects in images to point clouds to graphs. Deep learning has recently shown great success on other types of structured data, so we aim to build the necessary structures for sets into deep neural networks. The first focus of this thesis is the learning of better set representations (sets as input). Existing approaches have bottlenecks that prevent them from properly modeling relations between objects within the set. To address this issue, we develop a variety of techniques for different scenarios and show that alleviating the bottleneck leads to consistent improvements across many experiments. The second focus of this thesis is the prediction of sets (sets as output). Current approaches do not take the unordered nature of sets into account properly. We determine that this results in a problem that causes discontinuity issues with many set prediction tasks and prevents them from learning some extremely simple datasets. To avoid this problem, we develop two models that properly take the structure of sets into account. Various experiments show that our set prediction techniques can significantly benefit over existing approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
223,816
1601.06023
Gaussian Approximation for the Downlink Interference in Heterogeneous Cellular Networks
This paper derives Gaussian approximation bounds for the standardized aggregate wireless interference (AWI) in the downlink of K-tier heterogeneous cellular networks when base stations in each tier are distributed over the plane according to a (possibly non-homogeneous) Poisson process. The proposed methodology is general enough to account for general bounded path-loss models and fading statistics. The deviations of the distribution of the standardized AWI from the standard normal distribution are measured in terms of the Kolmogorov-Smirnov distance. An explicit expression bounding the Kolmogorov-Smirnov distance between these two distributions is obtained as a function of a broad range of network parameters such as per-tier transmission power levels, base station locations, fading statistics and the path-loss model. A simulation study is performed to corroborate the analytical results. In particular, a good statistical match between the standardized AWI distribution and its normal approximation occurs even for moderately dense heterogeneous cellular networks. These results are expected to have important ramifications on the characterization of performance upper and lower bounds for emerging 5G network architectures.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,199
2404.14591
Predicting the Temporal Dynamics of Prosthetic Vision
Retinal implants are a promising treatment option for degenerative retinal disease. While numerous models have been developed to simulate the appearance of elicited visual percepts ("phosphenes"), these models often either focus solely on spatial characteristics or inadequately capture the complex temporal dynamics observed in clinical trials, which vary heavily across implant technologies, subjects, and stimulus conditions. Here we introduce two computational models designed to accurately predict phosphene fading and persistence under varying stimulus conditions, cross-validated on behavioral data reported by nine users of the Argus II Retinal Prosthesis System. Both models segment the time course of phosphene perception into discrete intervals, decomposing phosphene fading and persistence into either sinusoidal or exponential components. Our spectral model demonstrates state-of-the-art predictions of phosphene intensity over time (r = 0.7 across all participants). Overall, this study lays the groundwork for enhancing prosthetic vision by improving our understanding of phosphene temporal dynamics.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
448,734
1812.07546
Supervised Domain Enablement Attention for Personalized Domain Classification
In large-scale domain classification for natural language understanding, leveraging each user's domain enablement information, which refers to the preferred or authenticated domains by the user, with attention mechanism has been shown to improve the overall domain classification performance. In this paper, we propose a supervised enablement attention mechanism, which utilizes sigmoid activation for the attention weighting so that the attention can be computed with more expressive power without the weight sum constraint of softmax attention. The attention weights are explicitly encouraged to be similar to the corresponding elements of the ground-truth's one-hot vector by supervised attention, and the attention information of the other enabled domains is leveraged through self-distillation. By evaluating on the actual utterances from a large-scale IPDA, we show that our approach significantly improves domain classification performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
116,834
2102.06336
Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices
A pruning-based AutoML framework for run-time reconfigurability, namely RT3, is proposed in this work. This enables Transformer-based large Natural Language Processing (NLP) models to be efficiently executed on resource-constrained mobile devices and reconfigured (i.e., switching models for dynamic hardware conditions) at run-time. Such reconfigurability is the key to save energy for battery-powered mobile devices, which widely use dynamic voltage and frequency scaling (DVFS) technique for hardware reconfiguration to prolong battery life. In this work, we creatively explore a hybrid block-structured pruning (BP) and pattern pruning (PP) for Transformer-based models and first attempt to combine hardware and software reconfiguration to maximally save energy for battery-powered mobile devices. Specifically, RT3 integrates two-level optimizations: First, it utilizes an efficient BP as the first-step compression for resource-constrained mobile devices; then, RT3 heuristically generates a shrunken search space based on the first level optimization and searches multiple pattern sets with diverse sparsity for PP via reinforcement learning to support lightweight software reconfiguration, which corresponds to available frequency levels of DVFS (i.e., hardware reconfiguration). At run-time, RT3 can switch the lightweight pattern sets within 45ms to guarantee the required real-time constraint at different frequency levels. Results further show that RT3 can prolong battery life over 4x improvement with less than 1% accuracy loss for Transformer and 1.5% score decrease for DistilBERT.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,717
2306.15619
DCID: Deep Canonical Information Decomposition
We consider the problem of identifying the signal shared between two one-dimensional target variables, in the presence of additional multivariate observations. Canonical Correlation Analysis (CCA)-based methods have traditionally been used to identify shared variables, however, they were designed for multivariate targets and only offer trivial solutions for univariate cases. In the context of Multi-Task Learning (MTL), various models were postulated to learn features that are sparse and shared across multiple tasks. However, these methods were typically evaluated by their predictive performance. To the best of our knowledge, no prior studies systematically evaluated models in terms of correctly recovering the shared signal. Here, we formalize the setting of univariate shared information retrieval, and propose ICM, an evaluation metric which can be used in the presence of ground-truth labels, quantifying 3 aspects of the learned shared features. We further propose Deep Canonical Information Decomposition (DCID) - a simple, yet effective approach for learning the shared variables. We benchmark the models on a range of scenarios on synthetic data with known ground-truths and observe DCID outperforming the baselines in a wide range of settings. Finally, we demonstrate a real-life application of DCID on brain Magnetic Resonance Imaging (MRI) data, where we are able to extract more accurate predictors of changes in brain regions and obesity. The code for our experiments as well as the supplementary materials are available at https://github.com/alexrakowski/dcid
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
376,080
2204.10743
An Evaluation of Intra-Transaction Parallelism in Actor-Relational Database Systems
Over the past decade, we have witnessed a dramatic evolution in main-memory capacity and multi-core parallelism of server hardware. To leverage this hardware potential, multi-core in-memory OLTP database systems have been extensively re-designed. The core objective of this re-design has been scaling up sequential execution of OLTP transactions, wherein alternative database architectures have been explored to eliminate system bottlenecks and overheads impeding inter-transaction parallelism to fully manifest. However, intra-transaction parallelism has been largely ignored by this previous work. We conjecture this situation to have developed because OLTP workloads are sometimes deemed to have far too little intra-transactional parallelism and, even when this kind of parallelism is available, program analyses to recognize it in arbitrary stored procedures are considered too brittle to be used as a general tool. Recently, however, a new concept of actor-relational database systems has been proposed, raising hopes that application developers can specify intra-transaction parallelism by modeling database applications as communicating actors. In this scenario, a natural question is whether an actor-relational database system with an asynchronous programming model can adequately expose the intra-transaction parallelism available in application logic to modern multi-core hardware. Towards that aim, we conduct in this paper an experimental evaluation of the factors that affect intra-transaction parallelism in the prototype actor-relational database system REACTDB, with an OLTP application designed with task-level parallelism in mind, and with the system running on a multi-core machine.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
292,897
2204.04991
TRUE: Re-evaluating Factual Consistency Evaluation
Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive survey and assessment of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better evaluation methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
290,873
2310.09916
Socially reactive navigation models for mobile robots in dynamic environments
The objective of this work is to expand upon previous works, considering socially acceptable behaviours within robot navigation and interaction, and allow a robot to closely approach static and dynamic individuals or groups. The space models developed in this dissertation are adaptive, that is, capable of changing over time to accommodate the changing circumstances often existent within a social environment. The space model's parameters' adaptation occurs with the end goal of enabling a close interaction between humans and robots and is thus capable of taking into account not only the arrangement of the groups, but also the basic characteristics of the robot itself. This work also further develops a preexisting approach pose estimation algorithm in order to better guarantee the safety and comfort of the humans involved in the interaction, by taking into account basic human sensibilities. The algorithms are integrated into ROS's navigation system through the use of the $costmap2d$ and the $move\_base$ packages. The space model adaptation is tested via comparative evaluation against previous algorithms through the use of datasets. The entire navigation system is then evaluated through both simulations (static and dynamic) and real life situations (static). These experiments demonstrate that the developed space model and approach pose estimation algorithms are capable of enabling a robot to closely approach individual humans and groups, while maintaining considerations for their comfort and sensibilities.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
400,004
2410.10054
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality
Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), are known to enhance training efficiency in Large Language Models (LLMs). Due to the limited parameters of LoRA, recent studies seek to combine LoRA with Mixture-of-Experts (MoE) to boost performance across various tasks. However, inspired by the observed redundancy in traditional MoE structures, previous studies identify similar redundancy among LoRA experts within the MoE architecture, highlighting the necessity for non-uniform allocation of LoRA experts across different layers. In this paper, we leverage Heavy-Tailed Self-Regularization (HT-SR) Theory to design a fine-grained allocation strategy. Our analysis reveals that the number of experts per layer correlates with layer training quality, which exhibits significant variability across layers. Based on this, we introduce AlphaLoRA, a theoretically principled and training-free method for allocating LoRA experts to further mitigate redundancy. Experiments on three models across ten language processing and reasoning benchmarks demonstrate that AlphaLoRA achieves comparable or superior performance over all baselines. Our code is available at https://github.com/morelife2017/alphalora.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
497,893
1302.7314
Torque Saturation in Bipedal Robotic Walking through Control Lyapunov Function Based Quadratic Programs
This paper presents a novel method for directly incorporating user-defined control input saturations into the calculation of a control Lyapunov function (CLF)-based walking controller for a biped robot. Previous work by the authors has demonstrated the effectiveness of CLF controllers for stabilizing periodic gaits for biped walkers, and the current work expands on those results by providing a more effective means for handling control saturations. The new approach, based on a convex optimization routine running at a 1 kHz control update rate, is useful not only for handling torque saturations but also for incorporating a whole family of user-defined constraints into the online computation of a CLF controller. The paper concludes with an experimental implementation of the main results on the bipedal robot MABEL.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
22,519
2402.15784
IRConStyle: Image Restoration Framework Using Contrastive Learning and Style Transfer
Recently, the contrastive learning paradigm has achieved remarkable success in high-level tasks such as classification, detection, and segmentation. However, contrastive learning applied in low-level tasks, like image restoration, is limited, and its effectiveness is uncertain. This raises a question: Why does the contrastive learning paradigm not yield satisfactory results in image restoration? In this paper, we conduct in-depth analyses and propose three guidelines to address the above question. In addition, inspired by style transfer and based on contrastive learning, we propose a novel module for image restoration called \textbf{ConStyle}, which can be efficiently integrated into any U-Net structure network. By leveraging the flexibility of ConStyle, we develop a \textbf{general restoration network} for image restoration. ConStyle and the general restoration network together form an image restoration framework, namely \textbf{IRConStyle}. To demonstrate the capability and compatibility of ConStyle, we replace the general restoration network with transformer-based, CNN-based, and MLP-based networks, respectively. We perform extensive experiments on various image restoration tasks, including denoising, deblurring, deraining, and dehazing. The results on 19 benchmarks demonstrate that ConStyle can be integrated with any U-Net-based network and significantly enhance performance. For instance, ConStyle NAFNet significantly outperforms the original NAFNet on SOTS outdoor (dehazing) and Rain100H (deraining) datasets, with PSNR improvements of 4.16 dB and 3.58 dB with 85% fewer parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
432,288
2411.02594
"It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health
Recent breakthroughs in large language models (LLMs) have generated both interest and concern about their potential adoption as accessible information sources or communication tools across different domains. In public health -- where stakes are high and impacts extend across populations -- adopting LLMs poses unique challenges that require thorough evaluation. However, structured approaches for assessing potential risks in public health remain under-explored. To address this gap, we conducted focus groups with health professionals and health issue experiencers to unpack their concerns, situated across three distinct and critical public health issues that demand high-quality information: vaccines, opioid use disorder, and intimate partner violence. We synthesize participants' perspectives into a risk taxonomy, distinguishing and contextualizing the potential harms LLMs may introduce when positioned alongside traditional health communication. This taxonomy highlights four dimensions of risk in individual behaviors, human-centered care, information ecosystem, and technology accountability. For each dimension, we discuss specific risks and example reflection questions to help practitioners adopt a risk-reflexive approach. This work offers a shared vocabulary and reflection tool for experts in both computing and public health to collaboratively anticipate, evaluate, and mitigate risks in deciding when to employ LLM capabilities (or not) and how to mitigate harm when they are used.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
505,570
2107.04891
Inference of Shape Expression Schemas from Typed RDF Graphs
We consider the problem of constructing a Shape Expression Schema (ShEx) that describes the structure of a given input RDF graph. We employ the framework of grammatical inference, where the objective is to find an inference algorithm that is both sound, i.e., always producing a schema that validates the input RDF graph, and complete, i.e., able to produce any schema, within a given class of schemas, provided that a sufficiently informative input graph is presented. We study the case where the input graph is typed, i.e., every node is given with its types. We limit our attention to a practical fragment ShEx0 of Shape Expressions Schemas that has an equivalent graphical representation in the form of shape graphs. We investigate the problem of constructing a canonical representative of a given shape graph. Finally, we present a sound and complete algorithm for shape graphs, thus showing that ShEx0 is learnable from typed graphs.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
245,593
1903.05157
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models
Recent advances in machine learning, especially techniques such as deep neural networks, are promoting a range of high-stakes applications, including autonomous driving, which often relies on deep learning for perception. While deep learning for perception has been shown to be vulnerable to a host of subtle adversarial manipulations of images, end-to-end demonstrations of successful attacks, which manipulate the physical environment and result in physical consequences, are scarce. Moreover, attacks typically involve carefully constructed adversarial examples at the level of pixels. We demonstrate the first end-to-end attacks on autonomous driving in simulation, using simple physically realizable attacks: the painting of black lines on the road. These attacks target deep neural network models for end-to-end autonomous driving control. A systematic investigation shows that such attacks are surprisingly easy to engineer, and we describe scenarios (e.g., right turns) in which they are highly effective, and others that are less vulnerable (e.g., driving straight). Further, we use network deconvolution to demonstrate that the attacks succeed by inducing activation patterns similar to entirely different scenarios used in training.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
124,104
2211.07350
Does Debiasing Inevitably Degrade the Model Performance?
Gender bias in language models has attracted sufficient attention because it threatens social justice. However, most of the current debiasing methods degraded the model's performance on other tasks while the degradation mechanism is still mysterious. We propose a theoretical framework explaining the three candidate mechanisms of the language model's gender bias. We use our theoretical framework to explain why the current debiasing methods cause performance degradation. We also discover a pathway through which debiasing will not degrade the model performance. We further develop a causality-detection fine-tuning approach to correct gender bias. The numerical experiment demonstrates that our method is able to lead to double dividends: partially mitigating gender bias while avoiding performance degradation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
330,208
1501.06166
HARQ Buffer Management: An Information-Theoretic View
A key practical constraint on the design of Hybrid automatic repeat request (HARQ) schemes is the size of the on-chip buffer that is available at the receiver to store previously received packets. In fact, in modern wireless standards such as LTE and LTE-A, the HARQ buffer size is one of the main drivers of the modem area and power consumption. This has recently highlighted the importance of HARQ buffer management, that is, of the use of buffer-aware transmission schemes and of advanced compression policies for the storage of received data. This work investigates HARQ buffer management by leveraging information-theoretic achievability arguments based on random coding. Specifically, standard HARQ schemes, namely Type-I, Chase Combining and Incremental Redundancy, are first studied under the assumption of a finite-capacity HARQ buffer by considering both coded modulation, via Gaussian signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds light on the impact of different compression strategies, namely the conventional compression of log-likelihood ratios and the direct digitization of baseband signals, on the throughput. Then, coding strategies based on layered modulation and optimized coding blocklength are investigated, highlighting the benefits of HARQ buffer-aware transmission schemes. The optimization of baseband compression for multiple-antenna links is also studied, demonstrating the optimality of a transform coding approach.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,579
2109.01120
Automatic Diagnosis of Schizophrenia in EEG Signals Using CNN-LSTM Models
Schizophrenia (SZ) is a mental disorder whereby, due to the secretion of specific chemicals in the brain, the function of some brain regions is out of balance, leading to the lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis via electroencephalography (EEG) signals. The obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, has been used. First, EEG signals were divided into 25 s time frames and then were normalized by z-score or norm L2. In the classification step, two different approaches were considered for SZ diagnosis via EEG signals. In this step, the classification of EEG signals was first carried out by conventional machine learning methods, e.g., support vector machine, k-nearest neighbors, decision tree, na\"ive Bayes, random forest, extremely randomized trees, and bagging. Various proposed DL models, namely, long short-term memories (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs, were used in the following. In this step, the DL models were implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture has had the best performance. In this architecture, the ReLU activation function with the z-score and L2-combined normalization was used. The proposed CNN-LSTM model has achieved an accuracy percentage of 99.25%, better than the results of most former studies in this field. It is worth mentioning that to perform all simulations, the k-fold cross-validation method with k = 5 has been used.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
253,344
2108.11368
CDCGen: Cross-Domain Conditional Generation via Normalizing Flows and Adversarial Training
How to generate conditional synthetic data for a domain without utilizing information about its labels/attributes? Our work presents a solution to the above question. We propose a transfer learning-based framework utilizing normalizing flows, coupled with both maximum-likelihood and adversarial training. We model a source domain (labels available) and a target domain (labels unavailable) with individual normalizing flows, and perform domain alignment to a common latent space using adversarial discriminators. Due to the invertible property of flow models, the mapping has exact cycle consistency. We also learn the joint distribution of the data samples and attributes in the source domain by employing an encoder to map attributes to the latent space via adversarial training. During the synthesis phase, given any combination of attributes, our method can generate synthetic samples conditioned on them in the target domain. Empirical studies confirm the effectiveness of our method on benchmarked datasets. We envision our method to be particularly useful for synthetic data generation in label-scarce systems by generating non-trivial augmentations via attribute transformations. These synthetic samples will introduce more entropy into the label-scarce domain than their geometric and photometric transformation counterparts, helpful for robust downstream tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
252,166
1803.11285
Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking
In this paper we address issues with image retrieval benchmarking on standard and popular Oxford 5k and Paris 6k datasets. In particular, annotation errors, the size of the dataset, and the level of challenge are addressed: new annotation for both datasets is created with an extra attention to the reliability of the ground truth. Three new protocols of varying difficulty are introduced. The protocols allow fair comparison between different methods, including those using a dataset pre-processing stage. For each dataset, 15 new challenging queries are introduced. Finally, a new set of 1M hard, semi-automatically cleaned distractors is selected. An extensive comparison of the state-of-the-art methods is performed on the new benchmark. Different types of methods are evaluated, ranging from local-feature-based to modern CNN-based methods. The best results are achieved by taking the best of the two worlds. Most importantly, image retrieval appears far from being solved.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
93,860
2410.06114
UnSeGArmaNet: Unsupervised Image Segmentation using Graph Neural Networks with Convolutional ARMA Filters
The data-hungry approach of supervised classification drives the interest of the researchers toward unsupervised approaches, especially for problems such as medical image segmentation, where labeled data are difficult to get. Motivated by the recent success of Vision transformers (ViT) in various computer vision tasks, we propose an unsupervised segmentation framework with a pre-trained ViT. Moreover, by harnessing the graph structure inherent within the image, the proposed method achieves a notable performance in segmentation, especially in medical images. We further introduce a modularity-based loss function coupled with an Auto-Regressive Moving Average (ARMA) filter to capture the inherent graph topology within the image. Finally, we observe that employing Scaled Exponential Linear Unit (SELU) and SILU (Swish) activation functions within the proposed Graph Neural Network (GNN) architecture enhances the performance of segmentation. The proposed method provides state-of-the-art performance (even comparable to supervised methods) on benchmark image segmentation datasets such as ECSSD, DUTS, and CUB, as well as challenging medical image segmentation datasets such as KVASIR, CVC-ClinicDB, ISIC-2018. The github repository of the code is available on \url{https://github.com/ksgr5566/UnSeGArmaNet}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
496,054
2403.02698
Causal Walk: Debiasing Multi-Hop Fact Verification with Front-Door Adjustment
Conventional multi-hop fact verification models are prone to rely on spurious correlations from the annotation artifacts, leading to an obvious performance decline on unbiased datasets. Among the various debiasing works, the causal inference-based methods become popular by performing theoretically guaranteed debiasing such as causal intervention or counterfactual reasoning. However, existing causal inference-based debiasing methods, which mainly formulate fact verification as a single-hop reasoning task to tackle shallow bias patterns, cannot deal with the complicated bias patterns hidden in multiple hops of evidence. To address the challenge, we propose Causal Walk, a novel method for debiasing multi-hop fact verification from a causal perspective with front-door adjustment. Specifically, in the structural causal model, the reasoning path between the treatment (the input claim-evidence graph) and the outcome (the veracity label) is introduced as the mediator to block the confounder. With the front-door adjustment, the causal effect between the treatment and the outcome is decomposed into the causal effect between the treatment and the mediator, which is estimated by applying the idea of random walk, and the causal effect between the mediator and the outcome, which is estimated with normalized weighted geometric mean approximation. To investigate the effectiveness of the proposed method, an adversarial multi-hop fact verification dataset and a symmetric multi-hop fact verification dataset are proposed with the help of the large language model. Experimental results show that Causal Walk outperforms some previous debiasing methods on both existing datasets and the newly constructed datasets. Code and data will be released at https://github.com/zcccccz/CausalWalk.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
434,905
2407.05489
How Effective are State Space Models for Machine Translation?
Transformers are the current architecture of choice for NLP, but their attention layers do not scale well to long contexts. Recent works propose to replace attention with linear recurrent layers -- this is the case for state space models, which enjoy efficient training and inference. However, it remains unclear whether these models are competitive with transformers in machine translation (MT). In this paper, we provide a rigorous and comprehensive experimental comparison between transformers and linear recurrent models for MT. Concretely, we experiment with RetNet, Mamba, and hybrid versions of Mamba which incorporate attention mechanisms. Our findings demonstrate that Mamba is highly competitive with transformers on sentence and paragraph-level datasets, where in the latter both models benefit from shifting the training distribution towards longer sequences. Further analysis shows that integrating attention into Mamba improves translation quality, robustness to sequence length extrapolation, and the ability to recall named entities.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
471,000
2101.01601
Bilateral Grid Learning for Stereo Matching Networks
Real-time performance of stereo matching networks is important for many applications, such as automatic driving, robot navigation and augmented reality (AR). Although significant progress has been made in stereo matching networks in recent years, it is still challenging to balance real-time performance and accuracy. In this paper, we present a novel edge-preserving cost volume upsampling module based on the slicing operation in the learned bilateral grid. The slicing layer is parameter-free, which allows us to obtain a high-quality cost volume of high resolution from a low-resolution cost volume under the guidance of the learned guidance map efficiently. The proposed cost volume upsampling module can be seamlessly embedded into many existing stereo matching networks, such as GCNet, PSMNet, and GANet. The resulting networks are accelerated several times while maintaining comparable accuracy. Furthermore, we design a real-time network (named BGNet) based on this module, which outperforms existing published real-time deep stereo matching networks, as well as some complex networks on the KITTI stereo datasets. The code is available at https://github.com/YuhuaXu/BGNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
214,404
2012.00493
Problems of representation of electrocardiograms in convolutional neural networks
Using electrocardiograms as an example, we demonstrate the characteristic problems that arise when modeling one-dimensional signals containing an inaccurate repeating pattern by means of standard convolutional networks. We show that these problems are systemic in nature. They are due to how convolutional networks work with composite objects, parts of which are not fixed rigidly, but have significant mobility. We also demonstrate some counterintuitive effects related to generalization in deep networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,158
1804.03288
Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory
The Pepper robot has become a widely recognised face for the perceived potential of social robots to enter our homes and businesses. However, to date, commercial and research applications of the Pepper have been largely restricted to roles in which the robot is able to remain stationary. This restriction is the result of a number of technical limitations, including limited sensing capabilities, which have, as a result, reduced the number of roles in which use of the robot can be explored. In this paper, we present our approach to solving these problems, with the intention of opening up new research applications for the robot. To demonstrate the applicability of our approach, we have framed this work within the context of providing interactive tours of an open-plan robotics laboratory.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
94,593
2410.04929
Goal-Conditioned Terminal Value Estimation for Real-time and Multi-task Model Predictive Control
While MPC enables nonlinear feedback control by solving an optimal control problem at each timestep, the computational burden tends to be significantly large, making it difficult to optimize a policy within the control period. To address this issue, one possible approach is to utilize terminal value learning to reduce computational costs. However, the learned value cannot be used for other tasks in situations where the task dynamically changes in the original MPC setup. In this study, we develop an MPC framework with goal-conditioned terminal value learning to achieve multitask policy optimization while reducing computational time. Furthermore, by using a hierarchical control structure that allows the upper-level trajectory planner to output appropriate goal-conditioned trajectories, we demonstrate that a robot model is able to generate diverse motions. We evaluate the proposed method on a bipedal inverted pendulum robot model and confirm that combining goal-conditioned terminal value learning with an upper-level trajectory planner enables real-time control; thus, the robot successfully tracks a target trajectory on sloped terrain.
false
false
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
495,499
2001.11535
SGP-DT: Semantic Genetic Programming Based on Dynamic Targets
Semantic GP is a promising approach that introduces semantic awareness during genetic evolution. This paper presents a new Semantic GP approach based on Dynamic Target (SGP-DT) that divides the search problem into multiple GP runs. The evolution in each run is guided by a new (dynamic) target based on the residual errors. To obtain the final solution, SGP-DT combines the solutions of each run using linear scaling. SGP-DT presents a new methodology to produce the offspring that does not rely on the classic crossover. The synergy between such a methodology and linear scaling yields final solutions with low approximation error and computational cost. We evaluate SGP-DT on eight well-known data sets and compare with {\epsilon}-lexicase, a state-of-the-art evolutionary technique. SGP-DT achieves small RMSE values, on average 23.19% smaller than the one of {\epsilon}-lexicase.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
162,094
2010.04412
Fair and Representative Subset Selection from Data Streams
We study the problem of extracting a small subset of representative items from a large data stream. In many data mining and machine learning applications such as social network analysis and recommender systems, this problem can be formulated as maximizing a monotone submodular function subject to a cardinality constraint $k$. In this work, we consider the setting where data items in the stream belong to one of several disjoint groups and investigate the optimization problem with an additional \emph{fairness} constraint that limits selection to a given number of items from each group. We then propose efficient algorithms for the fairness-aware variant of the streaming submodular maximization problem. In particular, we first give a $ (\frac{1}{2}-\varepsilon) $-approximation algorithm that requires $ O(\frac{1}{\varepsilon} \log \frac{k}{\varepsilon}) $ passes over the stream for any constant $ \varepsilon>0 $. Moreover, we give a single-pass streaming algorithm that has the same approximation ratio of $(\frac{1}{2}-\varepsilon)$ when unlimited buffer sizes and post-processing time are permitted, and discuss how to adapt it to more practical settings where the buffer sizes are bounded. Finally, we demonstrate the efficiency and effectiveness of our proposed algorithms on two real-world applications, namely \emph{maximum coverage on large graphs} and \emph{personalized recommendation}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
199,732
2502.12442
HopRAG: Multi-Hop Reasoning for Logic-Aware Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) systems often struggle with imperfect retrieval, as traditional retrievers focus on lexical or semantic similarity rather than logical relevance. To address this, we propose HopRAG, a novel RAG framework that augments retrieval with logical reasoning through graph-structured knowledge exploration. During indexing, HopRAG constructs a passage graph, with text chunks as vertices and logical connections established via LLM-generated pseudo-queries as edges. During retrieval, it employs a retrieve-reason-prune mechanism: starting with lexically or semantically similar passages, the system explores multi-hop neighbors guided by pseudo-queries and LLM reasoning to identify truly relevant ones. Extensive experiments demonstrate HopRAG's superiority, achieving 76.78\% higher answer accuracy and 65.07\% improved retrieval F1 score compared to conventional methods. The repository is available at https://github.com/LIU-Hao-2002/HopRAG.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
534,864
1705.05424
Attack Detection in Sensor Network Target Localization Systems with Quantized Data
We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimate of the target location with a large mean-square-error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors is proposed to detect the attacked sensors by scrutinizing the existence of the geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for a sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
73,485
1708.04343
Spectral Methods for Passive Imaging: Non-asymptotic Performance and Robustness
We study the problem of passive imaging through convolutive channels. A scene is illuminated with an unknown, unstructured source, and the measured response is the convolution of this source with multiple channel responses, each of which is time-limited. Spectral methods based on the commutativity of convolution, first proposed and analyzed in the 1990s, provide an elegant mathematical framework for attacking this problem. However, these now classical methods are very sensitive to noise, especially when working from relatively small sample sizes. In this paper, we show that a linear subspace model on the coefficients of the impulse responses of the channels can make this problem well-posed. We derive non-asymptotic error bounds for the generic subspace model by analyzing the spectral gap of the cross-correlation matrix of the channels relative to the perturbation introduced by noise. Numerical results show that this modified spectral method offers significant improvements over the classical method and outperforms other competing methods for multichannel blind deconvolution.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
78,924
2302.10870
On Provable Copyright Protection for Generative Models
There is a growing concern that learned conditional generative models may output samples that are substantially similar to some copyrighted data $C$ that was in their training set. We give a formal definition of $\textit{near access-freeness (NAF)}$ and prove bounds on the probability that a model satisfying this definition outputs a sample similar to $C$, even if $C$ is included in its training set. Roughly speaking, a generative model $p$ is $\textit{$k$-NAF}$ if for every potentially copyrighted data $C$, the output of $p$ diverges by at most $k$-bits from the output of a model $q$ that $\textit{did not access $C$ at all}$. We also give generative model learning algorithms, which efficiently modify the original generative model learning algorithm in a black box manner, that output generative models with strong bounds on the probability of sampling protected content. Furthermore, we provide promising experiments for both language (transformers) and image (diffusion) generative models, showing minimal degradation in output quality while ensuring strong protections against sampling protected content.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
346,984
1902.06992
Stochastic Conditional Gradient++
In this paper, we consider the general non-oblivious stochastic optimization where the underlying stochasticity may change during the optimization procedure and depends on the point at which the function is evaluated. We develop Stochastic Frank-Wolfe++ ($\text{SFW}{++} $), an efficient variant of the conditional gradient method for minimizing a smooth non-convex function subject to a convex body constraint. We show that $\text{SFW}{++} $ converges to an $\epsilon$-first order stationary point by using $O(1/\epsilon^3)$ stochastic gradients. When further structure is present, $\text{SFW}{++}$'s theoretical guarantees, in terms of the convergence rate and quality of its solution, improve. In particular, for minimizing a convex function, $\text{SFW}{++} $ achieves an $\epsilon$-approximate optimum while using $O(1/\epsilon^2)$ stochastic gradients. It is known that this rate is optimal in terms of stochastic gradient evaluations. Similarly, for maximizing a monotone continuous DR-submodular function, a slightly different form of $\text{SFW}{++} $, called Stochastic Continuous Greedy++ ($\text{SCG}{++} $), achieves a tight $[(1-1/e)\text{OPT} -\epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients. Through an information theoretic argument, we also prove that $\text{SCG}{++} $'s convergence rate is optimal. Finally, for maximizing a non-monotone continuous DR-submodular function, we can achieve a $[(1/e)\text{OPT} -\epsilon]$ solution by using $O(1/\epsilon^2)$ stochastic gradients. We should highlight that our results and our novel variance reduction technique trivially extend to the standard and easier oblivious stochastic optimization settings for (non-)convex and continuous submodular settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
121,892
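The conditional gradient (Frank-Wolfe) template underlying the SFW++ abstract above can be sketched in its simplest deterministic form: minimize a smooth convex function over the probability simplex by repeatedly calling a linear minimization oracle and averaging. The toy objective and function name are assumptions, and this omits the paper's stochastic gradients and variance reduction.

```python
# Deterministic Frank-Wolfe over the probability simplex, minimizing the
# toy objective f(x) = ||x - c||^2 for a target c inside the simplex.
# Each step solves the linear minimization oracle (LMO) over the simplex,
# whose solution is a vertex: the coordinate with the smallest gradient.

def frank_wolfe_simplex(c, steps=200):
    n = len(c)
    x = [1.0 / n] * n                  # start at the simplex barycentre
    for t in range(steps):
        grad = [2.0 * (x[i] - c[i]) for i in range(n)]
        j = min(range(n), key=lambda i: grad[i])   # LMO: best vertex e_j
        gamma = 2.0 / (t + 2)                      # classic step size
        x = [(1.0 - gamma) * xi for xi in x]       # convex combination
        x[j] += gamma                              # keeps x on the simplex
    return x
```

Because each iterate is a convex combination of simplex vertices, the method is projection-free — the property that makes conditional gradient attractive for constraint sets with cheap LMOs.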
1811.06060
Hybrid Generative-Discriminative Models for Inverse Materials Design
Discovering new physical products and processes often demands enormous experimentation and expensive simulation. To design a new product with certain target characteristics, an extensive search is performed in the design space by trying out a large number of design combinations before reaching the target characteristics. However, forward searching for the target design becomes prohibitive when the target is itself moving or only partially understood. To address this bottleneck, we propose to use backward prediction by leveraging the rich data generated during earlier exploration and construct a machine learning framework to predict the design parameters for any target in a single step. This poses two technical challenges: the first caused by the one-to-many mapping when learning the inverse problem and the second caused by a user specifying the target specifications only partially. To overcome the challenges, we formulate this problem as conditional density estimation under a high-dimensional setting with incomplete input and multimodal output. We solve the problem through a deep hybrid generative-discriminative model, which is trained end-to-end to predict the optimum design.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
113,441
2204.04968
Bimodal Camera Pose Prediction for Endoscopy
Deducing the 3D structure of endoscopic scenes from images is exceedingly challenging. In addition to deformation and view-dependent lighting, tubular structures like the colon present problems stemming from their self-occluding and repetitive anatomical structure. In this paper, we propose SimCol, a synthetic dataset for camera pose estimation in colonoscopy, and a novel method that explicitly learns a bimodal distribution to predict the endoscope pose. Our dataset replicates real colonoscope motion and highlights the drawbacks of existing methods. We publish 18k RGB images from simulated colonoscopy with corresponding depth and camera poses and make our data generation environment in Unity publicly available. We evaluate different camera pose prediction methods and demonstrate that, when trained on our data, they generalize to real colonoscopy sequences, and our bimodal approach outperforms prior unimodal work.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,863
2311.15994
Adversarial Doodles: Interpretable and Human-drawable Attacks Provide Describable Insights
DNN-based image classifiers are susceptible to adversarial attacks. Most previous adversarial attacks do not have clear patterns, making it difficult to interpret attacks' results and gain insights into classifiers' mechanisms. Therefore, we propose Adversarial Doodles, which have interpretable shapes. We optimize black Bézier curves to fool the classifier by overlaying them onto the input image. By introducing random affine transformation and regularizing the doodled area, we obtain small-sized attacks that cause misclassification even when humans replicate them by hand. Adversarial doodles provide describable insights into the relationship between the human-drawn doodle's shape and the classifier's output, such as "When we add three small circles on a helicopter image, the ResNet-50 classifier mistakenly classifies it as an airplane."
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
410,716
2304.01636
Label-guided Attention Distillation for Lane Segmentation
Contemporary segmentation methods are usually based on deep fully convolutional networks (FCNs). However, layer-by-layer convolutions with a growing receptive field are not good at capturing long-range contexts such as lane markers in the scene. In this paper, we address this issue by designing a distillation method that exploits label structure when training the segmentation network. The intuition is that the ground-truth lane annotations themselves exhibit internal structure. We broadcast the structure hints throughout a teacher network, i.e., we train a teacher network that consumes a lane label map as input and attempts to replicate it as output. Then, the attention maps of the teacher network are adopted as supervisors of the student segmentation network. The teacher network, with label structure information embedded, knows distinctly where the convolution layers should pay visual attention. The proposed method is named Label-guided Attention Distillation (LGAD). It turns out that the student network learns significantly better with LGAD than when learning alone. As the teacher network is deprecated after training, our method does not increase the inference time. Note that LGAD can be easily incorporated in any lane segmentation network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
356,158
2309.12729
Coordinated Information Campaigns on Social Media: A Multifaceted Framework for Detection and Analysis
The prevalence of coordinated information campaigns in social media platforms has significant negative consequences across various domains, including social, political, and economic processes. This paper proposes a multifaceted framework for detecting and analysing coordinated message promotion on social media. By simultaneously considering features related to content, time, and network dimensions, our framework can capture the diverse nature of coordinated activity and identify anomalous user accounts that likely engaged in suspicious behaviour. Unlike existing solutions that rely on specific constraints, our approach is more flexible as it employs specialised components to extract the significant structures within a network and to detect the most unusual interactions. We demonstrate the effectiveness of our framework using two Twitter datasets, the Russian Internet Research Agency (IRA), and long-term discussions on Data Science topics. The results demonstrate our framework's ability to isolate unusual activity from expected normal behaviour and provide valuable insights for further qualitative investigation.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
393,915
2212.07761
Successive Interference Cancellation for Bandlimited Channels with Direct Detection
The maximum information rates for bandlimited channels with direct detection are achieved with joint detection and decoding (JDD), but JDD is often too complex to implement. Two receiver structures are studied to reduce complexity: separate detection and decoding (SDD) and successive interference cancellation (SIC). For bipolar modulation, frequency-domain raised-cosine pulse shaping, and fiber-optic channels with chromatic dispersion, SIC achieves rates close to those of JDD, thereby attaining significant energy gains over SDD and intensity modulation. Gibbs sampling further reduces the detector complexity and achieves rates close to those of the forward-backward algorithm at low to intermediate signal-to-noise ratio (SNR) but stalls at high SNR. Simulations with polar codes, higher-order modulation, and multi-level coding confirm the predicted gains.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
336,512
1702.01783
From Formalised State Machines to Implementations of Robotic Controllers
Controllers for autonomous robotic systems can be specified using state machines. However, these are typically developed in an ad hoc manner without formal semantics, which makes it difficult to analyse the controller. Simulations are often used during the development, but a rigorous connection between the designed controller and the implementation is often overlooked. This paper presents a state-machine based notation, RoboChart, together with a tool to automatically create code from the state machines, establishing a rigorous connection between specification and implementation. In RoboChart, a robot's controller is specified either graphically or using a textual description language. The controller code for simulation is automatically generated through a direct mapping from the specification. We demonstrate our approach using two case studies (self-organized aggregation and swarm taxis) in swarm robotics. The simulations are presented using two different simulators showing the general applicability of our approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
67,865
2404.00727
A Controlled Reevaluation of Coreference Resolution Models
All state-of-the-art coreference resolution (CR) models involve finetuning a pretrained language model. Whether the superior performance of one CR model over another is due to the choice of language model or other factors, such as the task-specific architecture, is difficult or impossible to determine due to lack of a standardized experimental setup. To resolve this ambiguity, we systematically evaluate five CR models and control for certain design decisions including the pretrained language model used by each. When controlling for language model size, encoder-based CR models outperform more recent decoder-based models in terms of both accuracy and inference speed. Surprisingly, among encoder-based CR models, more recent models are not always more accurate, and the oldest CR model that we test generalizes the best to out-of-domain textual genres. We conclude that controlling for the choice of language model reduces most, but not all, of the increase in F1 score reported in the past five years.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
443,082
2402.09066
Solid Waste Detection, Monitoring and Mapping in Remote Sensing Images: A Survey
The detection and characterization of illegal solid waste disposal sites are essential for environmental protection, particularly for mitigating pollution and health hazards. Improperly managed landfills contaminate soil and groundwater via rainwater infiltration, posing threats to both animals and humans. Traditional landfill identification approaches, such as on-site inspections, are time-consuming and expensive. Remote sensing is a cost-effective solution for the identification and monitoring of solid waste disposal sites that enables broad coverage and repeated acquisitions over time. Earth Observation (EO) satellites, equipped with an array of sensors and imaging capabilities, have been providing high-resolution data for several decades. Researchers proposed specialized techniques that leverage remote sensing imagery to perform a range of tasks such as waste site detection, dumping site monitoring, and assessment of suitable locations for new landfills. This review aims to provide a detailed illustration of the most relevant proposals for the detection and monitoring of solid waste sites by describing and comparing the approaches, the implemented techniques, and the employed data. Furthermore, since the data sources are of the utmost importance for developing an effective solid waste detection model, a comprehensive overview of the satellites and publicly available data sets is presented. Finally, this paper identifies the open issues in the state-of-the-art and discusses the relevant research directions for reducing the costs and improving the effectiveness of novel solid waste detection methods.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
429,353
cmp-lg/9510003
A Proposal for Word Sense Disambiguation using Conceptual Distance
This paper presents a method for the resolution of lexical ambiguity and its automatic evaluation over the Brown Corpus. The method relies on the use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual distance among concepts, captured by a Conceptual Density formula developed for this purpose. This fully automatic method requires no hand coding of lexical entries, hand tagging of text, nor any kind of training process. The results of the experiment have been automatically evaluated against SemCor, the sense-tagged version of the Brown Corpus.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,464
1306.0710
On the Optimum Cyclic Subcode Chains of $\mathcal{RM}(2,m)^*$ for Increasing Message Length
The distance profiles of linear block codes can be employed to design a variational coding scheme for encoding messages of variable length while achieving a lower decoding error probability via a large minimum Hamming distance. Considering convenience for encoding, we focus on the distance profiles with respect to cyclic subcode chains (DPCs) of cyclic codes over $GF(q)$ with length $n$ such that $\mbox{gcd}(n,q) = 1$. In this paper the optimum DPCs and the corresponding optimum cyclic subcode chains are investigated on the punctured second-order Reed-Muller code $\mathcal{RM}(2,m)^*$ for increasing message length, where two standards of optimality are studied according to the rhythm of increase.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
24,982
2207.13744
Lighting (In)consistency of Paint by Text
Whereas generative adversarial networks are capable of synthesizing highly realistic images of faces, cats, landscapes, or almost any other single category, paint-by-text synthesis engines can -- from a single text prompt -- synthesize realistic images of seemingly endless categories with arbitrary configurations and combinations. This powerful technology poses new challenges to the photo-forensic community. Motivated by the fact that paint by text is not based on explicit geometric or physical models, and the human visual system's general insensitivity to lighting inconsistencies, we provide an initial exploration of the lighting consistency of DALL-E-2 synthesized images to determine if physics-based forensic analyses will prove fruitful in detecting this new breed of synthetic media.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
310,388
2202.07857
Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series
Anomaly detection is a widely studied task for a broad variety of data types; among them, multiple time series appear frequently in applications, including for example, power grids and traffic networks. Detecting anomalies for multiple time series, however, is a challenging subject, owing to the intricate interdependencies among the constituent series. We hypothesize that anomalies occur in low density regions of a distribution and explore the use of normalizing flows for unsupervised anomaly detection, because of their superior quality in density estimation. Moreover, we propose a novel flow model by imposing a Bayesian network among constituent series. A Bayesian network is a directed acyclic graph (DAG) that models causal relationships; it factorizes the joint probability of the series into the product of easy-to-evaluate conditional probabilities. We call such a graph-augmented normalizing flow approach GANF and propose joint estimation of the DAG with flow parameters. We conduct extensive experiments on real-world datasets and demonstrate the effectiveness of GANF for density estimation, anomaly detection, and identification of time series distribution drift.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
280,685
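The Bayesian-network factorization that GANF (described above) imposes on the series can be illustrated in miniature: a DAG factorizes the joint probability into a product of easy-to-evaluate conditionals. The binary chain and its conditional probability tables below are hypothetical, chosen only to show the factorization.

```python
# Toy Bayesian network: binary chain A -> B -> C. The DAG factorizes the
# joint as p(a, b, c) = p(a) * p(b | a) * p(c | b), the same structural
# idea GANF applies to constituent time series (with flow-based densities
# in place of these hypothetical probability tables).

P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P_B_given_A[a][b]
P_C_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}   # P_C_given_B[b][c]

def joint(a, b, c):
    """Joint probability via the DAG factorization."""
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]
```

Because every factor conditions only on a node's parents, summing the product over all assignments recovers a proper distribution — the property that lets GANF evaluate the joint density of all series from per-node conditionals.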
2404.07837
Data-Driven System Identification of Quadrotors Subject to Motor Delays
Recently, non-linear control methods like Model Predictive Control (MPC) and Reinforcement Learning (RL) have attracted increased interest in the quadrotor control community. In contrast to classic control methods like cascaded PID controllers, MPC and RL heavily rely on an accurate model of the system dynamics. The process of quadrotor system identification is notoriously tedious and is often pursued with additional equipment like a thrust stand. Furthermore, low-level details like motor delays which are crucial for accurate end-to-end control are often neglected. In this work, we introduce a data-driven method to identify a quadrotor's inertia parameters, thrust curves, torque coefficients, and first-order motor delay purely based on proprioceptive data. The estimation of the motor delay is particularly challenging as usually, the RPMs cannot be measured. We derive a Maximum A Posteriori (MAP)-based method to estimate the latent time constant. Our approach only requires about a minute of flying data that can be collected without any additional equipment and usually consists of three simple maneuvers. Experimental results demonstrate the ability of our method to accurately recover the parameters of multiple quadrotors. It also facilitates the deployment of RL-based, end-to-end quadrotor control of a large quadrotor under harsh, outdoor conditions.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
445,993
2309.01360
Random Projections of Sparse Adjacency Matrices
We analyze a random projection method for adjacency matrices, studying its utility in representing sparse graphs. We show that these random projections retain the functionality of their underlying adjacency matrices while having extra properties that make them attractive as dynamic graph representations. In particular, they can represent graphs of different sizes and vertex sets in the same space, allowing for the aggregation and manipulation of graphs in a unified manner. We also provide results on how the size of the projections needs to scale in order to preserve accurate graph operations, showing that the size of the projections can scale linearly with the number of vertices while accurately retaining first-order graph information. We conclude by characterizing our random projection as a distance-preserving map of adjacency matrices analogous to the usual Johnson-Lindenstrauss map.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
389,656
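The Johnson-Lindenstrauss-style projection of adjacency columns described in the abstract above can be sketched concretely: map each vertex's {0,1} adjacency column through a random sign matrix, so that inner products (shared-neighbour counts, degrees) are preserved in expectation. The function name, sign-matrix construction, and dimensions here are illustrative assumptions, not the paper's exact map.

```python
import random

# Toy sketch: project each adjacency column a_v in {0,1}^n to
# (1/sqrt(d)) * R @ a_v, for a random d x n matrix R of iid +/-1 signs.
# Then <P(a_u), P(a_v)> is an unbiased estimate of <a_u, a_v>, the number
# of neighbours u and v share (and <P(a_v), P(a_v)> estimates deg(v)).

def project_adjacency(adj_cols, d, seed=0):
    rng = random.Random(seed)
    n = len(adj_cols[0])
    R = [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(d)]
    scale = d ** -0.5
    return [
        [scale * sum(R[r][i] * col[i] for i in range(n)) for r in range(d)]
        for col in adj_cols
    ]
```

With d large relative to log(number of vertices), the estimation error concentrates around zero — the usual Johnson-Lindenstrauss trade-off between projection size and distortion.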
2301.08815
DiffusionCT: Latent Diffusion Model for CT Image Standardization
Computed tomography (CT) is one of the modalities for effective lung cancer screening, diagnosis, treatment, and prognosis. The features extracted from CT images are now used to quantify spatial and temporal variations in tumors. However, CT images obtained from various scanners with customized acquisition protocols may introduce considerable variations in texture features, even for the same patient. This presents a fundamental challenge to downstream studies that require consistent and reliable feature analysis. Existing CT image harmonization models rely on GAN-based supervised or semi-supervised learning, with limited performance. This work addresses the issue of CT image harmonization using a new diffusion-based model, named DiffusionCT, to standardize CT images acquired from different vendors and protocols. DiffusionCT operates in the latent space by mapping a latent non-standard distribution into a standard one. DiffusionCT incorporates an Unet-based encoder-decoder, augmented by a diffusion model integrated into the bottleneck part. The model is designed in two training phases. The encoder-decoder is first trained, without embedding the diffusion model, to learn the latent representation of the input data. The latent diffusion model is then trained in the next training phase while fixing the encoder-decoder. Finally, the decoder synthesizes a standardized image with the transformed latent representation. The experimental results demonstrate a significant improvement in the performance of the standardization task using DiffusionCT.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
341,298
2502.14522
Investigating the Generalizability of ECG Noise Detection Across Diverse Data Sources and Noise Types
Electrocardiograms (ECGs) are essential for monitoring cardiac health, allowing clinicians to analyze heart rate variability (HRV), detect abnormal rhythms, and diagnose cardiovascular diseases. However, ECG signals, especially those from wearable devices, are often affected by noise artifacts caused by motion, muscle activity, or device-related interference. These artifacts distort R-peaks and the characteristic QRS complex, making HRV analysis unreliable and increasing the risk of misdiagnosis. Despite this, the few existing studies on ECG noise detection have primarily focused on a single dataset, limiting the understanding of how well noise detection models generalize across different datasets. In this paper, we investigate the generalizability of noise detection in ECG using a novel HRV-based approach through cross-dataset experiments on four datasets. Our results show that machine learning achieves an average accuracy of over 90\% and an AUPRC of more than 0.9. These findings suggest that regardless of the ECG data source or the type of noise, the proposed method maintains high accuracy even on unseen datasets, demonstrating the feasibility of generalizability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
535,867
2305.11489
Incomplete Multi-view Clustering via Diffusion Completion
Incomplete multi-view clustering is a challenging and non-trivial task to provide effective data analysis for large amounts of unlabeled data in the real world. All incomplete multi-view clustering methods need to address the problem of how to reduce the impact of missing views. To address this issue, we propose diffusion completion to recover the missing views integrated into an incomplete multi-view clustering framework. Based on the observable views information, the diffusion model is used to recover the missing views, and then the consistency information of the multi-view data is learned by contrastive learning to improve the performance of multi-view clustering. To the best of our knowledge, this may be the first work to incorporate diffusion models into an incomplete multi-view clustering framework. Experimental results show that the proposed method performs well in recovering the missing views while achieving superior clustering performance compared to state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
365,561
2409.01205
Geometric Scaling Laws for Axial Flux Permanent Magnet Motors in In-Wheel Powertrain Topologies
In this paper, we present geometric scaling models for axial flux motors (AFMs) to be used for in-wheel powertrain design optimization purposes. We first present a vehicle and powertrain model, with emphasis on the electric motor model. We construct the latter by formulating the analytical scaling laws for AFMs, based on the scaling concept of radial flux motors (RFMs) from the literature, specifically deriving the model of the main loss component in electric motors: the copper losses. We further present separate scaling models of motor parameters, losses and thermal models, as well as the torque limits and cost, as a function of the design variables. Second, we validate these scaling laws with several experiments leveraging high-fidelity finite-element simulations. Finally, we define an optimization problem that minimizes the energy consumption over a drive cycle, optimizing the motor size and transmission ratio for a wide range of electric vehicle powertrain topologies. In our study, we observe that the all-wheel drive topology equipped with in-wheel AFMs is the most efficient, but also generates the highest material cost.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
485,254
2007.06426
Multitask Non-Autoregressive Model for Human Motion Prediction
Human motion prediction, which aims at predicting future human skeletons given the past ones, is a typical sequence-to-sequence problem. Therefore, extensive efforts have continued to explore different RNN-based encoder-decoder architectures. However, by generating target poses conditioned on the previously generated ones, these models are prone to issues such as error accumulation. In this paper, we argue that such issues are mainly caused by adopting an autoregressive manner. Hence, a novel Non-auToregressive Model (NAT) is proposed with a complete non-autoregressive decoding scheme, as well as a context encoder and a positional encoding module. More specifically, the context encoder embeds the given poses from temporal and spatial perspectives. The frame decoder is responsible for predicting each future pose independently. The positional encoding module injects a positional signal into the model to indicate temporal order. Moreover, a multitask training paradigm is presented for both low-level human skeleton prediction and high-level human action recognition, resulting in a convincing improvement for the prediction task. Our approach is evaluated on the Human3.6M and CMU-Mocap benchmarks and outperforms state-of-the-art autoregressive methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
187,013
1602.02524
Mean and variance of the LQG cost function
Linear Quadratic Gaussian (LQG) systems are well-understood and methods to minimize the expected cost are readily available. Less is known about the statistical properties of the resulting cost function. The contribution of this paper is a set of analytic expressions for the mean and variance of the LQG cost function. These expressions are derived using two different methods, one using solutions to Lyapunov equations and the other using only matrix exponentials. Both the discounted and the non-discounted cost function are considered, as well as the finite-time and the infinite-time cost function. The derived expressions are successfully applied to an example system to reduce the probability of the cost exceeding a given threshold.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
51,870
2407.03704
Neural Probabilistic Logic Learning for Knowledge Graph Reasoning
Knowledge graph (KG) reasoning is a task that aims to predict unknown facts based on known factual samples. Reasoning methods can be divided into two categories: rule-based methods and KG-embedding based methods. The former possesses precise reasoning capabilities but finds it challenging to reason efficiently over large-scale knowledge graphs. While gaining the ability to reason over large-scale knowledge graphs, the latter sacrifices reasoning accuracy. This paper aims to design a reasoning framework called Neural Probabilistic Logic Learning (NPLL) that achieves accurate reasoning on knowledge graphs. Our approach introduces a scoring module that effectively enhances the expressive power of embedding networks, striking a balance between model simplicity and reasoning capabilities. We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference. We empirically evaluate our approach on several benchmark datasets, and the experimental results validate that our method substantially enhances the accuracy and quality of the reasoning results.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
470,260
2109.13101
Half a Dozen Real-World Applications of Evolutionary Multitasking, and More
Until recently, the potential to transfer evolved skills across distinct optimization problem instances (or tasks) was seldom explored in evolutionary computation. The concept of evolutionary multitasking (EMT) fills this gap. It unlocks a population's implicit parallelism to jointly solve a set of tasks, hence creating avenues for skills transfer between them. Despite it being early days, the idea of EMT has begun to show promise in a range of real-world applications. In the backdrop of recent advances, the contribution of this paper is twofold. First, a review of several application-oriented explorations of EMT in the literature is presented; the works are assimilated into half a dozen broad categories according to their respective application domains. Each of these six categories elaborates fundamental motivations to multitask, and contains a representative experimental study (referred from the literature). Second, a set of recipes is provided showing how problem formulations of general interest, those that cut across different disciplines, could be transformed in the new light of EMT. Our discussions emphasize the many practical use-cases of EMT, and are intended to spark future research towards crafting novel algorithms for real-world deployment.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
257,527
2305.19943
Central limit theorem for the overlaps on the Nishimori line
The overlap distribution of the Sherrington-Kirkpatrick model on the Nishimori line has been proved to be self averaging for large volumes. Here we study the joint distribution of the rescaled overlaps around their common mean and prove that it converges to a Gaussian vector.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
369,747
2401.09944
WindSeer: Real-time volumetric wind prediction over complex terrain aboard a small UAV
Real-time high-resolution wind predictions are beneficial for various applications including safe manned and unmanned aviation. Current weather models require too much compute and lack the necessary predictive capabilities as they are valid only at the scale of multiple kilometers and hours - much lower spatial and temporal resolutions than these applications require. Our work, for the first time, demonstrates the ability to predict low-altitude wind in real-time on limited-compute devices, from only sparse measurement data. We train a neural network, WindSeer, using only synthetic data from computational fluid dynamics simulations and show that it can successfully predict real wind fields over terrain with known topography from just a few noisy and spatially clustered wind measurements. WindSeer can generate accurate predictions at different resolutions and domain sizes on previously unseen topography without retraining. We demonstrate that the model successfully predicts historical wind data collected by weather stations and wind measured onboard drones.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
422,434
2210.03264
Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters
Research has shown that personality is a key driver to improve engagement and user experience in conversational systems. Conversational agents should also maintain a consistent persona to have an engaging conversation with a user. However, text generation datasets are often crowdsourced and thereby have an averaging effect, where the style of the generation model is an average style of all the crowd workers that have contributed to the dataset. While one can collect persona-specific datasets for each task, it would be an expensive and time-consuming annotation effort. In this work, we propose a novel transfer learning framework which updates only $0.3\%$ of model parameters to learn style-specific attributes for response generation. For the purpose of this study, we tackle the problem of stylistic story ending generation using the ROC stories Corpus. We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset. Through extensive experiments and evaluation metrics we show that our novel training procedure can improve the style generation by 200 over Encoder-Decoder baselines while maintaining on-par content relevance metrics with
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
321,958
2307.11289
PI-VEGAN: Physics Informed Variational Embedding Generative Adversarial Networks for Stochastic Differential Equations
We present a new category of physics-informed neural networks called physics informed variational embedding generative adversarial network (PI-VEGAN), that effectively tackles the forward, inverse, and mixed problems of stochastic differential equations. In these scenarios, the governing equations are known, but only a limited number of sensor measurements of the system parameters are available. We integrate the governing physical laws into PI-VEGAN with automatic differentiation, while introducing a variational encoder for approximating the latent variables of the actual distribution of the measurements. These latent variables are integrated into the generator to facilitate accurate learning of the characteristics of the stochastic differential equations. Our model consists of three components, namely the encoder, generator, and discriminator, each of which is updated alternately employing the stochastic gradient descent algorithm. We evaluate the effectiveness of PI-VEGAN in addressing forward, inverse, and mixed problems that require the concurrent calculation of system parameters and solutions. Numerical results demonstrate that the proposed method achieves satisfactory stability and accuracy in comparison with the previous physics-informed generative adversarial network (PI-WGAN).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
380,850