Dataset schema (as reported by the dataset viewer):

id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0 to 541k)
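Rows in this dump follow the schema above: string fields (id, title, abstract), one boolean flag per arXiv category, and an integer original-row index. A minimal, self-contained sketch of filtering such records follows; the two abbreviated records and the helper name `with_label` are illustrative assumptions, not part of the dataset itself.

```python
# Records shaped like this dataset's rows: string fields, one boolean
# flag per arXiv category, and the original integer row index.
# The two records below are abbreviated stand-ins for rows in the dump.
records = [
    {
        "id": "1910.13328",
        "title": "Weakly Supervised Prostate TMA Classification via Graph Convolutional Networks",
        "cs.LG": True, "cs.CV": True, "cs.IT": False,
        "__index_level_0__": 151_368,
    },
    {
        "id": "0805.1437",
        "title": "On the Spectrum of Large Random Hermitian Finite-Band Matrices",
        "cs.LG": False, "cs.CV": False, "cs.IT": True,
        "__index_level_0__": 1_743,
    },
]

# The labels are multi-label: a single record may set several category
# flags at once (e.g., both cs.LG and cs.CV above).
def with_label(recs, label):
    return [r for r in recs if r.get(label, False)]

print([r["id"] for r in with_label(records, "cs.LG")])  # ['1910.13328']
```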
1910.13328
Weakly Supervised Prostate TMA Classification via Graph Convolutional Networks
Histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. In prostate cancer, the Gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. However, the subjective interpretation of the Gleason score often suffers from large interobserver and intraobserver variability. Previous work on deep learning-based objective Gleason grading requires manual pixel-level annotation. In this work, we propose a weakly-supervised approach for grade classification in tissue micro-arrays (TMA) using graph convolutional networks (GCNs), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. As node-level features in our graph representation, we learn the morphometry of each cell using a contrastive predictive coding (CPC)-based self-supervised approach. We demonstrate that in five-fold cross-validation our method achieves $0.9659\pm0.0096$ AUC using only TMA-level labels. Our method demonstrates a 39.80% improvement over standard GCNs with texture features and a 29.27% improvement over GCNs with VGG19 features. Our proposed pipeline can be used to objectively stratify low- and high-risk cases, reducing inter- and intra-observer variability and pathologist workload.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
151,368
2103.13225
Structure-Aware Face Clustering on a Large-Scale Graph with $\bf{10^{7}}$ Nodes
Face clustering is a promising method for annotating unlabeled face images. Recent supervised approaches have greatly boosted face clustering accuracy; however, their performance is still far from satisfactory. These methods can be roughly divided into global-based and local-based ones. Global-based methods suffer from the limitation of training data scale, while local-based ones struggle to grasp the whole graph structure and usually take a long time for inference. Previous approaches fail to tackle these two challenges simultaneously. To address the dilemma of large-scale training and efficient inference, we propose the STructure-AwaRe Face Clustering (STAR-FC) method. Specifically, we design a structure-preserving subgraph sampling strategy to exploit the power of large-scale training data, which can increase the training data scale from ${10^{5}}$ to ${10^{7}}$. During inference, STAR-FC performs efficient full-graph clustering in two steps: graph parsing and graph refinement. The concept of node intimacy is introduced in the second step to mine local structural information. STAR-FC achieves a 91.97 pairwise F-score on partial MS1M within 310 s, surpassing the state of the art. Furthermore, we are the first to train on a very large-scale graph with 20M nodes, and we achieve superior inference results on 12M testing data. Overall, as a simple and effective method, the proposed STAR-FC provides a strong baseline for large-scale face clustering. Code is available at \url{https://sstzal.github.io/STAR-FC/}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
226,426
1907.05697
Dreaming machine learning: Lipschitz extensions for reinforcement learning on financial markets
We consider a quasi-metric topological structure for the construction of a new reinforcement learning model in the framework of financial markets. It is based on a Lipschitz type extension of reward functions defined in metric spaces. Specifically, the McShane and Whitney extensions are considered for a reward function which is defined by the total evaluation of the benefits produced by the investment decision at a given time. We define the metric as a linear combination of a Euclidean distance and an angular metric component. All information about the evolution of the system from the beginning of the time interval is used to support the extension of the reward function, but in addition this data set is enriched by adding some artificially produced states. Thus, the main novelty of our method is the way we produce more states -- which we call "dreams" -- to enrich learning. Using some known states of the dynamical system that represents the evolution of the financial market, we use our technique to simulate new states by interpolating real states and introducing some random variables. These new states are used to feed a learning algorithm designed to improve the investment strategy by following a typical reinforcement learning scheme.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
138,433
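For reference, the McShane and Whitney formulas mentioned in the abstract above are the standard extremal Lipschitz extensions. A sketch for a reward function $f$ defined on a subset $S$ of a metric space $(M, d)$ with Lipschitz constant $K$ (this notation is assumed here, not taken from the paper):

```latex
% McShane (lower) and Whitney (upper) extensions of f : S -> R
F^{-}(x) = \sup_{y \in S} \bigl( f(y) - K\, d(x, y) \bigr), \qquad
F^{+}(x) = \inf_{y \in S} \bigl( f(y) + K\, d(x, y) \bigr)
% Any Lipschitz extension F of f with constant K satisfies
% F^{-} \le F \le F^{+} on all of M.
```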
2104.03597
GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference
The increased amount of multi-modal medical data has opened up opportunities to simultaneously process various modalities, such as imaging and non-imaging data, to gain a comprehensive insight into the disease prediction domain. Recent studies using Graph Convolutional Networks (GCNs) provide novel semi-supervised approaches for integrating heterogeneous modalities while investigating patients' associations for disease prediction. However, when the meta-data used for graph construction is not available at inference time (e.g., coming from a distinct population), the conventional methods exhibit poor performance. To address this issue, we propose a novel semi-supervised approach named GKD based on knowledge distillation. We train a teacher component that employs the label-propagation algorithm alongside a deep neural network to benefit from the graph and non-graph modalities only in the training phase. The teacher component embeds all the available information into soft pseudo-labels, which are then used to train a deep student network for disease prediction on unseen test data for which the graph modality is unavailable. We perform our experiments on two public datasets for diagnosing autism spectrum disorder and Alzheimer's disease, along with a thorough analysis on synthetic multi-modal datasets. According to these experiments, GKD outperforms previous graph-based deep learning methods in terms of accuracy, AUC, and macro F1.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
229,121
2502.00158
Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework for Large Language Model Updating
Large Language Models (LLMs) excel in natural language processing by encoding extensive human knowledge, but their utility relies on timely updates as knowledge evolves. Updating LLMs involves two key tasks simultaneously: unlearning to remove unwanted knowledge and editing to incorporate new information. Existing methods face two major challenges: ineffective knowledge storage (either too sparse or too dense) and task conflicts between editing and unlearning, as validated through our theoretical and experimental results. To address these issues, we propose LOKA, a conflict-free framework for LLM updating based on a knowledge codebook. During training, updated knowledge is stored in multiple codebook memories. To optimize knowledge storage, a similarity-aware knowledge mapping ensures that related knowledge pieces are clustered and allocated to the same memory. Additionally, LOKA resolves task conflicts by employing task-specific and multi-task memories guided by a conflict score. In the inference stage, LOKA retrieves the most relevant memory from the codebook and plugs it into the original LLM to apply the updated knowledge. A learning-based router controls codebook activation to further improve knowledge utilization. Extensive experiments demonstrate the effectiveness of LOKA in LLM knowledge updating tasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
529,233
2407.17767
Online Learning for Autonomous Management of Intent-based 6G Networks
The growing complexity of networks and the variety of future scenarios with diverse and often stringent performance requirements call for a higher level of automation. Intent-based management emerges as a solution to attain a high level of automation, enabling human operators to communicate with the network solely through high-level intents. The intents consist of targets in the form of expectations on a service (e.g., a latency expectation), and based on these expectations the required network configurations should be made accordingly. It is almost inevitable that a network action taken to fulfill one intent will negatively impact the performance of another intent, resulting in a conflict. In this paper, we aim to address the conflict issue in the autonomous management of intent-based networks, and we propose an online learning method based on a hierarchical multi-armed bandits approach for effective management. Thanks to this hierarchical structure, it performs efficient exploration and exploitation of network configurations with respect to dynamic network conditions. We show that our algorithm is effective with respect to resource allocation and the satisfaction of intent expectations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
476,104
2212.13007
Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation
Vision-based tactile sensors have gained extensive attention in the robotics community. These sensors are expected to extract contact information, i.e., haptic information, during in-hand manipulation, which makes them a natural match for haptic feedback applications. In this paper, we propose a contact force estimation method using the vision-based tactile sensor DIGIT, and we apply it to a position-force teleoperation architecture for force feedback. The force is estimated by building a depth map of the DIGIT gel surface deformation and applying a regression algorithm to the estimated depth data and ground-truth force data to obtain the depth-force relationship. The experiment is performed by constructing a grasping force feedback system with a haptic device as the leader robot and a parallel robot gripper as the follower robot, where the DIGIT sensor is attached to the tip of the gripper to estimate the contact force. Preliminary results show the capability of using this low-cost vision-based sensor for force feedback applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
338,196
2012.05966
A tuning algorithm for a sliding mode controller of buildings with ATMD
This paper proposes an automatic tuning algorithm for a sliding mode controller (SMC) based on Ackermann's formula, which attenuates the structural vibrations of a seismically excited building equipped with an Active Tuned Mass Damper (ATMD) mounted on its top floor. The switching gain and sliding surface of the SMC are designed through the proposed tuning algorithm to suppress the structural vibrations by minimizing either the top-floor displacement or the control force applied to the ATMD. Moreover, the tuning algorithm selects the SMC parameters to guarantee the following closed-loop characteristics: 1) the transient responses of the structure and the ATMD are sufficiently fast and damped; and 2) the control force, as well as the displacements and velocities of the building and ATMD, are within acceptable limits under the frequency band of the earthquake excitation. The proposed SMC shows robustness against unmodeled dynamics such as the friction of the damper. Experimental results on a reduced-scale structure demonstrate the efficiency of the tuning algorithm for the SMC, which is compared with the traditional Linear Quadratic Regulator (LQR) and with the Optimal Sliding Mode Controller (OSMC).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
210,940
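For context, a generic sliding mode controller of the kind tuned in the abstract above has the following standard structure. This is a sketch with assumed notation; the paper's specific design via Ackermann's formula determines how the surface matrix $S$ is chosen, which is not reproduced here.

```latex
% Sliding surface and switching control law (generic SMC sketch)
\sigma(x) = S x, \qquad
u = u_{\mathrm{eq}} - \eta \, \operatorname{sign}\bigl(\sigma(x)\bigr), \quad \eta > 0
% S defines the sliding surface \sigma(x) = 0; \eta is the switching
% gain driving the state onto the surface despite matched disturbances.
```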
2111.09485
3D Lip Event Detection via Interframe Motion Divergence at Multiple Temporal Resolutions
The lip is a dominant dynamic facial unit when a person is speaking. Detecting lip events is beneficial to speech analysis and support for the hearing impaired. This paper proposes a 3D lip event detection pipeline that automatically determines lip events from a 3D speaking lip sequence. We define a motion divergence measure using 3D lip landmarks to quantify the interframe dynamics of a 3D speaking lip. Then, we cast interframe motion detection in a multi-temporal-resolution framework that allows the detection to apply to different speaking speeds. The experiments on the S3DFM Dataset investigate the overall 3D lip dynamics based on the proposed motion divergence. The proposed 3D pipeline detects opening and closing lip events across 100 sequences, achieving state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
267,028
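A hypothetical sketch in the spirit of the abstract above: an interframe motion-divergence measure taken as the mean Euclidean displacement of 3D lip landmarks between frames, evaluated at several temporal strides so that both slow and fast speech produce detectable peaks. The landmark layout, threshold, and function names below are illustrative assumptions, not the paper's definitions.

```python
import math

def motion_divergence(frame_a, frame_b):
    """Mean 3D displacement between two sets of corresponding landmarks."""
    dists = [math.dist(p, q) for p, q in zip(frame_a, frame_b)]
    return sum(dists) / len(dists)

def detect_events(sequence, stride, threshold):
    """Flag frame indices whose divergence at the given temporal stride
    exceeds the threshold; varying the stride gives the multiple
    temporal resolutions."""
    return [
        t for t in range(len(sequence) - stride)
        if motion_divergence(sequence[t], sequence[t + stride]) > threshold
    ]

# Toy sequence: 4 frames of 2 landmarks; the lip "opens" between frames 1 and 2.
seq = [
    [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    [(0.0, -0.5, 0.0), (0.0, 1.5, 0.0)],
    [(0.0, -0.5, 0.0), (0.0, 1.5, 0.0)],
]
print(detect_events(seq, stride=1, threshold=0.25))  # [1]
```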
1808.05577
Deeper Image Quality Transfer: Training Low-Memory Neural Networks for 3D Images
In this paper we address the memory demands that come with the processing of 3-dimensional, high-resolution, multi-channeled medical images in deep learning. We exploit memory-efficient backpropagation techniques to reduce the memory complexity of network training from linear in the network's depth to roughly constant, permitting us to elongate deep architectures with negligible memory increase. We evaluate our methodology in the paradigm of Image Quality Transfer, whilst noting its potential application to various tasks that use deep learning. We study the impact of depth on accuracy and show that deeper models have more predictive power and may better exploit larger training sets. We obtain substantially better results than the previous state-of-the-art model with only a slight memory increase, reducing the root-mean-squared error by 13%. Our code is publicly available.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
105,379
1910.05624
A Research Platform for Multi-Robot Dialogue with Humans
This paper presents a research platform that supports spoken dialogue interaction with multiple robots. The demonstration showcases our crafted MultiBot testing scenario in which users can verbally issue search, navigate, and follow instructions to two robotic teammates: a simulated ground robot and an aerial robot. This flexible language and robotic platform takes advantage of existing tools for speech recognition and dialogue management that are compatible with new domains, and implements an inter-agent communication protocol (tactical behavior specification), where verbal instructions are encoded for tasks assigned to the appropriate robot.
true
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
149,124
2312.01445
Classification of Home Network Problems with Transformers
We propose a classifier that can identify ten common home network problems based on the raw textual output of networking tools such as ping, dig, and ip. Our deep learning model uses an encoder-only transformer architecture with a particular pre-tokenizer that we propose for splitting the tool output into token sequences. The use of transformers distinguishes our approach from related work on network problem classification, which still primarily relies on non-deep-learning methods. Our model achieves high accuracy in our experiments, demonstrating the high potential of transformer-based problem classification for the home network.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
412,442
2110.15350
XDEEP-MSI: Explainable Bias-Rejecting Microsatellite Instability Deep Learning System In Colorectal Cancer
We present a system for the prediction of microsatellite instability (MSI) from H&E images of colorectal cancer using deep learning (DL) techniques customized for tissue microarrays (TMAs). The system incorporates an end-to-end image preprocessing module that produces tiles at multiple magnifications in the regions of interest, as guided by a tissue classifier module, and a multiple-bias-rejecting module. The training and validation TMA samples were obtained from the EPICOLON project and further enriched with samples from a single institution. A systematic study of biases at the tile level identified three protected (bias) variables associated with the learned representations of a baseline model: the project of origin of the samples, the patient spot, and the TMA glass on which each spot was placed. A multiple-bias-rejecting technique based on adversarial training is implemented in the DL architecture so as to directly avoid learning the batch effects of these variables. The features learned by the bias-ablated model have maximum discriminative power with respect to the task and minimal statistical mean dependence on the biases. The impact of different magnifications and tissue types, and the model performance at the tile vs. patient level, are analyzed. The AUC at the tile level, including all three selected tissues (tumor epithelium, mucin, and lymphocytic regions) and four magnifications, was 0.87 +/- 0.03 and increased to 0.90 +/- 0.03 at the patient level. To the best of our knowledge, this is the first work to incorporate a multiple-bias ablation technique in the DL architecture in digital pathology, and the first to use TMAs for the MSI prediction task.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
263,854
0805.1437
On the Spectrum of Large Random Hermitian Finite-Band Matrices
The open problem of calculating the limiting spectrum (or its Shannon transform) of increasingly large random Hermitian finite-band matrices is described. In general, these matrices include a finite number of non-zero diagonals around their main diagonal regardless of their size. Two different communication setups which may be modeled using such matrices are presented: a simple cellular uplink channel, and a time varying inter-symbol interference channel. Selected recent information-theoretic works dealing directly with such channels are reviewed. Finally, several characteristics of the still unknown limiting spectrum of such matrices are listed, and some reflections are touched upon.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,743
1702.06819
Distributed Representations of Signed Networks
Recent successes in word embedding and document embedding have motivated researchers to explore similar representations for networks and to use such representations for tasks such as edge prediction, node label prediction, and community detection. Such network embedding methods are largely focused on finding distributed representations for unsigned networks and are unable to discover embeddings that respect polarities inherent in edges. We propose SIGNet, a fast scalable embedding method suitable for signed networks. Our proposed objective function aims to carefully model the social structure implicit in signed networks by reinforcing the principles of social balance theory. Our method builds upon the traditional word2vec family of embedding approaches and adds a new targeted node sampling strategy to maintain structural balance in higher-order neighborhoods. We demonstrate the superiority of SIGNet over state-of-the-art methods proposed for both signed and unsigned networks on several real world datasets from different domains. In particular, SIGNet offers an approach to generate a richer vocabulary of features of signed networks to support representation and reasoning.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
68,678
2206.00007
A Cross-City Federated Transfer Learning Framework: A Case Study on Urban Region Profiling
Data insufficiency problems (i.e., missing data and label scarcity), caused by inadequate services and infrastructures or imbalanced development levels of cities, have seriously affected urban computing tasks in real scenarios. Prior transfer learning methods inspire an elegant solution to data insufficiency, but they are concerned with only one kind of insufficiency and fail to account for both. In addition, most previous cross-city transfer methods overlook inter-city data privacy, which is a public concern in practical applications. To address these challenging problems, we propose a novel Cross-city Federated Transfer Learning framework (CcFTL) to cope with the data insufficiency and privacy problems. Concretely, CcFTL transfers relational knowledge from multiple rich-data source cities to the target city. The model parameters specific to the target task are first trained on the source data and then fine-tuned to the target city by parameter transfer. With our adaptation of federated training and homomorphic encryption settings, CcFTL can effectively deal with the data privacy problem among cities. We take urban region profiling as an application of smart cities and evaluate the proposed method with a real-world study. The experiments demonstrate the notable superiority of our framework over several competitive state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
299,960
2309.14841
Towards a Neuronally Consistent Ontology for Robotic Agents
The Collaborative Research Center for Everyday Activity Science & Engineering (CRC EASE) aims to enable robots to perform environmental interaction tasks with close to human capacity. It therefore employs a shared ontology to model the activity of both kinds of agents, empowering robots to learn from human experiences. To properly describe these human experiences, the ontology will strongly benefit from incorporating characteristics of neuronal information processing which are not accessible from a behavioral perspective alone. We, therefore, propose the analysis of human neuroimaging data for evaluation and validation of concepts and events defined in the ontology model underlying most of the CRC projects. In an exploratory analysis, we employed an Independent Component Analysis (ICA) on functional Magnetic Resonance Imaging (fMRI) data from participants who were presented with the same complex video stimuli of activities as robotic and human agents in different environments and contexts. We then correlated the activity patterns of brain networks represented by derived components with timings of annotated event categories as defined by the ontology model. The present results demonstrate a subset of common networks with stable correlations and specificity towards particular event classes and groups, associated with environmental and contextual factors. These neuronal characteristics will open up avenues for adapting the ontology model to be more consistent with human information processing.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
394,759
2401.12471
Training-Free Action Recognition and Goal Inference with Dynamic Frame Selection
We introduce VidTFS, a training-free, open-vocabulary video goal and action inference framework that combines a frozen vision foundation model (VFM) and a large language model (LLM) with a novel dynamic frame selection module. Our experiments demonstrate that the proposed frame selection module significantly improves the performance of the framework. We validate the performance of VidTFS on four widely used video datasets, including CrossTask, COIN, UCF101, and ActivityNet, covering goal inference and action recognition tasks under open-vocabulary settings without requiring any training or fine-tuning. The results show that VidTFS outperforms pretrained and instruction-tuned multimodal language models that directly stack an LLM and a VFM for downstream video inference tasks. With its adaptability, VidTFS shows potential for generalizing to new training-free video inference tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
423,388
2011.04021
On the role of planning in model-based deep reinforcement learning
Model-based planning is often thought to be necessary for deep, careful reasoning and generalization in artificial agents. While recent successes of model-based reinforcement learning (MBRL) with deep function approximation have strengthened this hypothesis, the resulting diversity of model-based methods has also made it difficult to track which components drive success and why. In this paper, we seek to disentangle the contributions of recent methods by focusing on three questions: (1) How does planning benefit MBRL agents? (2) Within planning, what choices drive performance? (3) To what extent does planning improve generalization? To answer these questions, we study the performance of MuZero (Schrittwieser et al., 2019), a state-of-the-art MBRL algorithm with strong connections and overlapping components with many other MBRL algorithms. We perform a number of interventions and ablations of MuZero across a wide range of environments, including control tasks, Atari, and 9x9 Go. Our results suggest the following: (1) Planning is most useful in the learning process, both for policy updates and for providing a more useful data distribution. (2) Using shallow trees with simple Monte-Carlo rollouts is as performant as more complex methods, except in the most difficult reasoning tasks. (3) Planning alone is insufficient to drive strong generalization. These results indicate where and how to utilize planning in reinforcement learning settings, and highlight a number of open questions for future MBRL research.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
205,435
2302.00284
Selective Uncertainty Propagation in Offline RL
We consider the finite-horizon offline reinforcement learning (RL) setting, and are motivated by the challenge of learning the policy at any step h in dynamic programming (DP) algorithms. To learn this policy, it suffices to evaluate the treatment effect of deviating from the behavioral policy at step h after having optimized the policy for all future steps. Since the policy at any step can affect next-state distributions, the related distributional shift challenges can make this problem statistically much harder than estimating such treatment effects in the stochastic contextual bandit setting. However, the hardness of many real-world RL instances lies between the two regimes. We develop a flexible and general method called selective uncertainty propagation for confidence interval construction that adapts to the hardness of the associated distribution shift challenges. We show the benefits of our approach on toy environments and demonstrate the benefits of these techniques for offline policy learning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
343,169
1912.07277
ITENE: Intrinsic Transfer Entropy Neural Estimator
Quantifying the directionality of information flow is instrumental in understanding, and possibly controlling, the operation of many complex systems, such as transportation, social, neural, or gene-regulatory networks. The standard Transfer Entropy (TE) metric follows Granger's causality principle by measuring the Mutual Information (MI) between the past states of a source signal $X$ and the future state of a target signal $Y$ while conditioning on past states of $Y$. Hence, the TE quantifies the improvement, as measured by the log-loss, in the prediction of the target sequence $Y$ that can be accrued when, in addition to the past of $Y$, one also has available past samples from $X$. However, by conditioning on the past of $Y$, the TE also measures information that can be synergistically extracted by observing both the past of $X$ and $Y$, and not solely the past of $X$. Building on a private key agreement formulation, the Intrinsic TE (ITE) aims to discount such synergistic information to quantify the degree to which $X$ is \emph{individually} predictive of $Y$, independent of $Y$'s past. In this paper, an estimator of the ITE is proposed that is inspired by the recently proposed Mutual Information Neural Estimation (MINE). The estimator is based on a variational bound on the KL divergence, two-sample neural network classifiers, and the pathwise estimator of Monte Carlo gradients.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
157,566
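The transfer entropy described in words in the abstract above can be written compactly as follows. This is the standard form with assumed memory lengths $m$ and $l$ for the source and target histories; the paper's exact notation may differ.

```latex
% TE as conditional mutual information between the target's future
% and the source's past, given the target's past
T_{X \to Y}
  = I\!\left( Y_t ;\, X_{t-m}^{t-1} \,\middle|\, Y_{t-l}^{t-1} \right)
  = H\!\left( Y_t \mid Y_{t-l}^{t-1} \right)
  - H\!\left( Y_t \mid Y_{t-l}^{t-1},\, X_{t-m}^{t-1} \right)
```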
2502.10786
Epidemic-guided deep learning for spatiotemporal forecasting of Tuberculosis outbreak
Tuberculosis (TB) remains a formidable global health challenge, driven by complex spatiotemporal transmission dynamics and influenced by factors such as population mobility and behavioral changes. We propose an Epidemic-Guided Deep Learning (EGDL) approach that fuses mechanistic epidemiological principles with advanced deep learning techniques to enhance early warning systems and intervention strategies for TB outbreaks. Our framework is built upon a networked Susceptible-Infectious-Recovered (SIR) model augmented with a saturated incidence rate and graph Laplacian diffusion, capturing both long-term transmission dynamics and region-specific population mobility patterns. Compartmental model parameters are rigorously estimated using Bayesian inference via the Markov Chain Monte Carlo (MCMC) approach. Theoretical analysis leveraging the comparison principle and Green's formula establishes global stability properties of the disease-free and endemic equilibria. Building on these epidemiological insights, we design two forecasting architectures, EGDL-Parallel and EGDL-Series, that integrate the mechanistic outputs of the networked SIR model within deep neural networks. This integration mitigates the overfitting risks commonly encountered in data-driven methods and filters out noise inherent in surveillance data, resulting in reliable forecasts of real-world epidemic trends. Experiments conducted on TB incidence data from 47 prefectures in Japan demonstrate that our approach delivers robust and accurate predictions across multiple time horizons (short to medium-term forecasts). Additionally, incorporating uncertainty quantification through conformal prediction enhances the model's practical utility for guiding targeted public health interventions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
534,044
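One common way to write a networked SIR model with saturated incidence and graph-Laplacian diffusion, as described in the abstract above, is the following sketch. The symbols $\beta, \gamma, \alpha, \varepsilon$ and the graph Laplacian $L$ are assumed notation; the paper's exact parameterization may differ.

```latex
% Node i of the network; \beta S_i I_i / (1 + \alpha I_i) is the
% saturated incidence; -\varepsilon \sum_j L_{ij}(\cdot)_j is
% diffusion along the mobility graph (L = D - A)
\frac{dS_i}{dt} = -\frac{\beta S_i I_i}{1 + \alpha I_i}
                  - \varepsilon \sum_{j} L_{ij} S_j, \qquad
\frac{dI_i}{dt} = \frac{\beta S_i I_i}{1 + \alpha I_i} - \gamma I_i
                  - \varepsilon \sum_{j} L_{ij} I_j, \qquad
\frac{dR_i}{dt} = \gamma I_i - \varepsilon \sum_{j} L_{ij} R_j
```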
2112.08808
Simple Questions Generate Named Entity Recognition Datasets
Recent named entity recognition (NER) models often rely on human-annotated datasets, requiring the significant engagement of professional knowledge on the target domain and entities. This research introduces an ask-to-generate approach that automatically generates NER datasets by asking questions in simple natural language to an open-domain question answering system (e.g., "Which disease?"). Despite using fewer in-domain resources, our models, solely trained on the generated datasets, largely outperform strong low-resource models by an average F1 score of 19.4 for six popular NER benchmarks. Furthermore, our models provide competitive performance with rich-resource models that additionally leverage in-domain dictionaries provided by domain experts. In few-shot NER, we outperform the previous best model by an F1 score of 5.2 on three benchmarks and achieve new state-of-the-art performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
271,945
2110.01463
Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits
Linear contextual bandit is a popular online learning problem. It has been mostly studied in centralized learning settings. With the surging demand of large-scale decentralized model learning, e.g., federated learning, how to retain regret minimization while reducing communication cost becomes an open challenge. In this paper, we study linear contextual bandit in a federated learning setting. We propose a general framework with asynchronous model update and communication for a collection of homogeneous clients and heterogeneous clients, respectively. Rigorous theoretical analysis is provided about the regret and communication cost under this distributed learning framework; and extensive empirical evaluations demonstrate the effectiveness of our solution.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
258,781
2304.04752
A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas
This paper provides a comprehensive tutorial for Bayesian practitioners in pharmacometrics using Pumas workflows. We start by giving a brief motivation of Bayesian inference for pharmacometrics highlighting limitations in existing software that Pumas addresses. We then follow by a description of all the steps of a standard Bayesian workflow for pharmacometrics using code snippets and examples. This includes: model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counter-factual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison with cross-validation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple language. This includes many important ideas and precautions that users need to keep in mind when performing Bayesian analysis. Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large but we chose to focus our discussions on pharmacometrics in this paper to have a narrower scope in mind and given the nature of Pumas as a software primarily for pharmacometricians.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
357,341
2307.01618
A Stackelberg viral marketing design for two competing players
A Stackelberg duopoly model in which two firms compete to maximize their market share is considered. The firms offer a service/product to customers that are spread over several geographical regions (e.g., countries, provinces, or states). Each region has its own characteristics (spreading and recovery rates) of each service propagation. We consider that the spreading rate can be controlled by each firm and is subject to some investment that the firm does in each region. One of the main objectives of this work is to characterize the advertising budget allocation strategy for each firm across regions to maximize its market share when competing. To achieve this goal we propose a Stackelberg game model that is relatively simple while capturing the main effects of the competition for market share. By characterizing the strong/weak Stackelberg equilibria of the game, we provide the associated budget allocation strategy. In this setting, it is established under which conditions the solution of the game is the so-called "winner takes all". Numerical results expand upon our theoretical findings and we provide the equilibrium characterization for an example.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
377,414
1905.09797
Interpreting Adversarially Trained Convolutional Neural Networks
We attempt to interpret how adversarially trained convolutional neural networks (AT-CNNs) recognize objects. We design systematic approaches to interpret AT-CNNs in both qualitative and quantitative ways and compare them with normally trained models. Surprisingly, we find that adversarial training alleviates the texture bias of standard CNNs when trained on object recognition tasks, and helps CNNs learn a more shape-biased representation. We validate our hypothesis from two aspects. First, we compare the salience maps of AT-CNNs and standard CNNs on clean images and images under different transformations. The comparison could visually show that the prediction of the two types of CNNs is sensitive to dramatically different types of features. Second, to achieve quantitative verification, we construct additional test datasets that destroy either textures or shapes, such as style-transferred version of clean data, saturated images and patch-shuffled ones, and then evaluate the classification accuracy of AT-CNNs and normal CNNs on these datasets. Our findings shed some light on why AT-CNNs are more robust than those normally trained ones and contribute to a better understanding of adversarial training over CNNs from an interpretation perspective.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
131,837
1908.04682
CMB-GAN: Fast Simulations of Cosmic Microwave background anisotropy maps using Deep Learning
Cosmic Microwave Background (CMB) has been a cornerstone in many cosmology experiments and studies since it was discovered back in 1964. Traditional computational models like CAMB that are used for generating CMB temperature anisotropy maps are extremely resource intensive and act as a bottleneck in cosmology experiments that require a large amount of CMB data for analysis. In this paper, we present a new approach to the generation of CMB temperature maps using a specific class of neural networks called Generative Adversarial Network (GAN). We train our deep generative model to learn the complex distribution of CMB maps and efficiently generate new sets of CMB data in the form of 2D patches of anisotropy maps without losing much accuracy. We limit our experiment to the generation of 56$^{\circ}$ and 112$^{\circ}$ square patches of CMB maps. We have also trained a Multilayer perceptron model for estimation of baryon density from a CMB map; we will use this model for the performance evaluation of our generative model using diagnostic measures like the histogram of pixel intensities, the standard deviation of the pixel intensity distribution, the power spectrum, the cross power spectrum, the correlation matrix of the power spectrum, and the peak count. We show that the GAN model is able to efficiently generate CMB samples of multiple sizes and is sensitive to the cosmological parameters corresponding to the underlying distribution of the data. The primary advantage of this method is the exponential reduction in the computational time needed to generate the CMB data: the GAN model is able to generate the samples within seconds as opposed to hours required by the CAMB package, with an acceptable level of error and loss of information. We hope that future iterations of this methodology will replace traditional statistical methods of CMB data generation and help in large scale cosmological experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
141,541
2303.16209
AmorProt: Amino Acid Molecular Fingerprints Repurposing based Protein Fingerprint
As protein therapeutics play an important role in almost all medical fields, numerous studies have been conducted on proteins using artificial intelligence. Artificial intelligence has enabled data driven predictions without the need for expensive experiments. Nevertheless, unlike the various molecular fingerprint algorithms that have been developed, protein fingerprint algorithms have rarely been studied. In this study, we proposed the amino acid molecular fingerprints repurposing based protein (AmorProt) fingerprint, a protein sequence representation method that effectively uses the molecular fingerprints corresponding to 20 amino acids. Subsequently, the performances of the tree based machine learning and artificial neural network models were compared using (1) amyloid classification and (2) isoelectric point regression. Finally, the applicability and advantages of the developed platform were demonstrated through a case study and the following experiments: (3) comparison of dataset dependence with feature based methods; (4) feature importance analysis; and (5) protein space analysis. Consequently, the significantly improved model performance and data set independent versatility of the AmorProt fingerprint were verified. The results revealed that the current protein representation method can be applied to various fields related to proteins, such as predicting their fundamental properties or interaction with ligands.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
354,786
1811.05721
A Deterministic Algorithm for Bridging Anaphora Resolution
Previous work on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b) uses syntactic preposition patterns to calculate word relatedness. However, such patterns only consider NPs' head nouns and hence do not fully capture the semantics of NPs. Recently, Hou (2018) created word embeddings (embeddings_PP) to capture associative similarity (i.e., relatedness) between nouns by exploring the syntactic structure of noun phrases. But embeddings_PP only contains word representations for nouns. In this paper, we create new word vectors by combining embeddings_PP with GloVe. These new word embeddings (embeddings_bridging) are a more general lexical knowledge resource for bridging and allow us to easily represent the meaning of an NP beyond its head. We therefore develop a deterministic approach for bridging anaphora resolution, which represents the semantics of an NP based on its head noun and modifications. We show that this simple approach achieves competitive results compared to the best system in Hou et al. (2013b), which explores Markov Logic Networks to model the problem. Additionally, we further improve the results for bridging anaphora resolution reported in Hou (2018) by combining our simple deterministic approach with Hou et al. (2013b)'s best system MLN II.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
113,374
2111.01701
Improve Single-Point Zeroth-Order Optimization Using High-Pass and Low-Pass Filters
Single-point zeroth-order optimization (SZO) is useful in solving online black-box optimization and control problems in time-varying environments, as it queries the function value only once at each time step. However, the vanilla SZO method is known to suffer from a large estimation variance and slow convergence, which seriously limits its practical application. In this work, we borrow the idea of high-pass and low-pass filters from extremum seeking control (continuous-time version of SZO) and develop a novel SZO method called HLF-SZO by integrating these filters. It turns out that the high-pass filter coincides with the residual feedback method, and the low-pass filter can be interpreted as the momentum method. As a result, the proposed HLF-SZO achieves a much smaller variance and much faster convergence than the vanilla SZO method and empirically outperforms the residual-feedback SZO method, which is verified via extensive numerical experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
264,638
2007.06512
Deep Learning for Distributed Channel Feedback and Multiuser Precoding in FDD Massive MIMO
This paper shows that deep neural network (DNN) can be used for efficient and distributed channel estimation, quantization, feedback, and downlink multiuser precoding for a frequency-division duplex massive multiple-input multiple-output system in which a base station (BS) serves multiple mobile users, but with rate-limited feedback from the users to the BS. A key observation is that the multiuser channel estimation and feedback problem can be thought of as a distributed source coding problem. In contrast to the traditional approach where the channel state information (CSI) is estimated and quantized at each user independently, this paper shows that a joint design of pilots and a new DNN architecture, which maps the received pilots directly into feedback bits at the user side then maps the feedback bits from all the users directly into the precoding matrix at the BS, can significantly improve the overall performance. This paper further proposes robust design strategies with respect to channel parameters and also a generalizable DNN architecture for varying number of users and number of feedback bits. Numerical results show that the DNN-based approach with short pilot sequences and very limited feedback overhead can already approach the performance of conventional linear precoding schemes with full CSI.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
187,033
1602.04375
Science Question Answering using Instructional Materials
We provide a solution for elementary science tests using instructional materials. We posit that there is a hidden structure that explains the correctness of an answer given the question and instructional materials, and present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs and instructional materials), and uses what it learns to answer novel elementary science questions. Our evaluation shows that our framework outperforms several strong baselines.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
52,119
2210.02604
Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces
Data-driven machine learning models are being increasingly employed in several important inference problems in biology, chemistry, and physics which require learning over combinatorial spaces. Recent empirical evidence (see, e.g., [1], [2], [3]) suggests that regularizing the spectral representation of such models improves their generalization power when labeled data is scarce. However, despite these empirical studies, the theoretical underpinning of when and how spectral regularization enables improved generalization is poorly understood. In this paper, we focus on learning pseudo-Boolean functions and demonstrate that regularizing the empirical mean squared error by the L_1 norm of the spectral transform of the learned function reshapes the loss landscape and allows for data-frugal learning, under a restricted secant condition on the learner's empirical error measured against the ground truth function. Under a weaker quadratic growth condition, we show that stationary points which also approximately interpolate the training data points achieve statistically optimal generalization performance. Complementing our theory, we empirically demonstrate that running gradient descent on the regularized loss results in a better generalization performance compared to baseline algorithms in several data-scarce real-world problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,704
1711.05885
Crowdsourcing Question-Answer Meaning Representations
We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, QA-SRL, and AMR) along with many previously under-resourced ones, including implicit arguments and relations. The QAMR data and annotation code is made publicly available to enable future work on how best to model these complex phenomena.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
84,664
2306.16619
Laxity-Aware Scalable Reinforcement Learning for HVAC Control
Demand flexibility plays a vital role in maintaining grid balance, reducing peak demand, and saving customers' energy bills. Given their highly shiftable load and significant contribution to a building's energy consumption, Heating, Ventilation, and Air Conditioning (HVAC) systems can provide valuable demand flexibility to the power systems by adjusting their energy consumption in response to electricity price and power system needs. To exploit this flexibility in both operation time and power, it is imperative to accurately model and aggregate the load flexibility of a large population of HVAC systems as well as to design effective control algorithms. In this paper, we tackle the curse of dimensionality issue in modeling and control by utilizing the concept of laxity to quantify the emergency level of each HVAC operation request. We further propose a two-level approach to address energy optimization for a large population of HVAC systems. The lower level involves an aggregator that aggregates HVAC load laxity information and uses the least-laxity-first (LLF) rule to allocate real-time power for individual HVAC systems based on the controller's total power. Due to the complex and uncertain nature of HVAC systems, we leverage a reinforcement learning (RL)-based controller to schedule the total power based on the aggregated laxity information and electricity price. We evaluate the temperature control and energy cost saving performance of a large-scale group of HVAC systems in both single-zone and multi-zone scenarios, under varying climate and electricity market conditions. The experiment results indicate that the proposed approach outperforms the centralized methods in the majority of test scenarios, and performs comparably to the model-based method in some scenarios.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
376,418
2404.12317
Synthetic Participatory Planning of Shared Automated Electric Mobility Systems
Unleashing the synergies among rapidly evolving mobility technologies in a multi-stakeholder setting presents unique challenges and opportunities for addressing urban transportation problems. This paper introduces a novel synthetic participatory method that critically leverages large language models (LLMs) to create digital avatars representing diverse stakeholders to plan shared automated electric mobility systems (SAEMS). These calibratable agents collaboratively identify objectives, envision and evaluate SAEMS alternatives, and strategize implementation under risks and constraints. The results of a Montreal case study indicate that a structured and parameterized workflow provides outputs with higher controllability and comprehensiveness on an SAEMS plan than that generated using a single LLM-enabled expert agent. Consequently, this approach provides a promising avenue for cost-efficiently improving the inclusivity and interpretability of multi-objective transportation planning, suggesting a paradigm shift in how we envision and strategize for sustainable transportation systems.
true
true
false
false
true
false
false
false
false
false
false
false
false
true
true
false
false
false
447,833
2205.03096
Desaparecidxs: characterizing the population of missing children using Twitter
Missing children, i.e., children reported to a relevant authority as having "disappeared," constitute an important but often overlooked population. From a research perspective, missing children constitute a hard-to-reach population about which little is known. This is a particular problem in regions of the Global South that lack robust or centralized data collection systems. In this study, we analyze the composition of the population of missing children in Guatemala, a country with high levels of violence. We contrast the official aggregated-level data from the Guatemalan National Police during the 2018-2020 period with real-time individual-level data on missing children from the official Twitter account of the Alerta Alba-Keneth, a governmental warning system tasked with disseminating information about missing children. Using the Twitter data, we characterize the population of missing children in Guatemala by single-year age, sex, and place of disappearance. Our results show that women are more likely to be reported as missing, particularly those aged 13-17. We discuss the findings in light of the known links between missing people, violence, and human trafficking. Finally, the study highlights the potential of web data to contribute to society by improving our understanding of this and similar hard-to-reach populations.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
295,168
2103.12681
A Distributed Active Set Method for Model Predictive Control
This paper presents a novel distributed active set method for model predictive control of linear systems. The method combines a primal active set strategy with a decentralized conjugate gradient method to solve convex quadratic programs. An advantage of the proposed method compared to existing distributed model predictive algorithms is the primal feasibility of the iterates. Numerical results show that the proposed method can compete with the alternating direction method of multipliers in terms of communication requirements for a chain of masses example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
226,254
2307.02497
Multi-gauge Hydrological Variational Data Assimilation: Regionalization Learning with Spatial Gradients using Multilayer Perceptron and Bayesian-Guided Multivariate Regression
Tackling the difficult problem of estimating spatially distributed hydrological parameters, especially for floods on ungauged watercourses, this contribution presents a novel seamless regionalization technique for learning complex regional transfer functions designed for high-resolution hydrological models. The transfer functions rely on: (i) a multilayer perceptron enabling a seamless flow of gradient computation to employ machine learning optimization algorithms, or (ii) a multivariate regression mapping optimized by variational data assimilation algorithms and guided by Bayesian estimation, addressing the equifinality issue of feasible solutions. The approach involves incorporating the inferable regionalization mappings into a differentiable hydrological model and optimizing a cost function computed on multi-gauge data with accurate adjoint-based spatially distributed gradients.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
377,717
2311.09261
Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network
Accurately predicting drug-drug interactions (DDI) for emerging drugs, which offer possibilities for treating and alleviating diseases, with computational methods can improve patient care and contribute to efficient drug development. However, many existing computational methods require large amounts of known DDI information, which is scarce for emerging drugs. In this paper, we propose EmerGNN, a graph neural network (GNN) that can effectively predict interactions for emerging drugs by leveraging the rich information in biomedical networks. EmerGNN learns pairwise representations of drugs by extracting the paths between drug pairs, propagating information from one drug to the other, and incorporating the relevant biomedical concepts on the paths. The different edges on the biomedical network are weighted to indicate the relevance for the target DDI prediction. Overall, EmerGNN has higher accuracy than existing approaches in predicting interactions for emerging drugs and can identify the most relevant information on the biomedical network.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
408,071
1703.04336
A Visual Representation of Wittgenstein's Tractatus Logico-Philosophicus
In this paper we present a data visualization method together with its potential usefulness in digital humanities and philosophy of language. We compile a multilingual parallel corpus from different versions of Wittgenstein's Tractatus Logico-Philosophicus, including the original in German and translations into English, Spanish, French, and Russian. Using this corpus, we compute a similarity measure between propositions and render a visual network of relations for different languages.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
69,878
1801.05768
The Asymptotic Capacity of Private Search
The private search problem is introduced, where a dataset comprised of $L$ i.i.d. records is replicated across $N$ non-colluding servers, each record takes values uniformly from an alphabet of size $K$, and a user wishes to search for all records that match a privately chosen value, without revealing any information about the chosen value to any individual server. The capacity of private search is the maximum number of bits of desired information that can be retrieved per bit of download. The asymptotic (large $K$) capacity of private search is shown to be $1-1/N$, even as the scope of private search is further generalized to allow approximate (OR) search over a number of realizations that grows with $K$. The results are based on the asymptotic behavior of a new converse bound for private information retrieval with arbitrarily dependent messages.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
true
88,511
2411.12293
Generative Timelines for Instructed Visual Assembly
The objective of this work is to manipulate visual timelines (e.g. a video) through natural language instructions, making complex timeline editing tasks accessible to non-expert or potentially even disabled users. We call this task Instructed visual assembly. This task is challenging as it requires (i) identifying relevant visual content in the input timeline as well as retrieving relevant visual content in a given input (video) collection, (ii) understanding the input natural language instruction, and (iii) performing the desired edits of the input visual timeline to produce an output timeline. To address these challenges, we propose the Timeline Assembler, a generative model trained to perform instructed visual assembly tasks. The contributions of this work are three-fold. First, we develop a large multimodal language model, which is designed to process visual content, compactly represent timelines and accurately interpret timeline editing instructions. Second, we introduce a novel method for automatically generating datasets for visual assembly tasks, enabling efficient training of our model without the need for human-labeled data. Third, we validate our approach by creating two novel datasets for image and video assembly, demonstrating that the Timeline Assembler substantially outperforms established baseline models, including the recent GPT-4o, in accurately executing complex assembly instructions across various real-world inspired scenarios.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
509,369
2010.12938
Comments on "Precoding and Artificial Noise Design for Cognitive MIMOME Wiretap Channels"
Several gaps and errors in [1] are identified and corrected. While accommodating these corrections, a rigorous proof is given that the successive convex approximation algorithm in [1] for secrecy rate maximization (SRM) does generate an increasing and bounded sequence of true secrecy rates and hence converges. It is further shown that its convergence point is a KKT point of the original SRM problem and, if the original problem is convex, this convergence point is globally optimal, which is not necessarily the case in general. An interlacing property of the sequences of the true and approximate secrecy rates is established.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
202,939
2203.16180
Millimeter-Wave Sensing for Avoidance of High-Risk Ground Conditions for Mobile Robots
Mobile robot autonomy has made significant advances in recent years, with navigation algorithms well developed and used commercially in certain well-defined environments, such as warehouses. The common link in usage scenarios is that the environments in which the robots are utilized have a high degree of certainty. Operating environments are often designed to be robot friendly, for example augmented reality markers are strategically placed and the ground is typically smooth, level, and clear of debris. For robots to be useful in a wider range of environments, especially environments that are not sanitized for their use, robots must be able to handle uncertainty. This requires a robot to incorporate new sensors and sources of information, and to be able to use this information to make decisions regarding navigation and the overall mission. When using autonomous mobile robots in unstructured and poorly defined environments, such as a natural disaster site or in a rural environment, ground condition is of critical importance and is a common cause of failure. Examples include loss of traction due to high levels of ground water, hidden cavities, or material boundary failures. To evaluate a non-contact sensing method to mitigate these risks, Frequency Modulated Continuous Wave (FMCW) radar is integrated with an Unmanned Ground Vehicle (UGV), representing a novel application of FMCW to detect new measurands for Robotic Autonomous Systems (RAS) navigation, informing on terrain integrity and adding to the state-of-the-art in sensing for optimized autonomous path planning. In this paper, the FMCW is first evaluated in a desktop setting to determine its performance in anticipated ground conditions. The FMCW is then fixed to a UGV and the sensor system is tested and validated in a representative environment containing regions with significant levels of ground water saturation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
288,687
2405.20245
Retrieval Augmented Structured Generation: Business Document Information Extraction As Tool Use
Business Document Information Extraction (BDIE) is the problem of transforming a blob of unstructured information (raw text, scanned documents, etc.) into a structured format that downstream systems can parse and use. It has two main tasks: Key-Information Extraction (KIE) and Line Items Recognition (LIR). In this paper, we argue that BDIE is best modeled as a Tool Use problem, where the tools are these downstream systems. We then present Retrieval Augmented Structured Generation (RASG), a novel general framework for BDIE that achieves state of the art (SOTA) results on both KIE and LIR tasks on BDIE benchmarks. The contributions of this paper are threefold: (1) We show, with ablation benchmarks, that Large Language Models (LLMs) with RASG are already competitive with or surpass current SOTA Large Multimodal Models (LMMs) without RASG on BDIE benchmarks. (2) We propose a new metric class for Line Items Recognition, General Line Items Recognition Metric (GLIRM), that is more aligned with practical BDIE use cases compared to existing metrics, such as ANLS*, DocILE, and GriTS. (3) We provide a heuristic algorithm for backcalculating bounding boxes of predicted line items and tables without the need for vision encoders. Finally, we claim that, while LMMs might sometimes offer marginal performance benefits, LLMs + RASG is oftentimes superior given real-world applications and constraints of BDIE.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
459,256
2311.15460
Privacy-Preserving Data Sharing in Agriculture: Enforcing Policy Rules for Secure and Confidential Data Synthesis
Big Data empowers the farming community with the information needed to optimize resource usage, increase productivity, and enhance the sustainability of agricultural practices. The use of Big Data in farming requires the collection and analysis of data from various sources such as sensors, satellites, and farmer surveys. While Big Data can provide the farming community with valuable insights and improve efficiency, there is significant concern regarding the security of this data as well as the privacy of the participants. Privacy regulations, such as the EU GDPR, the EU Code of Conduct on agricultural data sharing by contractual agreement, and the proposed EU AI law, have been created to address the issue of data privacy and provide specific guidelines on when and how data can be shared between organizations. To make confidential agricultural data widely available for Big Data analysis without violating the privacy of the data subjects, we consider privacy-preserving methods of data sharing in agriculture. Deep learning-based synthetic data generation has been proposed for privacy-preserving data sharing. However, there is a lack of compliance with documented data privacy policies in such privacy-preserving efforts. In this study, we propose a novel framework for enforcing privacy policy rules in privacy-preserving data generation algorithms. We explore several available agricultural codes of conduct, extract knowledge related to the privacy constraints in data, and use the extracted knowledge to define privacy bounds in a privacy-preserving generative model. We use our framework to generate synthetic agricultural data and present experimental results that demonstrate the utility of the synthetic dataset in downstream tasks. We also show that our framework can evade potential threats and secure data based on applicable regulatory policy rules.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
410,518
2109.13559
A Note on Nussbaum-type Control and Lie-bracket Approximation
In this paper, we propose an adaptive control law for completely unknown scalar linear systems based on Lie-bracket approximation methods. We investigate stability and convergence properties for the resulting Lie-bracket system, compare our proposal with existing Nussbaum-type solutions and demonstrate our results with an example. Even though we prove global stability properties of the Lie-bracket system, the stability properties of the proposed dynamics remain open, making the proposed control law an object of further studies. We elaborate the difficulties of establishing stability results by investigating connections to partial stability as well as studying the corresponding Chen-Fliess expansion.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
257,672
1311.6674
Modelling of the gravity compensators in robotic manufacturing cells
The paper deals with the modeling and identification of the gravity compensators used in heavy industrial robots. The main attention is paid to the identification of geometrical parameters and to calibration accuracy. To reduce the impact of measurement errors, the design of calibration experiments is used. The advantages of the developed technique are illustrated by experimental results.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
28,674
2103.10198
Phylogenetic typology
In this article we propose a novel method to estimate the frequency distribution of linguistic variables while controlling for statistical non-independence due to shared ancestry. Unlike previous approaches, our technique uses all available data, from language families large and small as well as from isolates, while controlling for different degrees of relatedness on a continuous scale estimated from the data. Our approach involves three steps: First, distributions of phylogenies are inferred from lexical data. Second, these phylogenies are used as part of a statistical model to statistically estimate transition rates between parameter states. Finally, the long-term equilibrium of the resulting Markov process is computed. As a case study, we investigate a series of potential word-order correlations across the languages of the world.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
225,381
2407.07596
Learning treatment effects while treating those in need
Many social programs attempt to allocate scarce resources to people with the greatest need. Indeed, public services increasingly use algorithmic risk assessments motivated by this goal. However, targeting the highest-need recipients often conflicts with attempting to evaluate the causal effect of the program as a whole, as the best evaluations would be obtained by randomizing the allocation. We propose a framework to design randomized allocation rules which optimally balance targeting high-need individuals with learning treatment effects, presenting policymakers with a Pareto frontier between the two goals. We give sample complexity guarantees for the policy learning problem and provide a computationally efficient strategy to implement it. We then apply our framework to data from human services in Allegheny County, Pennsylvania. Optimized policies can substantially mitigate the tradeoff between learning and targeting. For example, it is often possible to obtain 90% of the optimal utility in targeting high-need individuals while ensuring that the average treatment effect can be estimated with less than 2 times the samples that a randomized controlled trial would require. Mechanisms for targeting public services often focus on measuring need as accurately as possible. However, our results suggest that algorithmic systems in public services can be most impactful if they incorporate program evaluation as an explicit goal alongside targeting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
471,824
2308.05184
PromptPaint: Steering Text-to-Image Generation Through Paint Medium-like Interactions
While diffusion-based text-to-image (T2I) models provide a simple and powerful way to generate images, guiding this generation remains a challenge. For concepts that are difficult to describe through language, users may struggle to create prompts. Moreover, many of these models are built as end-to-end systems, lacking support for iterative shaping of the image. In response, we introduce PromptPaint, which combines T2I generation with interactions that model how we use colored paints. PromptPaint allows users to go beyond language to mix prompts that express challenging concepts. Just as we iteratively tune colors through layered placements of paint on a physical canvas, PromptPaint similarly allows users to apply different prompts to different canvas areas and times of the generative process. Through a set of studies, we characterize different approaches for mixing prompts, design trade-offs, and socio-technical challenges for generative models. With PromptPaint we provide insight into future steerable generative tools.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
384,707
1911.02972
Blockwise Self-Attention for Long Document Understanding
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
152,509
2309.12767
Furthest Reasoning with Plan Assessment: Stable Reasoning Path with Retrieval-Augmented Large Language Models
Large Language Models (LLMs), acting as a powerful reasoner and generator, exhibit extraordinary performance across various natural language tasks, such as question answering (QA). Among these tasks, Multi-Hop Question Answering (MHQA) stands as a widely discussed category, necessitating seamless integration between LLMs and the retrieval of external knowledge. Existing methods employ an LLM to generate reasoning paths and plans, and utilize IR to iteratively retrieve related knowledge, but these approaches have inherent flaws. On one hand, the Information Retriever (IR) is hindered by the low quality of the queries generated by the LLM. On the other hand, the LLM is easily misguided by irrelevant knowledge returned by the IR. These inaccuracies accumulate over the iterative interaction between IR and LLM, ultimately degrading end-to-end effectiveness. To overcome the above barriers, in this paper, we propose a novel pipeline for MHQA called Furthest-Reasoning-with-Plan-Assessment (FuRePA), including an improved framework (Furthest Reasoning) and an attached module (Plan Assessor). 1) Furthest Reasoning operates by masking the previous reasoning path and generated queries from the LLM, encouraging the LLM to generate its chain of thought from scratch in each iteration. This approach enables the LLM to break the shackles built by previous misleading thoughts and queries (if any). 2) The Plan Assessor is a trained evaluator that selects an appropriate plan from a group of candidate plans proposed by the LLM. Our methods are evaluated on three highly recognized public multi-hop question answering datasets and outperform the state-of-the-art on most metrics (achieving a 10%-12% gain in answer accuracy).
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
393,929
2109.05472
Compute and Energy Consumption Trends in Deep Learning Inference
The progress of some AI paradigms such as deep learning is said to be linked to an exponential growth in the number of parameters. There are many studies corroborating these trends, but does this translate into an exponential increase in energy consumption? In order to answer this question we focus on inference costs rather than training costs, as the former account for most of the computing effort, solely because of the multiplicative factors. Also, apart from algorithmic innovations, we account for more specific and powerful hardware (leading to higher FLOPS) that is usually accompanied by important energy efficiency optimisations. We also move the focus from the first implementation of a breakthrough paper towards the consolidated version of the techniques one or two years later. Under this distinctive and comprehensive perspective, we study relevant models in the areas of computer vision and natural language processing: for a sustained increase in performance we see a much softer growth in energy consumption than previously anticipated. The only caveat is, yet again, the multiplicative factor, as future AI increases its penetration and becomes more pervasive.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
254,804
2412.14559
ScaMo: Exploring the Scaling Law in Autoregressive Motion Generation Model
The scaling law has been validated in various domains, such as natural language processing (NLP) and massive computer vision tasks; however, its application to motion generation remains largely unexplored. In this paper, we introduce a scalable motion generation framework that includes the motion tokenizer Motion FSQ-VAE and a text-prefix autoregressive transformer. Through comprehensive experiments, we observe the scaling behavior of this system. For the first time, we confirm the existence of scaling laws within the context of motion generation. Specifically, our results demonstrate that the normalized test loss of our prefix autoregressive models adheres to a logarithmic law in relation to compute budgets. Furthermore, we also confirm the power law between Non-Vocabulary Parameters, Vocabulary Parameters, and Data Tokens with respect to compute budgets respectively. Leveraging the scaling law, we predict the optimal transformer size, vocabulary size, and data requirements for a compute budget of $1e18$. The test loss of the system, when trained with the optimal model size, vocabulary size, and required data, aligns precisely with the predicted test loss, thereby validating the scaling law.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
518,762
1804.07958
Expert Finding in Community Question Answering: A Review
The recent rapid development of Community Question Answering (CQA) satisfies users' quest for professional and personal knowledge about anything. In CQA, one central issue is to find users with expertise and willingness to answer the given questions. Expert finding in CQA often exhibits very different challenges compared to traditional methods. Sparse data and new features violate fundamental assumptions of traditional recommendation systems. This paper focuses on reviewing and categorizing the current progress on expert finding in CQA. We classify all the existing solutions into four different categories: matrix factorization based models (MF-based models), gradient boosting tree based models (GBT-based models), deep learning based models (DL-based models) and ranking based models (R-based models). We find that MF-based models outperform other categories of models in the field of expert finding in CQA. Moreover, we use innovative diagrams to clarify several important concepts of ensemble learning, and find that ensemble models combining several specific single models can further boost the performance. Further, we compare the performance of different models on different types of matching tasks, including text vs. text, graph vs. text, audio vs. text and video vs. text. The results can help the model selection of expert finding in practice. Finally, we explore some potential future issues in expert finding research in CQA.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
95,649
2406.15331
Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild
Virtual Try-On (VTON) is a highly active line of research, with increasing demand. It aims to replace a piece of garment in an image with one from another, while preserving person and garment characteristics as well as image fidelity. Current literature takes a supervised approach for the task, impairing generalization and imposing heavy computation. In this paper, we present a novel zero-shot training-free method for inpainting a clothing garment by reference. Our approach employs the prior of a diffusion model with no additional training, fully leveraging its native generalization capabilities. The method employs extended attention to transfer image information from reference to target images, overcoming two significant challenges. We first warp the reference garment over the target human using deep features, alleviating "texture sticking". We then leverage the extended attention mechanism with careful masking, eliminating leakage of reference background and unwanted influence. Through a user study, qualitative, and quantitative comparison to state-of-the-art approaches, we demonstrate superior image quality and garment preservation for unseen clothing pieces or human figures.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
466,699
2401.17100
The Influence of Presentation and Performance on User Satisfaction
The effectiveness of an IR system is gauged not just by its ability to retrieve relevant results but also by how it presents these results to users; an engaging presentation often correlates with increased user satisfaction. While existing research has delved into the link between user satisfaction, IR performance metrics, and presentation, these aspects have typically been investigated in isolation. Our research aims to bridge this gap by examining the relationship between query performance, presentation and user satisfaction. For our analysis, we conducted a between-subjects experiment comparing the effectiveness of various result card layouts for an ad-hoc news search interface. Drawing data from the TREC WaPo 2018 collection, we centered our study on four specific topics. Within each of these topics, we assessed six distinct queries with varying nDCG values. Our study involved 164 participants who were exposed to one of five distinct layouts containing result cards, such as "title'', "title+image'', or "title+image+summary''. Our findings indicate that while nDCG is a strong predictor of user satisfaction at the query level, there exists no linear relationship between the performance of the query, presentation of results and user satisfaction. However, when considering the total gain on the initial result page, we observed that presentation does play a significant role in user satisfaction (at the query level) for certain layouts with result cards such as, title+image or title+image+summary. Our results also suggest that the layout differences have complex and multifaceted impacts on satisfaction. We demonstrate the capacity to equalize user satisfaction levels between queries of varying performance by changing how results are presented. This emphasizes the necessity to harmonize both performance and presentation in IR systems, considering users' diverse preferences.
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
425,103
2309.00832
ObjectLab: Automated Diagnosis of Mislabeled Images in Object Detection Data
Despite powering sensitive systems like autonomous vehicles, object detection remains fairly brittle in part due to annotation errors that plague most real-world training datasets. We propose ObjectLab, a straightforward algorithm to detect diverse errors in object detection labels, including: overlooked bounding boxes, badly located boxes, and incorrect class label assignments. ObjectLab utilizes any trained object detection model to score the label quality of each image, such that mislabeled images can be automatically prioritized for label review/correction. Properly handling erroneous data enables training a better version of the same object detection model, without any change in existing modeling code. Across different object detection datasets (including COCO) and different models (including Detectron-X101 and Faster-RCNN), ObjectLab consistently detects annotation errors with much better precision/recall compared to other label quality scores.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
389,437
1711.09550
Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification
Recently, substantial research effort has focused on how to apply CNNs or RNNs to better extract temporal patterns from videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set. The attention clusters are the backbone of our winner solution at ActivityNet Kinetics Challenge 2017. Code and models will be released soon.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
85,432
2404.07061
A Tight $O(4^k/p_c)$ Runtime Bound for a ($\mu$+1) GA on Jump$_k$ for Realistic Crossover Probabilities
The Jump$_k$ benchmark was the first problem for which crossover was proven to give a speedup over mutation-only evolutionary algorithms. Jansen and Wegener (2002) proved an upper bound of $O({\rm poly}(n) + 4^k/p_c)$ for the ($\mu$+1)~Genetic Algorithm ($(\mu+1)$ GA), but only for unrealistically small crossover probabilities $p_c$. To this date, it remains an open problem to prove similar upper bounds for realistic~$p_c$; the best known runtime bound for $p_c = \Omega(1)$ is $O((n/\chi)^{k-1})$, where $\chi$ is a positive constant. Using recently developed techniques, we analyse the evolution of the population diversity, measured as the sum of pairwise Hamming distances, for a variant of the ($\mu$+1) GA on Jump$_k$. We show that population diversity converges to an equilibrium of near-perfect diversity. This yields an improved and tight time bound of $O(\mu n \log(k) + 4^k/p_c)$ for a range of~$k$ under the mild assumptions $p_c = O(1/k)$ and $\mu \in \Omega(kn)$. For all constant~$k$ the restriction is satisfied for some $p_c = \Omega(1)$. Our work partially solves a problem that has been open for more than 20 years.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
445,703
1904.01648
Sequential Adaptive Design for Jump Regression Estimation
Selecting input variables or design points for statistical models has been of great interest in adaptive design and active learning. Motivated by two scientific examples, this paper presents a strategy of selecting the design points for a regression model when the underlying regression function is discontinuous. The first example we undertook was for the purpose of accelerating imaging speed in high-resolution material imaging; the second was the use of sequential design for mapping a chemical phase diagram. In both examples, the underlying regression functions have discontinuities, so many of the existing design optimization approaches cannot be applied because they mostly assume a continuous regression function. Although some existing adaptive design strategies developed from treed regression models can handle the discontinuities, the Bayesian approaches come with computationally expensive Markov Chain Monte Carlo techniques for posterior inferences and subsequent design point selections, which is not appropriate for the first motivating example that requires computation at least faster than the original imaging speed. In addition, the treed models are based on domain partitioning, which is inefficient when discontinuities occur over complex sub-domain boundaries. We propose a simple and effective adaptive design strategy for a regression analysis with discontinuities: some statistical properties with a fixed design will be presented first, and then these properties will be used to propose a new criterion of selecting the design points for the regression analysis. Sequential design with the new criterion will be presented with comprehensive simulated examples, and its application to the two motivating examples will be presented.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
126,196
1701.00874
Neural Probabilistic Model for Non-projective MST Parsing
In this paper, we propose a probabilistic parsing model, which defines a proper conditional probability distribution over non-projective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which benefit from both word- and character-level representations automatically, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over non-projective trees. We evaluate our model on 17 different datasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. Our parser achieves state-of-the-art parsing performance on nine datasets.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
66,328
2208.00287
Simplex Clustering via sBeta with Applications to Online Adjustment of Black-Box Predictions
We explore clustering the softmax predictions of deep neural networks and introduce a novel probabilistic clustering method, referred to as k-sBetas. In the general context of clustering discrete distributions, the existing methods focused on exploring distortion measures tailored to simplex data, such as the KL divergence, as alternatives to the standard Euclidean distance. We provide a general maximum a posteriori (MAP) perspective of clustering distributions, emphasizing that the statistical models underlying the existing distortion-based methods may not be descriptive enough. Instead, we optimize a mixed-variable objective measuring data conformity within each cluster to the introduced sBeta density function, whose parameters are constrained and estimated jointly with binary assignment variables. Our versatile formulation approximates various parametric densities for modeling simplex data and enables the control of the cluster-balance bias. This yields highly competitive performances for the unsupervised adjustment of black-box model predictions in various scenarios. Our code and comparisons with the existing simplex-clustering approaches and our introduced softmax-prediction benchmarks are publicly available: https://github.com/fchiaroni/Clustering_Softmax_Predictions.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
310,801
2311.12092
Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models
We present a method to create interpretable concept sliders that enable precise control over attributes in image generations from diffusion models. Our approach identifies a low-rank parameter direction corresponding to one concept while minimizing interference with other attributes. A slider is created using a small set of prompts or sample images; thus slider directions can be created for either textual or visual concepts. Concept Sliders are plug-and-play: they can be composed efficiently and continuously modulated, enabling precise control over image generation. In quantitative experiments comparing to previous editing techniques, our sliders exhibit stronger targeted edits with lower interference. We showcase sliders for weather, age, styles, and expressions, as well as slider compositions. We show how sliders can transfer latents from StyleGAN for intuitive editing of visual concepts for which textual description is difficult. We also find that our method can help address persistent quality issues in Stable Diffusion XL including repair of object deformations and fixing distorted hands. Our code, data, and trained sliders are available at https://sliders.baulab.info/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,210
2109.13845
Not Color Blind: AI Predicts Racial Identity from Black and White Retinal Vessel Segmentations
Background: Artificial intelligence (AI) may demonstrate racial bias when skin or choroidal pigmentation is present in medical images. Recent studies have shown that convolutional neural networks (CNNs) can predict race from images that were not previously thought to contain race-specific features. We evaluate whether grayscale retinal vessel maps (RVMs) of patients screened for retinopathy of prematurity (ROP) contain race-specific features. Methods: 4095 retinal fundus images (RFIs) were collected from 245 Black and White infants. A U-Net generated RVMs from RFIs, which were subsequently thresholded, binarized, or skeletonized. To determine whether RVM differences between Black and White eyes were physiological, CNNs were trained to predict race from color RFIs, raw RVMs, and thresholded, binarized, or skeletonized RVMs. Area under the precision-recall curve (AUC-PR) was evaluated. Findings: CNNs predicted race from RFIs near perfectly (image-level AUC-PR: 0.999, subject-level AUC-PR: 1.000). Raw RVMs were almost as informative as color RFIs (image-level AUC-PR: 0.938, subject-level AUC-PR: 0.995). Ultimately, CNNs were able to detect whether RFIs or RVMs were from Black or White babies, regardless of whether images contained color, vessel segmentation brightness differences were nullified, or vessel segmentation widths were normalized. Interpretation: AI can detect race from grayscale RVMs that were not thought to contain racial information. Two potential explanations for these findings are that: retinal vessels physiologically differ between Black and White babies or the U-Net segments the retinal vasculature differently for various fundus pigmentations. Either way, the implications remain the same: AI algorithms have potential to demonstrate racial bias in practice, even when preliminary attempts to remove such information from the underlying images appear to be successful.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
257,761
2308.07781
Learning Image Deraining Transformer Network with Dynamic Dual Self-Attention
Recently, Transformer-based architecture has been introduced into the single image deraining task due to its advantage in modeling non-local information. However, existing approaches tend to integrate global features based on a dense self-attention strategy, since it tends to use all token similarities between the queries and keys. In fact, this strategy ignores the most relevant information and induces a blurry effect through irrelevant representations during feature aggregation. To this end, this paper proposes an effective image deraining Transformer with dynamic dual self-attention (DDSA), which combines both dense and sparse attention strategies to better facilitate clear image reconstruction. Specifically, we only select the most useful similarity values based on top-k approximate calculation to achieve sparse attention. In addition, we also develop a novel spatial-enhanced feed-forward network (SEFN) to further obtain a more accurate representation for achieving high-quality derained results. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,645
2110.06931
A neural simulation-based inference approach for characterizing the Galactic Center $\gamma$-ray excess
The nature of the Fermi gamma-ray Galactic Center Excess (GCE) has remained a persistent mystery for over a decade. Although the excess is broadly compatible with emission expected due to dark matter annihilation, an explanation in terms of a population of unresolved astrophysical point sources e.g., millisecond pulsars, remains viable. The effort to uncover the origin of the GCE is hampered in particular by an incomplete understanding of diffuse emission of Galactic origin. This can lead to spurious features that make it difficult to robustly differentiate smooth emission, as expected for a dark matter origin, from more "clumpy" emission expected for a population of relatively bright, unresolved point sources. We use recent advancements in the field of simulation-based inference, in particular density estimation techniques using normalizing flows, in order to characterize the contribution of modeled components, including unresolved point source populations, to the GCE. Compared to traditional techniques based on the statistical distribution of photon counts, our machine learning-based method is able to utilize more of the information contained in a given model of the Galactic Center emission, and in particular can perform posterior parameter estimation while accounting for pixel-to-pixel spatial correlations in the gamma-ray map. This makes the method demonstrably more resilient to certain forms of model misspecification. On application to Fermi data, the method generically attributes a smaller fraction of the GCE flux to unresolved point sources when compared to traditional approaches. We nevertheless infer such a contribution to make up a non-negligible fraction of the GCE across all analysis variations considered, with at least $38^{+9}_{-19}\%$ of the excess attributed to unresolved point sources in our baseline analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,796
2102.07073
Costly Features Classification using Monte Carlo Tree Search
We consider the problem of costly feature classification, where we sequentially select the subset of features to make a balance between the classification error and the feature cost. In this paper, we first cast the task into a MDP problem and use Advantage Actor Critic algorithm to solve it. In order to further improve the agent's performance and make the policy explainable, we employ the Monte Carlo Tree Search to update the policy iteratively. During the procedure, we also consider its performance on the unbalanced dataset and its sensitivity to the missing value. We evaluate our model on multiple datasets and find it outperforms other methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,978
2401.06588
Dynamic Behaviour of Connectionist Speech Recognition with Strong Latency Constraints
This paper describes the use of connectionist techniques in phonetic speech recognition with strong latency constraints. The constraints are imposed by the task of deriving the lip movements of a synthetic face in real time from the speech signal, by feeding the phonetic string into an articulatory synthesiser. Particular attention has been paid to analysing the interaction between the time evolution model learnt by the multi-layer perceptrons and the transition model imposed by the Viterbi decoder, in different latency conditions. Two experiments were conducted in which the time dependencies in the language model (LM) were controlled by a parameter. The results show a strong interaction between the three factors involved, namely the neural network topology, the length of time dependencies in the LM and the decoder latency.
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
421,209
1803.02082
Partitioning signed networks
Signed networks appear naturally in contexts where conflict or animosity is apparent. In this book chapter we review some of the literature on signed networks, especially in the context of partitioning. Most of the work is founded in what is known as structural balance theory. We cover the basic mathematical principles of structural balance theory. The theory yields a natural formulation for partitioning. We briefly compare this to other partitioning approaches based on community detection. Finally, we analyse an international network of alliances and conflicts and discuss the implications of our findings for structural balance theory.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
91,986
1109.5348
Dynkin Game of Stochastic Differential Equations with Random Coefficients, and Associated Backward Stochastic Partial Differential Variational Inequality
A Dynkin game is considered for stochastic differential equations with random coefficients. We first apply Qiu and Tang's maximum principle for backward stochastic partial differential equations to generalize Krylov estimate for the distribution of a Markov process to that of a non-Markov process, and establish a generalized It\^o-Kunita-Wentzell's formula allowing the test function to be a random field of It\^o's type which takes values in a suitable Sobolev space. We then prove the verification theorem that the Nash equilibrium point and the value of the Dynkin game are characterized by the strong solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation, which is currently a backward stochastic partial differential variational inequality (BSPDVI, for short) with two obstacles. We obtain the existence and uniqueness result and a comparison theorem for strong solution of the BSPDVI. Moreover, we study the monotonicity on the strong solution of the BSPDVI by the comparison theorem for BSPDVI and define the free boundaries. Finally, we identify the counterparts for an optimal stopping time problem as a special Dynkin game.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
12,308
2205.11485
Conditional Supervised Contrastive Learning for Fair Text Classification
Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data. However, the learned representations could potentially lead to performance disparities in downstream tasks, such as increased silencing of underrepresented groups in toxicity comment classification. In light of this challenge, in this work, we study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning. Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives, and then propose to use conditional supervised contrastive objectives to learn fair representations for text classification. We conduct experiments on two text datasets to demonstrate the effectiveness of our approaches in balancing the trade-offs between task performance and bias mitigation among existing baselines for text classification. Furthermore, we also show that the proposed methods are stable in different hyperparameter settings.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
298,164
2105.07055
3D Two-Hop Cellular Networks with Wireless Backhauled UAVs: Modeling and Fundamentals
In this paper, we characterize the performance of a three-dimensional (3D) two-hop cellular network in which terrestrial base stations (BSs) coexist with unmanned aerial vehicles (UAVs) to serve a set of ground user equipment (UE). In particular, a UE connects either directly to its serving terrestrial BS by an access link or connects first to its serving UAV which is then wirelessly backhauled to a terrestrial BS (joint access and backhaul). We consider realistic antenna radiation patterns for both BSs and UAVs using practical models developed by the third generation partnership project (3GPP). We assume a probabilistic channel model for the air-to-ground transmission, which incorporates both line-of-sight (LoS) and non-line-of-sight (NLoS) links. Assuming the max-power association policy, we study the performance of the network in both amplify-and-forward (AF) and decode-and-forward (DF) relaying protocols. Using tools from stochastic geometry, we analyze the joint distribution of distance and zenith angle of the closest (and serving) UAV to the origin in a 3D setting. Further, we identify and extensively study key mathematical constructs as the building blocks of characterizing the received signal-to-interference-plus-noise ratio (SINR) distribution. Using these results, we obtain exact mathematical expressions for the coverage probability in both AF and DF relaying protocols. Furthermore, considering the fact that backhaul links could be quite weak because of the downtilted antennas at the BSs, we propose and analyze the addition of a directional uptilted antenna at the BS that is solely used for backhaul purposes. The superiority of having directional antennas with wirelessly backhauled UAVs is further demonstrated via simulation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
235,299
2210.16776
Saliency Can Be All You Need In Contrastive Self-Supervised Learning
We propose an augmentation policy for Contrastive Self-Supervised Learning (SSL) in the form of an already established Salient Image Segmentation technique entitled Global Contrast based Salient Region Detection. This detection technique, which had been devised for unrelated Computer Vision tasks, was empirically observed to play the role of an augmentation facilitator within the SSL protocol. This observation is rooted in our practical attempts to learn, by SSL-fashion, aerial imagery of solar panels, which exhibit challenging boundary patterns. Upon the successful integration of this technique on our problem domain, we formulated a generalized procedure and conducted a comprehensive, systematic performance assessment with various Contrastive SSL algorithms subject to standard augmentation techniques. This evaluation, which was conducted across multiple datasets, indicated that the proposed technique indeed contributes to SSL. We hypothesize whether salient image segmentation may suffice as the only augmentation policy in Contrastive SSL when treating downstream segmentation tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
327,451
2410.18974
3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation
Multi-view image diffusion models have significantly advanced open-domain 3D object generation. However, most existing models rely on 2D network architectures that lack inherent 3D biases, resulting in compromised geometric consistency. To address this challenge, we introduce 3D-Adapter, a plug-in module designed to infuse 3D geometry awareness into pretrained image diffusion models. Central to our approach is the idea of 3D feedback augmentation: for each denoising step in the sampling loop, 3D-Adapter decodes intermediate multi-view features into a coherent 3D representation, then re-encodes the rendered RGBD views to augment the pretrained base model through feature addition. We study two variants of 3D-Adapter: a fast feed-forward version based on Gaussian splatting and a versatile training-free version utilizing neural fields and meshes. Our extensive experiments demonstrate that 3D-Adapter not only greatly enhances the geometry quality of text-to-multi-view models such as Instant3D and Zero123++, but also enables high-quality 3D generation using the plain text-to-image Stable Diffusion. Furthermore, we showcase the broad application potential of 3D-Adapter by presenting high quality results in text-to-3D, image-to-3D, text-to-texture, and text-to-avatar tasks.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
502,115
2402.18172
NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes
In real-world environments, outdoor imaging systems are often affected by disturbances such as rain degradation. Especially, in nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degradation of both the image quality and visibility. Particularly, in the field of autonomous driving, the visual perception ability of RGB sensors experiences a sharp decline in such harsh scenarios. Additionally, driving assistance systems suffer from reduced capabilities in capturing and discerning the surrounding environment, posing a threat to driving safety. Single-view information captured by single-modal sensors cannot comprehensively depict the entire scene. To address these challenges, we developed an image de-raining framework tailored for rainy nighttime driving scenes. It aims to remove rain artifacts, enrich scene representation, and restore useful information. Specifically, we introduce cooperative learning between visible and infrared images captured by different sensors. By cross-view fusion of these multi-source data, the scene within the images gains richer texture details and enhanced contrast. We constructed an information cleaning module called CleanNet as the first stage of our framework. Moreover, we designed an information fusion module called FusionNet as the second stage to fuse the clean visible images with infrared images. Using this stage-by-stage learning strategy, we obtain de-rained fusion images with higher quality and better visual perception. Extensive experiments demonstrate the effectiveness of our proposed Cross-View Cooperative Learning (CVCL) in adverse driving scenarios in low-light rainy environments. The proposed approach addresses the gap in the utilization of existing rain removal algorithms in specific low-light conditions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
433,324
1903.06278
gym-gazebo2, a toolkit for reinforcement learning using ROS 2 and Gazebo
This paper presents an upgraded, real world application oriented version of gym-gazebo, the Robot Operating System (ROS) and Gazebo based Reinforcement Learning (RL) toolkit, which complies with OpenAI Gym. The content discusses the new ROS 2 based software architecture and summarizes the results obtained using Proximal Policy Optimization (PPO). Ultimately, the output of this work presents a benchmarking system for robotics that allows different techniques and algorithms to be compared using the same virtual conditions. We have evaluated environments with different levels of complexity of the Modular Articulated Robotic Arm (MARA), reaching accuracies in the millimeter scale. The converged results show the feasibility and usefulness of the gym-gazebo 2 toolkit, its potential and applicability in industrial use cases, using modular robots.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
124,344
2101.08466
Anti-UAV: A Large Multi-Modal Benchmark for UAV Tracking
Unmanned Aerial Vehicles (UAVs) offer many applications in both commerce and recreation. With this, monitoring the operation status of UAVs is crucially important. In this work, we consider the task of tracking UAVs, providing rich information such as location and trajectory. To facilitate research on this topic, we propose a dataset, Anti-UAV, with more than 300 video pairs containing over 580k manually annotated bounding boxes. The release of such a large-scale dataset could be a useful initial step in research on tracking UAVs. Furthermore, the advancement of addressing research challenges in Anti-UAV can help the design of anti-UAV systems, leading to better surveillance of UAVs. Besides, a novel approach named dual-flow semantic consistency (DFSC) is proposed for UAV tracking. Modulated by the semantic flow across video sequences, the tracker learns more robust class-level semantic information and obtains more discriminative instance-level features. Experimental results demonstrate that Anti-UAV is very challenging, and the proposed method can effectively improve the tracker's performance. The Anti-UAV benchmark and the code of the proposed approach will be publicly available at https://github.com/ucas-vg/Anti-UAV.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
216,331
2206.10892
I^2R-Net: Intra- and Inter-Human Relation Network for Multi-Person Pose Estimation
In this paper, we present the Intra- and Inter-Human Relation Networks (I^2R-Net) for Multi-Person Pose Estimation. It involves two basic modules. First, the Intra-Human Relation Module operates on a single person and aims to capture Intra-Human dependencies. Second, the Inter-Human Relation Module considers the relation between multiple instances and focuses on capturing Inter-Human interactions. The Inter-Human Relation Module can be designed very lightweight by reducing the resolution of the feature map, yet learn useful relation information to significantly boost the performance of the Intra-Human Relation Module. Even without bells and whistles, our method can compete with or outperform current competition winners. We conduct extensive experiments on COCO, CrowdPose, and OCHuman datasets. The results demonstrate that the proposed model surpasses all the state-of-the-art methods. Concretely, the proposed method achieves 77.4% AP on the CrowdPose dataset and 67.8% AP on the OCHuman dataset respectively, outperforming existing methods by a large margin. Additionally, the ablation study and visualization analysis also prove the effectiveness of our model.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
304,075
2105.05994
Neural Trajectory Fields for Dynamic Novel View Synthesis
Recent approaches to render photorealistic views from a limited set of photographs have pushed the boundaries of our interactions with pictures of static scenes. The ability to recreate moments, that is, time-varying sequences, is perhaps an even more interesting scenario, but it remains largely unsolved. We introduce DCT-NeRF, a coordinate-based neural representation for dynamic scenes. DCT-NeRF learns smooth and stable trajectories over the input sequence for each point in space. This allows us to enforce consistency between any two frames in the sequence, which results in high quality reconstruction, particularly in dynamic regions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
234,981
0707.4083
Chain of Separable Binary Goppa Codes and their Minimal Distance
It is shown that subclasses of separable binary Goppa codes, $\Gamma(L,G)$ - codes, with $L=\{\alpha \in GF(2^{2l}):G(\alpha)\neq 0 \}$ and special Goppa polynomials G(x) can be presented as a chain of embedded codes. The true minimal distance has been obtained for all codes of the chain.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
502
2011.07534
SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images
Recently deep learning methods, in particular, convolutional neural networks (CNNs), have led to a massive breakthrough in the range of computer vision. Also, a large-scale annotated dataset is the essential key to a successful training procedure. However, it is a huge challenge to get such datasets in the medical domain. Towards this, we present a data augmentation method for generating synthetic medical images using cycle-consistency Generative Adversarial Networks (GANs). We add semi-supervised attention modules to generate images with convincing details. We treat tumor images and normal images as two domains. The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image. Furthermore, we show that generated medical images can be used for improving the performance of ResNet18 for medical image classification. Our model is applied to three limited datasets of tumor MRI images. We first generate MRI images on limited datasets, then we train three popular classification models to get the best model for tumor classification. Finally, we train the classification model using real images with classic data augmentation methods and classification models using synthetic images. The classification results between those trained models showed that the proposed SAG-GAN data augmentation method can boost Accuracy and AUC compared with classic data augmentation methods. We believe the proposed data augmentation method can be applied to other medical image domains, and improve the accuracy of computer-assisted diagnosis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
206,593
2301.01149
I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic Segmentation
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
339,149
2401.06377
Design and Nonlinear Modeling of a Modular Cable Driven Soft Robotic Arm
We propose a novel multi-section cable-driven soft robotic arm inspired by octopus tentacles along with a new modeling approach. Each section of the modular manipulator is made of a soft tubing backbone, a soft silicon arm body, and two rigid endcaps, which connect adjacent sections and decouple the actuation cables of different sections. The soft robotic arm is made with casting after the rigid endcaps are 3D-printed, achieving low-cost and convenient fabrication. To capture the nonlinear effect of cables pushing into the soft silicon arm body, which results from the absence of intermediate rigid cable guides for higher compliance, an analytical static model is developed to capture the relationship between the bending curvature and the cable lengths. The proposed model shows superior prediction performance in experiments over that of a baseline model, especially under large bending conditions. Based on the nonlinear static model, a kinematic model of a multi-section arm is further developed and used to derive a motion planning algorithm. Experiments show that the proposed soft arm has high flexibility and a large workspace, and the tracking errors under the algorithm based on the proposed modeling approach are up to 52$\%$ smaller than those with the algorithm derived from the baseline model. The presented modeling approach is expected to be applicable to a broad range of soft cable-driven actuators and manipulators.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
421,135
2301.06269
DarkVision: A Benchmark for Low-light Image/Video Perception
Imaging and perception in photon-limited scenarios is necessary for various applications, e.g., night surveillance or photography, high-speed photography, and autonomous driving. In these cases, cameras suffer from low signal-to-noise ratio, which degrades the image quality severely and poses challenges for downstream high-level vision tasks like object detection and recognition. Data-driven methods have achieved enormous success in both image restoration and high-level vision tasks. However, the lack of high-quality benchmark dataset with task-specific accurate annotations for photon-limited images/videos delays the research progress heavily. In this paper, we contribute the first multi-illuminance, multi-camera, and low-light dataset, named DarkVision, serving for both image enhancement and object detection. We provide bright and dark pairs with pixel-wise registration, in which the bright counterpart provides reliable reference for restoration and annotation. The dataset consists of bright-dark pairs of 900 static scenes with objects from 15 categories, and 32 dynamic scenes with 4-category objects. For each scene, images/videos were captured at 5 illuminance levels using three cameras of different grades, and average photons can be reliably estimated from the calibration data for quantitative studies. The static-scene images and dynamic videos respectively contain around 7,344 and 320,667 instances in total. With DarkVision, we established baselines for image/video enhancement and object detection by representative algorithms. To demonstrate an exemplary application of DarkVision, we propose two simple yet effective approaches for improving performance in video enhancement and object detection respectively. We believe DarkVision would advance the state-of-the-arts in both imaging and related computer vision tasks in low-light environment.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
340,603
1908.04092
Active Annotation: bootstrapping annotation lexicon and guidelines for supervised NLU learning
Natural Language Understanding (NLU) models are typically trained in a supervised learning framework. In the case of intent classification, the predicted labels are predefined and based on the designed annotation schema while the labelling process is based on a laborious task where annotators manually inspect each utterance and assign the corresponding label. We propose an Active Annotation (AA) approach where we combine an unsupervised learning method in the embedding space, a human-in-the-loop verification process, and linguistic insights to create lexicons that can be open categories and adapted over time. In particular, annotators define the y-label space on-the-fly during the annotation using an iterative process and without the need for prior knowledge about the input data. We evaluate the proposed annotation paradigm in a real use-case NLU scenario. Results show that our Active Annotation paradigm achieves accurate and higher quality training data, with an annotation speed of an order of magnitude higher with respect to the traditional human-only driven baseline annotation methodology.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
141,404
2011.03078
Parameter Identification and Sensitivity Analysis for Zero-dimensional Physics-based Lithium-Sulfur Battery Models
This paper examines the problem of estimating the parameters of a Lithium-Sulfur (LiS) battery from experimental cycling data. LiS batteries are attractive compared to traditional Lithium-Ion batteries, thanks largely to their potential to provide higher energy densities. The literature presents a number of different LiS battery models, with different fidelities and complexities. This includes both higher-fidelity diffusion-reaction models as well as "zero-dimensional" models that neglect diffusion dynamics while capturing the physics of the underlying reduction-oxidation reactions. The paper focuses on zero-dimensional LiS battery models, and develops four such models from the literature, reflecting different choices of which redox reactions to model. There is a growing need for using experimental cycling datasets to both parameterize these models and compare their fidelities. To address this need, we fabricated LiS coin cells and performed charge/discharge cycling tests on these cells. In parallel, we analyzed the sensitivity of simulated LiS battery charge/discharge characteristics to underlying model parameters. Using this sensitivity analysis, we selected a subset of model parameters for identification, and estimated these parameters for all four LiS battery models from cycling data, thereby arriving at a consistent experimental comparison and assessment of these models' respective fidelities.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
205,119
2407.03618
BM25S: Orders of magnitude faster lexical search via eager sparse scoring
We introduce BM25S, an efficient Python-based implementation of BM25 that only depends on Numpy and Scipy. BM25S achieves up to a 500x speedup compared to the most popular Python-based framework by eagerly computing BM25 scores during indexing and storing them into sparse matrices. It also achieves considerable speedups compared to highly optimized Java-based implementations, which are used by popular commercial products. Finally, BM25S reproduces the exact implementation of five BM25 variants based on Kamphuis et al. (2020) by extending eager scoring to non-sparse variants using a novel score shifting method. The code can be found at https://github.com/xhluca/bm25s
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
470,222
2010.08258
Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
The learning and evaluation of energy-based latent variable models (EBLVMs) without any structural assumptions are highly challenging, because the true posteriors and the partition functions in such models are generally intractable. This paper presents variational estimates of the score function and its gradient with respect to the model parameters in a general EBLVM, referred to as VaES and VaGES respectively. The variational posterior is trained to minimize a certain divergence to the true model posterior and the bias in both estimates can be bounded by the divergence theoretically. With a minimal model assumption, VaES and VaGES can be applied to the kernelized Stein discrepancy (KSD) and score matching (SM)-based methods to learn EBLVMs. Besides, VaES can also be used to estimate the exact Fisher divergence between the data and general EBLVMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
201,127
2305.15730
A Tutorial on Holographic MIMO Communications--Part II: Performance Analysis and Holographic Beamforming
As Part II of a three-part tutorial on holographic multiple-input multiple-output (HMIMO), this Letter focuses on the state-of-the-art in performance analysis and on holographic beamforming for HMIMO communications. We commence by discussing the spatial degrees of freedom (DoF) and ergodic capacity of a point-to-point HMIMO system, based on the channel model presented in Part I. Additionally, we also consider the sum-rate analysis of multi-user HMIMO systems. Moreover, we review the recent progress in holographic beamforming techniques developed for various HMIMO scenarios. Finally, we evaluate both the spatial DoF and the channel capacity through numerical simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
367,749
1805.07468
Unsupervised Learning of Neural Networks to Explain Neural Networks
This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., explaining knowledge representations hidden in middle conv-layers of the CNN. Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN. More specifically, the explainer contains interpretable conv-layers, where each filter disentangles the representation of a specific object part from chaotic input feature maps. As a paraphrase of CNN features, the disentangled representations of object parts help people understand the logic inside the CNN. We also learn the explainer to use object-part features to reconstruct features of higher CNN layers, in order to minimize loss of information during the feature disentanglement. More crucially, we learn the explainer via network distillation without using any annotations of sample labels, object parts, or textures for supervision. We have applied our method to different types of CNNs for evaluation, and explainers have significantly boosted the interpretability of CNN features.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
97,829
2210.12254
Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models
Generative models based on denoising diffusion techniques have led to an unprecedented increase in the quality and diversity of imagery that is now possible to create with neural generative models. However, most contemporary state-of-the-art methods are derived from a standard isotropic Gaussian formulation. In this work we examine the situation where non-isotropic Gaussian distributions are used. We present the key mathematical derivations for creating denoising diffusion models using an underlying non-isotropic Gaussian noise model. We also provide initial experiments with the CIFAR-10 dataset to help verify empirically that this more general modeling approach can also yield high-quality samples.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
325,652
2301.10780
Quantum anomaly detection in the latent space of proton collision events at the LHC
The ongoing quest to discover new phenomena at the LHC necessitates the continuous development of algorithms and technologies. Established approaches like machine learning, along with emerging technologies such as quantum computing, show promise in the enhancement of experimental capabilities. In this work, we propose a strategy for anomaly detection tasks at the LHC based on unsupervised quantum machine learning, and demonstrate its effectiveness in identifying new phenomena. The designed quantum models, an unsupervised kernel machine and two clustering algorithms, are trained to detect new-physics events using a latent representation of LHC data, generated by an autoencoder designed to accommodate current quantum hardware limitations on problem size. For kernel-based anomaly detection, we implement an instance of the model on a quantum computer, and we identify a regime where it significantly outperforms its classical counterparts. We show that the observed performance enhancement is related to the quantum resources utilised by the model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
341,906
1706.07532
A generalized framework for the estimation of edge infection probabilities
Modeling the spread of infections on networks is a well-studied and important field of research. Most infection and diffusion models require a real value or probability on the edges of the network as an input, but this is rarely available in real-life applications. Our goal in this paper is to develop a general framework for this task. The general model works with the most widely used infection models and is able to handle an arbitrary number of observations on such processes. The model is defined as a general optimization task and a Particle Swarm heuristic is proposed to solve it. We evaluate the accuracy and speed of the proposed method on a wide variety of realistic infection scenarios.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
75,863
2009.07469
Deep Sinogram Completion with Image Prior for Metal Artifact Reduction in CT Images
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance. In reality, CT images may be affected adversely in the presence of metallic objects, which could lead to severe metal artifacts and influence clinical diagnosis or dose calculation in radiation therapy. In this paper, we propose a generalizable framework for metal artifact reduction (MAR) by simultaneously leveraging the advantages of image domain and sinogram domain-based MAR techniques. We formulate our framework as a sinogram completion problem and train a neural network (SinoNet) to restore the metal-affected projections. To improve the continuity of the completed projections at the boundary of metal trace and thus alleviate new artifacts in the reconstructed CT images, we train another neural network (PriorNet) to generate a good prior image to guide sinogram learning, and further design a novel residual sinogram learning strategy to effectively utilize the prior image information for better sinogram completion. The two networks are jointly trained in an end-to-end fashion with a differentiable forward projection (FP) operation so that the prior image generation and deep sinogram completion procedures can benefit from each other. Finally, the artifact-reduced CT images are reconstructed using the filtered backward projection (FBP) from the completed sinogram. Extensive experiments on simulated and real artifacts data demonstrate that our method produces superior artifact-reduced results while preserving the anatomical structures and outperforms other MAR methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
195,937
2111.02118
A Novel Actuation Strategy for an Agile Bio-inspired FWAV Performing a Morphing-coupled Wingbeat Pattern
Flying vertebrates exhibit sophisticated wingbeat kinematics. Their specialized forelimbs allow the wing morphing motion to couple with the flapping motion during level flight. Previous flyable bionic platforms have successfully applied bio-inspired wing morphing but cannot yet be propelled by the morphing-coupled wingbeat pattern. Spurred by this, we develop a bio-inspired flapping-wing aerial vehicle (FWAV) entitled RoboFalcon, which is equipped with a novel mechanism to drive the bat-style morphing wings, performs a morphing-coupled wingbeat pattern, and overall achieves appealing flight. The novel mechanism of RoboFalcon allows coupling the morphing and flapping during level flight and decoupling them when maneuvering is required, producing a bilateral asymmetric downstroke that affords high rolling agility. The bat-style morphing wing is designed with a tilted mounting angle around the radius at the wrist joint to mimic the wrist supination and pronation effect of flying vertebrates' forelimbs. The agility of RoboFalcon is assessed through several rolling maneuver flight tests, and we demonstrate its strong agility compared to flying creatures and current flapping-wing platforms. Wind tunnel tests indicate that the roll moment of the asymmetric downstroke is correlated with the flapping frequency, and the wrist mounting angle can be used for tuning the angle of attack and lift-thrust configuration of the equilibrium flight state. We believe that this work yields a well-performing bionic platform and provides a new actuation strategy for morphing-coupled flapping flight.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
264,768
2303.09266
SmartBERT: A Promotion of Dynamic Early Exiting Mechanism for Accelerating BERT Inference
Dynamic early exiting has been proven to improve the inference speed of pre-trained language models like BERT. However, all samples must go through all consecutive layers before early exiting, and more complex samples usually go through more layers, so redundant computation remains. In this paper, we propose a novel dynamic early exiting mechanism combined with layer skipping for BERT inference named SmartBERT, which adds a skipping gate and an exiting operator to each layer of BERT. SmartBERT can adaptively skip some layers and adaptively choose whether to exit. Besides, we propose cross-layer contrastive learning and incorporate it into our training phases to boost the intermediate layers and classifiers, which is beneficial for early exiting. To keep the usage of skipping gates consistent between the training and inference phases, we propose a hard weight mechanism during the training phase. We conduct experiments on eight classification datasets of the GLUE benchmark. Experimental results show that SmartBERT achieves 2-3x computation reduction with minimal accuracy drops compared with BERT, and our method outperforms previous methods in both efficiency and accuracy. Moreover, on some complex datasets like RTE and WNLI, we show that entropy-based early exiting hardly works, and the skipping mechanism is essential for reducing computation.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
351,974