Dataset preview. Each record has 22 columns: `id` (string, 9–16 characters), `title` (string, 4–278 characters), `abstract` (string, 3–4.08k characters), eighteen two-class boolean label columns (`cs.HC`, `cs.CE`, `cs.SD`, `cs.SI`, `cs.AI`, `cs.IR`, `cs.LG`, `cs.RO`, `cs.CL`, `cs.IT`, `cs.SY`, `cs.CV`, `cs.CR`, `cs.CY`, `cs.MA`, `cs.NE`, `cs.DB`, `Other`), and `__index_level_0__` (int64, range 0 to 541k).
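For working with the records programmatically, the preview maps naturally onto a dataframe. Below is a minimal sketch using pandas; the file name `arxiv_multilabel.parquet` and the derived `labels` column are illustrative assumptions, not part of the dataset itself.

```python
import pandas as pd

# Hypothetical export of this preview; file name and format are assumptions.
df = pd.read_parquet("arxiv_multilabel.parquet")

# The eighteen boolean label columns, in the order they appear in the table.
LABELS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
          "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
          "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other"]

# Attach a readable list of active labels to each record,
# e.g. ["cs.AI", "cs.LG"] for a paper tagged with both.
df["labels"] = df[LABELS].apply(lambda row: [c for c in LABELS if row[c]], axis=1)

# Example query: every record tagged cs.CL.
print(df.loc[df["cs.CL"], ["id", "title"]])
```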
| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2307.12793 | Imperfect CSI: A Key Factor of Uncertainty to Over-the-Air Federated Learning | Over-the-air computation (AirComp) has recently been identified as a prominent technique to enhance communication efficiency of wireless federated learning (FL). This letter investigates the impact of channel state information (CSI) uncertainty at the transmitter on an AirComp enabled FL (AirFL) system with the truncated channel inversion strategy. To characterize the performance of the AirFL system, the weight divergence with respect to the ideal aggregation is analytically derived to evaluate learning performance loss. We explicitly reveal that the weight divergence deteriorates as $\mathcal{O}(1/\rho^2)$ as the level of channel estimation accuracy $\rho$ vanishes, and also has a decay rate of $\mathcal{O}(1/K^2)$ with the increasing number of participating devices, $K$. Building upon our analytical results, we formulate the channel truncation threshold optimization problem to adapt to different $\rho$, which can be solved optimally. Numerical results verify the analytical results and show that a lower truncation threshold is preferred with more accurate CSI. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 381,382 |
| 2501.17273 | Tailored Truths: Optimizing LLM Persuasion with Personalization and Fabricated Statistics | Large Language Models (LLMs) are becoming increasingly persuasive, demonstrating the ability to personalize arguments in conversation with humans by leveraging their personal data. This may have serious impacts on the scale and effectiveness of disinformation campaigns. We studied the persuasiveness of LLMs in a debate setting by having humans $(n=33)$ engage with LLM-generated arguments intended to change the human's opinion. We quantified the LLM's effect by measuring human agreement with the debate's hypothesis pre- and post-debate and analyzing both the magnitude of opinion change, as well as the likelihood of an update in the LLM's direction. We compare persuasiveness across established persuasion strategies, including personalized arguments informed by user demographics and personality, appeal to fabricated statistics, and a mixed strategy utilizing both personalized arguments and fabricated statistics. We found that static arguments generated by humans and GPT-4o-mini have comparable persuasive power. However, the LLM outperformed static human-written arguments when leveraging the mixed strategy in an interactive debate setting. This approach had a $\mathbf{51\%}$ chance of persuading participants to modify their initial position, compared to $\mathbf{32\%}$ for the static human-written arguments. Our results highlight the concerning potential for LLMs to enable inexpensive and persuasive large-scale disinformation campaigns. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 528,269 |
| 2212.10649 | Inversion of Bayesian Networks | Variational autoencoders and Helmholtz machines use a recognition network (encoder) to approximate the posterior distribution of a generative model (decoder). In this paper we study the necessary and sufficient properties of a recognition network so that it can model the true posterior distribution exactly. These results are derived in the general context of probabilistic graphical modelling / Bayesian networks, for which the network represents a set of conditional independence statements. We derive both global conditions, in terms of d-separation, and local conditions for the recognition network to have the desired qualities. It turns out that for the local conditions the property of perfectness (for every node, all parents are joined) plays an important role. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 337,547 |
| 2312.04234 | Graph Convolutions Enrich the Self-Attention in Transformers! | Transformers, renowned for their self-attention mechanism, have achieved state-of-the-art performance across various tasks in natural language processing, computer vision, time-series modeling, etc. However, one of the challenges with deep Transformer models is the oversmoothing problem, where representations across layers converge to indistinguishable values, leading to significant performance degradation. We interpret the original self-attention as a simple graph filter and redesign it from a graph signal processing (GSP) perspective. We propose a graph-filter-based self-attention (GFSA) to learn a general yet effective graph filter, whose complexity, however, is slightly larger than that of the original self-attention mechanism. We demonstrate that GFSA improves the performance of Transformers in various fields, including computer vision, natural language processing, graph-level tasks, speech recognition, and code classification. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 413,606 |
| 1601.01121 | A pragmatic approach to multi-class classification | We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem as it is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as a classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 50,713 |
| 2410.06905 | Reliable Probabilistic Human Trajectory Prediction for Autonomous Applications | Autonomous systems, like vehicles or robots, require reliable, accurate, fast, resource-efficient, scalable, and low-latency trajectory predictions to get initial knowledge about future locations and movements of surrounding objects for safe human-machine interaction. Furthermore, they need to know the uncertainty of the predictions for risk assessment to provide safe path planning. This paper presents a lightweight method to address these requirements, combining Long Short-Term Memory and Mixture Density Networks. Our method predicts probability distributions, including confidence level estimations for positional uncertainty to support subsequent risk management applications and runs on a low-power embedded platform. We discuss essential requirements for human trajectory prediction in autonomous vehicle applications and demonstrate our method's performance using multiple traffic-related datasets. Furthermore, we explain reliability and sharpness metrics and show how important they are to guarantee the correctness and robustness of a model's predictions and uncertainty assessments. These essential evaluations have so far received little attention for no good reason. Our approach focuses entirely on real-world applicability. Verifying prediction uncertainties and a model's reliability are central to autonomous real-world applications. Our framework and code are available at: https://github.com/kav-institute/mdn_trajectory_forecasting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 496,389 |
| 2201.12701 | DearFSAC: An Approach to Optimizing Unreliable Federated Learning via Deep Reinforcement Learning | In federated learning (FL), model aggregation has been widely adopted for data privacy. In recent years, assigning different weights to local models has been used to alleviate the FL performance degradation caused by differences between local datasets. However, when various defects make the FL process unreliable, most existing FL approaches expose weak robustness. In this paper, we propose the DEfect-AwaRe federated soft actor-critic (DearFSAC) to dynamically assign weights to local models to improve the robustness of FL. The deep reinforcement learning algorithm soft actor-critic is adopted for near-optimal performance and stable convergence. In addition, an auto-encoder is trained to output low-dimensional embedding vectors that are further utilized to evaluate model quality. In the experiments, DearFSAC outperforms three existing approaches on four datasets for both independent and identically distributed (IID) and non-IID settings under defective scenarios. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 277,749 |
| 2010.12729 | ANLIzing the Adversarial Natural Language Inference Dataset | We perform an in-depth error analysis of Adversarial NLI (ANLI), a recently introduced large-scale human-and-model-in-the-loop natural language inference dataset collected over multiple rounds. We propose a fine-grained annotation scheme of the different aspects of inference that are responsible for the gold classification labels, and use it to hand-code all three of the ANLI development sets. We use these annotations to answer a variety of interesting questions: which inference types are most common, which models have the highest performance on each reasoning type, and which types are the most challenging for state-of-the-art models? We hope that our annotations will enable more fine-grained evaluation of models trained on ANLI, provide us with a deeper understanding of where models fail and succeed, and help us determine how to train better models in the future. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 202,821 |
| 2312.02420 | Towards Granularity-adjusted Pixel-level Semantic Annotation | Recent advancements in computer vision predominantly rely on learning-based systems, leveraging annotations as the driving force to develop specialized models. However, annotating pixel-level information, particularly in semantic segmentation, presents a challenging and labor-intensive task, prompting the need for autonomous processes. In this work, we propose GranSAM, which distinguishes itself by providing semantic segmentation at the user-defined granularity level on unlabeled data without the need for any manual supervision, offering a unique contribution in the realm of semantic mask annotation methods. Specifically, we propose an approach to enable the Segment Anything Model (SAM) with semantic recognition capability to generate pixel-level annotations for images without any manual supervision. For this, we accumulate semantic information from synthetic images generated by the Stable Diffusion model or web crawled images and employ this data to learn a mapping function between SAM mask embeddings and object class labels. As a result, SAM, enabled with granularity-adjusted mask recognition, can be used for pixel-level semantic annotation purposes. We conducted experiments on the PASCAL VOC 2012 and COCO-80 datasets and observed a +17.95% and +5.17% increase in mIoU, respectively, compared to existing state-of-the-art methods when evaluated under our problem setting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,857 |
| 2205.07736 | Prioritizing Corners in OoD Detectors via Symbolic String Manipulation | For safety assurance of deep neural networks (DNNs), out-of-distribution (OoD) monitoring techniques are essential as they filter spurious input that is distant from the training dataset. This paper studies the problem of systematically testing OoD monitors to avoid cases where an input data point is tested as in-distribution by the monitor, but the DNN produces spurious output predictions. We consider the definition of "in-distribution" characterized in the feature space by a union of hyperrectangles learned from the training dataset. Thus the testing is reduced to finding corners in hyperrectangles distant from the available training data in the feature space. Concretely, we encode the abstract location of every data point as a finite-length binary string, and the union of all binary strings is stored compactly using binary decision diagrams (BDDs). We demonstrate how to use BDDs to symbolically extract corners distant from all data points within the training set. Apart from test case generation, we explain how to use the proposed corners to fine-tune the DNN to ensure that it does not predict overly confidently. The result is evaluated over examples such as number and traffic sign recognition. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 296,694 |
| 2311.05146 | OW-SLR: Overlapping Windows on Semi-Local Region for Image Super-Resolution | There has been considerable progress in implicit neural representation to upscale an image to any arbitrary resolution. However, existing methods are based on defining a function to predict the Red, Green and Blue (RGB) value from just four specific loci. Relying on just four loci is insufficient as it leads to losing fine details from the neighboring region(s). We show that taking the semi-local region into account leads to an improvement in performance. In this paper, we propose applying a new technique called Overlapping Windows on Semi-Local Region (OW-SLR) to an image to obtain any arbitrary resolution by taking the coordinates of the semi-local region around a point in the latent space. This extracted detail is used to predict the RGB value of a point. We illustrate the technique by applying the algorithm to Optical Coherence Tomography-Angiography (OCT-A) images and show that it can upscale them to arbitrary resolutions. This technique outperforms the existing state-of-the-art methods when applied to the OCT500 dataset. OW-SLR also provides better results for classifying retinal images from the given set of OCT-A images as healthy or diseased (e.g., diabetic retinopathy). The project page is available at https://rishavbb.github.io/ow-slr/index.html | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 406,491 |
| 2111.11517 | Columnar Formats for Schemaless LSM-based Document Stores | In the last decade, document store database systems have gained more traction for storing and querying large volumes of semi-structured data. However, the flexibility of the document stores' data models has limited their ability to store data in a columnar-major layout - making them less performant for analytical workloads than column store relational databases. In this paper, we propose several techniques based on piggy-backing on Log-Structured Merge (LSM) tree events and tailored to document stores to store document data in a columnar layout. We first extend the Dremel format, a popular on-disk columnar format for semi-structured data, to comply with document stores' flexible data model. We then introduce two columnar layouts for organizing and storing data in LSM-based storage. We also highlight the potential of using query compilation techniques for document stores, where values' types are known only at runtime. We have implemented and evaluated our techniques to measure their impact on storage, data ingestion, and query performance in Apache AsterixDB. Our experiments show significant performance gains, improving the query execution time by orders of magnitude while minimally impacting ingestion performance. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 267,688 |
| 2104.09703 | Bridging between soft and hard thresholding by scaling | In this article, we developed and analyzed a thresholding method in which soft thresholding estimators are independently expanded by empirical scaling values. The scaling values have a common hyper-parameter that is an order of expansion of an ideal scaling value that achieves hard thresholding. We simply call this estimator a scaled soft thresholding estimator. The scaled soft thresholding is a general method that includes the soft thresholding and non-negative garrote as special cases and gives another derivation of the adaptive LASSO. We then derived the degree of freedom of the scaled soft thresholding by means of Stein's unbiased risk estimate and found that it is decomposed into the degree of freedom of soft thresholding and the remainder connecting to hard thresholding. In this sense, the scaled soft thresholding gives a natural bridge between soft and hard thresholding methods. Since the degree of freedom represents the degree of over-fitting, this result implies that there are two sources of over-fitting in the scaled soft thresholding. The first source, originating from soft thresholding, is determined by the number of un-removed coefficients and is a natural measure of the degree of over-fitting. We analyzed the second source in a particular case of the scaled soft thresholding by referring to a known result for hard thresholding. We then found that, in a sparse, large sample and non-parametric setting, the second source is largely determined by coefficient estimates whose true values are zeros and has an influence on over-fitting when threshold levels are around noise levels in those coefficient estimates. In a simple numerical example, these theoretical implications explained the behavior of the degree of freedom well. Moreover, based on the results here and some known facts, we explained the behaviors of risks of soft, hard and scaled soft thresholding methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 231,322 |
| 1612.05143 | Sampling-based Motion Planning for Active Multirotor System Identification | This paper reports on an algorithm for planning trajectories that allow a multirotor micro aerial vehicle (MAV) to quickly identify a set of unknown parameters. In many problems like self calibration or model parameter identification some states are only observable under a specific motion. These motions are often hard to find, especially for inexperienced users. Therefore, we consider system model identification in an active setting, where the vehicle autonomously decides what actions to take in order to quickly identify the model. Our algorithm approximates the belief dynamics of the system around a candidate trajectory using an extended Kalman filter (EKF). It uses sampling-based motion planning to explore the space of possible beliefs and find a maximally informative trajectory within a user-defined budget. We validate our method in simulation and on a real system showing the feasibility and repeatability of the proposed approach. Our planner creates trajectories which reduce model parameter convergence time and uncertainty by a factor of four. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 65,641 |
| 1912.10169 | A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings | The lack of annotated data in many languages is a well-known challenge within the field of multilingual natural language processing (NLP). Therefore, many recent studies focus on zero-shot transfer learning and joint training across languages to overcome data scarcity for low-resource languages. In this work we (i) perform a comprehensive comparison of state-of-the-art multilingual word and sentence encoders on the tasks of named entity recognition (NER) and part of speech (POS) tagging; and (ii) propose a new method for creating multilingual contextualized word embeddings, compare it to multiple baselines and show that it performs at or above state-of-the-art level in zero-shot transfer settings. Finally, we show that our method allows for better knowledge sharing across languages in a joint training setting. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 158,253 |
| 2111.15452 | On the Generalization of Agricultural Drought Classification from Climate Data | Climate change is expected to increase the likelihood of drought events, with severe implications for food security. Unlike other natural disasters, droughts have a slow onset and depend on various external factors, making drought detection in climate data difficult. In contrast to existing works that rely on simple relative drought indices as ground-truth data, we build upon soil moisture index (SMI) obtained from a hydrological model. This index is directly related to insufficiently available water to vegetation. Given ERA5-Land climate input data of six months with land use information from MODIS satellite observation, we compare different models with and without sequential inductive bias in classifying droughts based on SMI. We use PR-AUC as the evaluation measure to account for the class imbalance and obtain promising results despite a challenging time-based split. We further show in an ablation study that the models retain their predictive capabilities given input data of coarser resolutions, as frequently encountered in climate models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,933 |
| 1406.6844 | FrameNet Resource Grammar Library for GF | In this paper we present ongoing research investigating the possibility and potential of integrating frame semantics, particularly FrameNet, in the Grammatical Framework (GF) application grammar development. An important component of GF is its Resource Grammar Library (RGL) that encapsulates the low-level linguistic knowledge about morphology and syntax of currently more than 20 languages facilitating rapid development of multilingual applications. In the ideal case, porting a GF application grammar to a new language would only require introducing the domain lexicon - translation equivalents that are interlinked via common abstract terms. While it is possible for a highly restricted CNL, developing and porting a less restricted CNL requires above average linguistic knowledge about the particular language, and above average GF experience. Specifying a lexicon is mostly straightforward in the case of nouns (incl. multi-word units), however, verbs are the most complex category (in terms of both inflectional paradigms and argument structure), and adding them to a GF application grammar is not a straightforward task. In this paper we are focusing on verbs, investigating the possibility of creating a multilingual FrameNet-based GF library. We propose an extension to the current RGL, allowing GF application developers to define clauses on the semantic level, thus leaving the language-specific syntactic mapping to this extension. We demonstrate our approach by reengineering the MOLTO Phrasebook application grammar. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 34,160 |
| 2107.07737 | EGC2: Enhanced Graph Classification with Easy Graph Compression | Graph classification is crucial in network analyses. Networks face potential security threats, such as adversarial attacks. Some defense methods may trade off the algorithm complexity for robustness, such as adversarial training, whereas others may trade off clean example performance, such as smoothing-based defense. Most suffer from high complexity or low transferability. To address this problem, we propose EGC2, an enhanced graph classification model with easy graph compression. EGC2 captures the relationship between the features of different nodes by constructing feature graphs and improving the aggregation of the node-level representations. To achieve lower-complexity defense applied to graph classification models, EGC2 utilizes a centrality-based edge-importance index to compress the graphs, filtering out trivial structures and adversarial perturbations in the input graphs, thus improving the model's robustness. Experiments on ten benchmark datasets demonstrate that the proposed feature read-out and graph compression mechanisms enhance the robustness of multiple basic models, resulting in state-of-the-art performance in terms of accuracy and robustness against various adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 246,512 |
| 2009.09335 | Biomedical Event Extraction with Hierarchical Knowledge Graphs | Biomedical event extraction is critical in understanding biomolecular interactions described in scientific corpora. One of the main challenges is to identify nested structured events that are associated with non-indicative trigger words. We propose to incorporate domain knowledge from the Unified Medical Language System (UMLS) into a pre-trained language model via Graph Edge-conditioned Attention Networks (GEANet) and hierarchical graph representation. To better recognize the trigger words, each sentence is first grounded to a sentence graph based on a jointly modeled hierarchical knowledge graph from UMLS. The grounded graphs are then propagated by GEANet, a novel graph neural network with enhanced capabilities in inferring complex events. On the BioNLP 2011 GENIA Event Extraction task, our approach achieved 1.41% F1 and 3.19% F1 improvements on all events and complex events, respectively. Ablation studies confirm the importance of GEANet and the hierarchical KG. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 196,544 |
| 2401.01801 | A quantum-inspired neural network for geometric modeling | By conceiving physical systems as 3D many-body point clouds, geometric graph neural networks (GNNs), such as SE(3)/E(3)-equivariant GNNs, have showcased promising performance. In particular, their effective message-passing mechanics make them adept at modeling molecules and crystalline materials. However, current geometric GNNs only offer a mean-field approximation of the many-body system, encapsulated within two-body message passing, thus falling short in capturing intricate relationships within these geometric graphs. To address this limitation, tensor networks, widely employed by computational physics to handle many-body systems using high-order tensors, have been introduced. Nevertheless, integrating these tensorized networks into the message-passing framework of GNNs faces scalability and symmetry conservation (e.g., permutation and rotation) challenges. In response, we introduce an innovative equivariant Matrix Product State (MPS)-based message-passing strategy, through achieving an efficient implementation of the tensor contraction operation. Our method effectively models complex many-body relationships, suppressing mean-field approximations, and captures symmetries within geometric graphs. Importantly, it seamlessly replaces the standard message-passing and layer-aggregation modules intrinsic to geometric GNNs. We empirically validate the superior accuracy of our approach on benchmark tasks, including predicting classical Newton systems and quantum tensor Hamiltonian matrices. To our knowledge, our approach represents the inaugural utilization of parameterized geometric tensor networks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 419,503 |
| 2202.10367 | Probabilities of the Third Type: Statistical Relational Learning and Reasoning with Relative Frequencies | Dependencies on the relative frequency of a state in the domain are common when modelling probabilistic dependencies on relational data. For instance, the likelihood of a school closure during an epidemic might depend on the proportion of infected pupils exceeding a threshold. Often, rather than depending on discrete thresholds, dependencies are continuous: for instance, the likelihood of any one mosquito bite transmitting an illness depends on the proportion of carrier mosquitoes. Current approaches usually only consider probabilities over possible worlds rather than over domain elements themselves. An exception is the recently introduced lifted Bayesian networks for conditional probability logic, which express discrete dependencies on probabilistic data. We introduce functional lifted Bayesian networks (FLBNs), a formalism that explicitly incorporates continuous dependencies on relative frequencies into statistical relational artificial intelligence, and compare and contrast them with lifted Bayesian networks for conditional probability logic. Incorporating relative frequencies is not only beneficial to modelling; it also provides a more rigorous approach to learning problems where training and test or application domains have different sizes. To this end, we provide a representation of the asymptotic probability distributions induced by functional lifted Bayesian networks on domains of increasing sizes. Since that representation has well-understood scaling behaviour across domain sizes, it can be used to estimate parameters for a large domain consistently from randomly sampled subpopulations. Furthermore, we show that in parametric families of FLBNs, convergence is uniform in the parameters, which ensures a meaningful dependence of the asymptotic probabilities on the parameters of the model. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 281,504 |
| 2004.00163 | Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning | Weakly-supervised action localization requires training a model to localize the action segments in the video given only the video-level action label. It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments). Since only the bag's label is known, the main challenge is identifying which key instances within the bag trigger the bag's label. Most previous models use attention-based approaches, applying attention to generate the bag's representation from instances and then training it via the bag's classification. These models, however, implicitly violate the MIL assumption that instances in negative bags should be uniformly negative. In this work, we explicitly model the key instances assignment as a hidden variable and adopt an Expectation-Maximization (EM) framework. We derive two pseudo-label generation schemes to model the E and M process and iteratively optimize the likelihood lower bound. We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions. It achieves state-of-the-art performance on two standard benchmarks, THUMOS14 and ActivityNet1.2. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 170,534 |
| 2402.08698 | AMEND: A Mixture of Experts Framework for Long-tailed Trajectory Prediction | Accurate prediction of pedestrians' future motions is critical for intelligent driving systems. Developing models for this task requires rich datasets containing diverse sets of samples. However, the existing naturalistic trajectory prediction datasets are generally imbalanced in favor of simpler samples and lack challenging scenarios. Such a long-tail effect causes prediction models to underperform on the tail portion of the data distribution containing safety-critical scenarios. Previous methods tackle the long-tail problem using methods such as contrastive learning and class-conditioned hypernetworks. These approaches, however, are not modular and cannot be applied to many machine learning architectures. In this work, we propose a modular model-agnostic framework for trajectory prediction that leverages a specialized mixture of experts. In our approach, each expert is trained with a specialized skill with respect to a particular part of the data. To produce predictions, we utilise a router network that selects the best expert by generating relative confidence scores. We conduct experimentation on common pedestrian trajectory prediction datasets and show that our method improves performance on long-tail scenarios. We further conduct ablation studies to highlight the contribution of different proposed components. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 429,199 |
| 2405.18663 | Lifelong Learning and Selective Forgetting via Contrastive Strategy | Lifelong learning aims to train a model with good performance for new tasks while retaining the capacity of previous tasks. However, some practical scenarios require the system to forget undesirable knowledge due to privacy issues, which is called selective forgetting. The joint task of the two is dubbed Learning with Selective Forgetting (LSF). In this paper, we propose a new framework based on a contrastive strategy for LSF. Specifically, for the preserved classes (tasks), we make features extracted from different samples within the same class compact. And for the deleted classes, we make the features from different samples of the same class dispersed and irregular, i.e., the network does not have any regular response to samples from a specific deleted class, as if the network had no training at all. Through maintaining or disturbing the feature distribution, the forgetting and memory of different classes can be independent of each other. Experiments are conducted on four benchmark datasets, and our method achieves new state-of-the-art results. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 458,521 |
| 2310.15179 | Reducing Uncertainty in Sea-level Rise Prediction: A Spatial-variability-aware Approach | Given multi-model ensemble climate projections, the goal is to accurately and reliably predict future sea-level rise while lowering the uncertainty. This problem is important because sea-level rise affects millions of people in coastal communities and beyond due to climate change's impacts on polar ice sheets and the ocean. This problem is challenging due to spatial variability and unknowns such as possible tipping points (e.g., collapse of Greenland or West Antarctic ice-shelf), climate feedback loops (e.g., clouds, permafrost thawing), future policy decisions, and human actions. Most existing climate modeling approaches use the same set of weights globally, during either regression or deep learning to combine different climate projections. Such approaches are inadequate when different regions require different weighting schemes for accurate and reliable sea-level rise predictions. This paper proposes a zonal regression model which addresses spatial variability and model inter-dependency. Experimental results show more reliable predictions using the weights learned via this approach on a regional scale. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 402,205 |
| 2201.02450 | Analytical calculation formulas for capacities of classical and classical-quantum channels | We derive an analytical calculation formula for the channel capacity of a classical channel without any iteration, while existing algorithms require iterations whose number depends on the required precision level. Hence, ours is the first analytical formula for this capacity that requires no iteration. We apply the obtained formula to examples and see how it works in these cases. Then, we extend it to the channel capacity of a classical-quantum (cq-) channel. Many existing studies have proposed algorithms for a cq-channel, and all of them require iterations. Our extended analytical algorithm likewise requires no iteration and outputs the exact optimum values. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 274,548 |
| 2406.01917 | GOMAA-Geo: GOal Modality Agnostic Active Geo-localization | We consider the task of active geo-localization (AGL) in which an agent uses a sequence of visual cues observed during aerial navigation to find a target specified through multiple possible modalities. This could emulate a UAV involved in a search-and-rescue operation navigating through an area, observing a stream of aerial images as it goes. The AGL task is associated with two important challenges. Firstly, an agent must deal with a goal specification in one of multiple modalities (e.g., through a natural language description) while the search cues are provided in other modalities (aerial imagery). The second challenge is limited localization time (e.g., limited battery life, urgency) so that the goal must be localized as efficiently as possible, i.e. the agent must effectively leverage its sequentially observed aerial views when searching for the goal. To address these challenges, we propose GOMAA-Geo - a goal modality agnostic active geo-localization agent - for zero-shot generalization between different goal modalities. Our approach combines cross-modality contrastive learning to align representations across modalities with supervised foundation model pretraining and reinforcement learning to obtain highly effective navigation and localization policies. Through extensive evaluations, we show that GOMAA-Geo outperforms alternative learnable approaches and that it generalizes across datasets - e.g., to disaster-hit areas without seeing a single disaster scenario during training - and goal modalities - e.g., to ground-level imagery or textual descriptions, despite only being trained with goals specified as aerial views. Code and models are publicly available at https://github.com/mvrl/GOMAA-Geo/tree/main. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 460,522 |
| 2306.14152 | Low-Rank Prune-And-Factorize for Language Model Compression | The components underpinning pre-trained language models (PLMs) -- large weight matrices -- were shown to bear considerable redundancy. Matrix factorization, a well-established technique from matrix theory, has been utilized to reduce the number of parameters in PLMs. However, it fails to retain satisfactory performance under moderate to high compression rates. In this paper, we identify the \textit{full-rankness} of fine-tuned PLMs as the fundamental bottleneck for the failure of matrix factorization and explore the use of network pruning to extract the low-rank sparsity pattern desirable to matrix factorization. We find that such a low-rank sparsity pattern exclusively exists in models generated by first-order pruning, which motivates us to unite the two approaches and achieve more effective model compression. We further propose two techniques: sparsity-aware SVD and mixed-rank fine-tuning, which improve the initialization and training of the compression procedure, respectively. Experiments on GLUE and question-answering tasks show that the proposed method has a superior compression-performance trade-off compared to existing approaches. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 375,574 |
| 2212.14370 | Can 5th Generation Local Training Methods Support Client Sampling? Yes! | The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 338,592 |
| 2410.21315 | GraphLSS: Integrating Lexical, Structural, and Semantic Features for Long Document Extractive Summarization | Heterogeneous graph neural networks have recently gained attention for long document summarization, modeling the extraction as a node classification task. Although effective, these models often require external tools or additional machine learning models to define graph components, producing highly complex and less intuitive structures. We present GraphLSS, a heterogeneous graph construction for long document extractive summarization, incorporating Lexical, Structural, and Semantic features. It defines two levels of information (words and sentences) and four types of edges (sentence semantic similarity, sentence occurrence order, word in sentence, and word semantic similarity) without any need for auxiliary learning models. Experiments on two benchmark datasets show that GraphLSS is competitive with top-performing graph-based methods, outperforming recent non-graph models. We release our code on GitHub. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 503,195 |
| 2204.11405 | Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations | Explosive growth in big data technologies and artificial intelligence [AI] applications have led to increasing pervasiveness of information facets and a rapidly growing array of information representations. Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information and consequently affect human performance. Extant research in cognitive fit, which preceded the big data and AI era, focused on the effects of aligning information representation and task on performance, without sufficient consideration to information facets and attendant cognitive challenges. Therefore, there is a compelling need to understand the interplay of these dominant information facets with information representations and tasks, and their influence on human performance. We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary for these complex information environments. To this end, we propose and test a novel *Adaptive Cognitive Fit* [ACF] framework that explains the influence of information facets and AI-augmented information representations on human performance. We draw on information processing theory and cognitive dissonance theory to advance the ACF framework and a set of propositions. We empirically validate the ACF propositions with an economic experiment that demonstrates the influence of information facets, and a machine learning simulation that establishes the viability of using AI to improve human performance. | true | false | false | true | true | false | false | false | false | true | false | false | false | true | false | false | false | false | 293,132 |
| 2402.13724 | Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters | Animating virtual characters has always been a fundamental research problem in virtual reality (VR). Facial animations play a crucial role as they effectively convey emotions and attitudes of virtual humans. However, creating such facial animations can be challenging, as current methods often involve utilization of expensive motion capture devices or significant investments of time and effort from human animators in tuning animation parameters. In this paper, we propose a holistic solution to automatically animate virtual human faces. In our solution, a deep learning model was first trained to retarget the facial expression from input face images to virtual human faces by estimating the blendshape coefficients. This method offers the flexibility of generating animations with characters of different appearances and blendshape topologies. Second, a practical toolkit was developed using Unity 3D, making it compatible with the most popular VR applications. The toolkit accepts both image and video as input to animate the target virtual human faces and enables users to manipulate the animation results. Furthermore, inspired by the spirit of Human-in-the-loop (HITL), we leveraged user feedback to further improve the performance of the model and toolkit, thereby increasing the customization properties to suit user preferences. The whole solution, for which we will make the code public, has the potential to accelerate the generation of facial animations for use in VR applications. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 431,385 |
| 2310.12563 | Approximate information maximization for bandit games | Entropy maximization and free energy minimization are general physical principles for modeling the dynamics of various physical systems. Notable examples include modeling decision-making within the brain using the free-energy principle, optimizing the accuracy-complexity trade-off when accessing hidden variables with the information bottleneck principle (Tishby et al., 2000), and navigation in random environments using information maximization (Vergassola et al., 2007). Built on this principle, we propose a new class of bandit algorithms that maximize an approximation to the information of a key variable within the system. To this end, we develop an approximated analytical physics-based representation of an entropy to forecast the information gain of each action and greedily choose the one with the largest information gain. This method yields strong performances in classical bandit settings. Motivated by its empirical success, we prove its asymptotic optimality for the two-armed bandit problem with Gaussian rewards. Owing to its ability to encompass the system's properties in a global physical functional, this approach can be efficiently adapted to more complex bandit settings, calling for further investigation of information maximization approaches for multi-armed bandit problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 401,072 |
| 2405.07072 | Selecting focused digital cohorts from social media using the metric backbone of biomedical knowledge graphs | The abundance of social media data allows researchers to construct large digital cohorts to study the interplay between human behavior and medical treatment. Identifying the users most relevant to a specific health problem is, however, a challenge in that social media sites vary in the generality of their discourse. While X (formerly Twitter), Instagram, and Facebook cater to wide ranging topics, Reddit subgroups and dedicated patient advocacy forums trade in much more specific, biomedically-relevant discourse. To home in on relevant users anywhere, we have developed a general framework and applied it to epilepsy discourse in social media as a test case. We analyzed the text from posts by users who mention epilepsy drugs in the general-purpose social media sites X and Instagram, the epilepsy-focused Reddit subgroup (r/Epilepsy), and the Epilepsy Foundation of America (EFA) forums. We curated a medical terms dictionary and used it to generate a knowledge graph (KG) for each online community. For each KG, we computed the metric backbone--the smallest subgraph that preserves all shortest paths in the network. By comparing the subset of users who contribute to the backbone to the subset who do not, we found that epilepsy-focused social media users contribute to the KG backbone in much higher proportion than do general-purpose social media users. Furthermore, using human annotation of Instagram posts, we demonstrated that users who do not contribute to the backbone are more than twice as likely to use dictionary terms in a manner inconsistent with their biomedical meaning. For biomedical research applications, our backbone-based approach thus has several benefits over simple engagement-based approaches: It can retain low-engagement users who nonetheless contribute meaningful biomedical insights. It can filter out very vocal users who contribute no relevant content. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 453,572 |
| 2402.17151 | Clustering Document Parts: Detecting and Characterizing Influence Campaigns from Documents | We propose a novel clustering pipeline to detect and characterize influence campaigns from documents. This approach clusters parts of documents, detects clusters that likely reflect an influence campaign, and then identifies documents linked to an influence campaign via their association with the high-influence clusters. Our approach outperforms both the direct document-level classification and the direct document-level clustering approach in predicting if a document is part of an influence campaign. We propose various novel techniques to enhance our pipeline, including using an existing event factuality prediction system to obtain document parts, and aggregating multiple clustering experiments to improve the performance of both cluster and document classification. Classifying documents after clustering not only accurately extracts the parts of the documents that are relevant to influence campaigns, but also captures influence campaigns as a coordinated and holistic phenomenon. Our approach makes possible more fine-grained and interpretable characterizations of influence campaigns from documents. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 432,859 |
| 2205.14894 | Daisy Bloom Filters | A filter is a widely used data structure for storing an approximation of a given set $S$ of elements from some universe $U$ (a countable set). It represents a superset $S'\supseteq S$ that is ''close to $S$'' in the sense that for $x\not\in S$, the probability that $x\in S'$ is bounded by some $\varepsilon > 0$. The advantage of using a Bloom filter, when some false positives are acceptable, is that the space usage becomes smaller than what is required to store $S$ exactly. Though filters are well-understood from a worst-case perspective, it is clear that state-of-the-art constructions may not be close to optimal for particular distributions of data and queries. Suppose, for instance, that some elements are in $S$ with probability close to 1. Then it would make sense to always include them in $S'$, saving space by not having to represent these elements in the filter. Questions like this have been raised in the context of Weighted Bloom filters (Bruck, Gao and Jiang, ISIT 2006) and Bloom filter implementations that make use of access to learned components (Vaidya, Knorr, Mitzenmacher, and Kraska, ICLR 2021). In this paper, we present a lower bound for the expected space that such a filter requires. We also show that the lower bound is asymptotically tight by exhibiting a filter construction that executes queries and insertions in worst-case constant time, and has a false positive rate at most $\varepsilon$ with high probability over input sets drawn from a product distribution. We also present a Bloom filter alternative, which we call the $\textit{Daisy Bloom filter}$, that executes operations faster and uses significantly less space than the standard Bloom filter. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 299,531 |
| 2103.08095 | Towards Robust Speech-to-Text Adversarial Attack | This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo. Our approach is based on developing an extension for the conventional distortion condition of the adversarial optimization formulation using the Cram\`er integral probability metric. Minimizing over this metric, which measures the discrepancies between original and adversarial samples' distributions, contributes to crafting signals very close to the subspace of legitimate speech recordings. This helps to yield more robust adversarial signals against playback over-the-air without employing either costly expectation-over-transformation operations or static room impulse response simulations. Our approach outperforms other targeted and non-targeted algorithms in terms of word error rate and sentence-level accuracy, with competitive performance on the crafted adversarial signals' quality. Compared to seven other strong white and black-box adversarial attacks, our proposed approach is considerably more resilient against multiple consecutive playbacks over-the-air, corroborating its higher robustness in noisy environments. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 224,793 |
| 2308.15645 | AskIt: Unified Programming Interface for Programming with Large Language Models | Large Language Models (LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating adeptness across numerous tasks, from text summarization to code generation. While these abilities open up novel avenues in software design and crafting, their incorporation presents substantial challenges. Developers face decisions regarding the use of LLMs for directly performing tasks within applications as well as for generating and executing code to accomplish these tasks. Moreover, effective prompt design becomes a critical concern, given the necessity of extracting data from natural language outputs. To address these complexities, this paper introduces AskIt, a domain-specific language (DSL) specifically designed for LLMs. AskIt simplifies LLM integration by providing a unified interface that not only allows for direct task execution using LLMs but also supports the entire cycle of code generation and execution. This dual capability is achieved through (1) type-guided output control, (2) template-based function definitions, and (3) prompt generation for both usage modes. Our evaluations underscore AskIt's effectiveness. Across 50 tasks, AskIt generated concise prompts, achieving a 16.14% reduction in prompt length compared to benchmarks. Additionally, by enabling a seamless transition between using LLMs directly in applications and for generating code, AskIt achieved significant efficiency improvements, as observed in our GSM8K benchmark experiments. The implementations of AskIt in TypeScript and Python are available at https://github.com/katsumiok/ts-askit and https://github.com/katsumiok/pyaskit, respectively. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 388,742 |
2212.07429
|
Building Multilingual Corpora for a Complex Named Entity Recognition and
Classification Hierarchy using Wikipedia and DBpedia
|
With the ever-growing popularity of the field of NLP, the demand for datasets in low-resourced languages follows suit. Following a previously established framework, in this paper, we present the UNER dataset, a multilingual and hierarchical parallel corpus annotated for named entities. We describe in detail the procedure developed to create this type of dataset in any language available on Wikipedia with DBpedia information. The three-step procedure extracts entities from Wikipedia articles, links them to DBpedia, and maps the DBpedia sets of classes to the UNER labels. This is followed by a post-processing procedure that significantly increases the number of identified entities in the final results. The paper concludes with a statistical and qualitative analysis of the resulting dataset.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 336,399
|
2406.16868
|
Neural Network-based Two-Dimensional Filtering for OTFS Symbol Detection
|
Orthogonal time frequency space (OTFS) is a promising modulation scheme for wireless communication in high-mobility scenarios. Recently, a reservoir computing (RC) based approach has been introduced for online subframe-based symbol detection in the OTFS system, where only the limited over-the-air (OTA) pilot symbols are utilized for training. However, the previous RC-based approach does not design the RC architecture based on the properties of the OTFS system to fully unlock the potential of RC. This paper introduces a novel two-dimensional RC (2D-RC) approach for online symbol detection on a subframe basis in the OTFS system. The 2D-RC is designed to have a two-dimensional (2D) filtering structure to equalize the 2D circular channel effect in the delay-Doppler (DD) domain of the OTFS system. With the introduced architecture, the 2D-RC can operate in the DD domain with only a single neural network, unlike our previous work which requires multiple RCs to track channel variations in the time domain. Experimental results demonstrate the advantages of the 2D-RC approach over the previous RC-based approach and the compared model-based methods across different modulation orders.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 467,310
|
2005.10247
|
Model-Based Robust Deep Learning: Generalizing to Natural,
Out-of-Distribution Data
|
While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially-crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, proving that such natural variation presents a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation which can be used to vary data over a range of natural conditions. Such models may be either known a priori or else learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that across a variety of naturally-occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms as well as norm-bounded robust deep learning algorithms.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 178,130
|
2012.00402
|
Use of Remote Sensing Data to Identify Air Pollution Signatures in India
|
Air quality has a major impact on a country's socio-economic position, and identifying major air pollution sources is at the heart of tackling the issue. Spatially and temporally distributed air quality data acquisition across a country as varied as India has been a challenge to such analysis. The launch of the Sentinel-5P satellite has helped in the observation of a wider variety of air pollutants than measured before, at a global scale and on a daily basis. In this chapter, spatio-temporal multi-pollutant data retrieved from the Sentinel-5P satellite are used to cluster states as well as districts in India, and the associated average monthly pollution signatures and trends depicted by each of the clusters are derived and presented. The clustering signatures can be used to identify states and districts based on the types of pollutants emitted by various pollution sources.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| 209,124
|
2203.10763
|
Performance-Robustness Tradeoffs in Adversarially Robust
Linear-Quadratic Control
|
While $\mathcal{H}_\infty$ methods can introduce robustness against worst-case perturbations, their nominal performance under conventional stochastic disturbances is often drastically reduced. Though this fundamental tradeoff between nominal performance and robustness is known to exist, it is not well-characterized in quantitative terms. Toward addressing this issue, we borrow from the increasingly ubiquitous notion of adversarial training from machine learning to construct a class of controllers which are optimized for disturbances consisting of mixed stochastic and worst-case components. We find that this problem admits a stationary optimal controller that has a simple analytic form closely related to suboptimal $\mathcal{H}_\infty$ solutions. We then provide a quantitative performance-robustness tradeoff analysis, in which system-theoretic properties such as controllability and stability explicitly manifest in an interpretable manner. This provides practitioners with general guidance for determining how much robustness to incorporate based on a priori system knowledge. We empirically validate our results by comparing the performance of our controller against standard baselines, and plotting tradeoff curves.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 286,674
|
1503.00694
|
Consistent Probabilistic Social Choice
|
Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non-probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (Rev. Econ. Stud., 51(4), 683--692, 1984). Fishburn's function returns so-called maximal lotteries, i.e., lotteries that correspond to optimal mixed strategies of the underlying plurality game. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| true
| false
| false
| true
| 40,736
|
2101.11376
|
Learning Abstract Representations through Lossy Compression of
Multi-Modal Signals
|
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 217,256
|
2303.09152
|
Learning a Room with the Occ-SDF Hybrid: Signed Distance Function
Mingled with Occupancy Aids Scene Representation
|
Implicit neural rendering, which uses signed distance function (SDF) representation with geometric priors (such as depth or surface normal), has led to impressive progress in the surface reconstruction of large-scale scenes. However, applying this method to reconstruct a room-level scene from images may miss structures in low-intensity areas or small and thin objects. We conducted experiments on three datasets to identify limitations of the original color rendering loss and priors-embedded SDF scene representation. We found that the color rendering loss results in optimization bias against low-intensity areas, causing gradient vanishing and leaving these areas unoptimized. To address this issue, we propose a feature-based color rendering loss that utilizes non-zero feature values to bring back optimization signals. Additionally, the SDF representation can be influenced by objects along a ray path, disrupting the monotonic change of SDF values when a single object is present. To counteract this, we explore using the occupancy representation, which encodes each point separately and is unaffected by objects along a querying ray. Our experimental results demonstrate that the joint forces of the feature-based rendering loss and Occ-SDF hybrid representation scheme can provide high-quality reconstruction results, especially in challenging room-level scenarios. The code will be released.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 351,931
|
2301.04653
|
Optirank: classification for RNA-Seq data with optimal ranking reference
genes
|
Classification algorithms using RNA-Sequencing (RNA-Seq) data as input are used in a variety of biological applications. By nature, RNA-Seq data is subject to uncontrolled fluctuations both within and especially across datasets, which presents a major difficulty for a trained classifier to generalize to an external dataset. Replacing raw gene counts with the rank of gene counts inside an observation has proven effective to mitigate this problem. However, the rank of a feature is by definition relative to all other features, including highly variable features that introduce noise in the ranking. To address this problem and obtain more robust ranks, we propose a logistic regression model, optirank, which learns simultaneously the parameters of the model and the genes to use as a reference set in the ranking. We show the effectiveness of this method on simulated data. We also consider real classification tasks, which present different kinds of distribution shifts between train and test data. Those tasks concern a variety of applications, such as cancer of unknown primary classification, identification of specific gene signatures, and determination of cell type in single-cell RNA-Seq datasets. On those real tasks, optirank performs at least as well as the vanilla logistic regression on classical ranks, while producing sparser solutions. In addition, to increase the robustness against dataset shifts, we propose a multi-source learning scheme and demonstrate its effectiveness when used in combination with rank-based classifiers.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 340,134
|
2106.05249
|
What Would a Teacher Do? Predicting Future Talk Moves
|
Recent advances in natural language processing (NLP) have the ability to transform how classroom learning takes place. Combined with the increasing integration of technology in today's classrooms, NLP systems leveraging question answering and dialog processing techniques can serve as private tutors or participants in classroom discussions to increase student engagement and learning. To progress towards this goal, we use the classroom discourse framework of academically productive talk (APT) to learn strategies that make for the best learning experience. In this paper, we introduce a new task, called future talk move prediction (FTMP): it consists of predicting the next talk move -- an utterance strategy from APT -- given a conversation history with its corresponding talk moves. We further introduce a neural network model for this task, which outperforms multiple baselines by a large margin. Finally, we compare our model's performance on FTMP to human performance and show several similarities between the two.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 240,028
|
1403.6540
|
The quest for optimal sampling: Computationally efficient,
structure-exploiting measurements for compressed sensing
|
An intriguing phenomenon in many instances of compressed sensing is that the reconstruction quality is governed not just by the overall sparsity of the signal, but also by its structure. This paper is about understanding this phenomenon, and demonstrating how it can be fruitfully exploited by the design of suitable sampling strategies in order to outperform more standard compressed sensing techniques based on random matrices.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 31,826
|
2208.04112
|
A review on longitudinal data analysis with random forest in precision
medicine
|
Precision medicine provides customized treatments to patients based on their characteristics and is a promising approach to improving treatment efficiency. Large scale omics data are useful for patient characterization, but often their measurements change over time, leading to longitudinal data. Random forest is one of the state-of-the-art machine learning methods for building prediction models, and can play a crucial role in precision medicine. In this paper, we review extensions of the standard random forest method for the purpose of longitudinal data analysis. Extension methods are categorized according to the data structures for which they are designed. We consider both univariate and multivariate responses and further categorize the repeated measurements according to whether the time effect is relevant. Information of available software implementations of the reviewed extensions is also given. We conclude with discussions on the limitations of our review and some future research directions.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 311,997
|
2004.04814
|
Deep learning for synthetic microstructure generation in a
materials-by-design framework for heterogeneous energetic materials
|
The sensitivity of heterogeneous energetic (HE) materials (propellants, explosives, and pyrotechnics) is critically dependent on their microstructure. Initiation of chemical reactions occurs at hot spots due to energy localization at sites of porosities and other defects. Emerging multi-scale predictive models of HE response to loads account for the physics at the meso-scale, i.e. at the scale of statistically representative clusters of particles and other features in the microstructure. Meso-scale physics is infused in machine-learned closure models informed by resolved meso-scale simulations. Since microstructures are stochastic, ensembles of meso-scale simulations are required to quantify hot spot ignition and growth and to develop models for microstructure-dependent energy deposition rates. We propose utilizing generative adversarial networks (GAN) to spawn ensembles of synthetic heterogeneous energetic material microstructures. The method generates qualitatively and quantitatively realistic microstructures by learning from images of HE microstructures. We show that the proposed GAN method also permits the generation of new morphologies, where the porosity distribution can be controlled and spatially manipulated. Such control paves the way for the design of novel microstructures to engineer HE materials for targeted performance in a materials-by-design framework.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 171,999
|
2209.08543
|
A Decoupled and Linear Framework for Global Outlier Rejection over
Planar Pose Graph
|
We propose a robust framework for planar pose graph optimization contaminated by loop closure outliers. Our framework rejects outliers by first decoupling the robust PGO problem, wrapped by a Truncated Least Squares kernel, into two subproblems. Then, the framework introduces a linear angle representation to rewrite the first subproblem, which is originally formulated with rotation matrices. The framework is configured with the Graduated Non-Convexity (GNC) algorithm to solve the two non-convex subproblems in succession without initial guesses. Thanks to the linearity properties of both subproblems, our framework requires only linear solvers to optimally solve the optimization problems encountered in GNC. We extensively validate the proposed framework, named DEGNC-LAF (DEcoupled Graduated Non-Convexity with Linear Angle Formulation), in planar PGO benchmarks. It turns out that it runs significantly (sometimes more than 30 times) faster than the standard and general-purpose GNC while resulting in high-quality estimates.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 318,166
|
2310.18709
|
Audio-Visual Instance Segmentation
|
In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. Additionally, we propose a strong baseline model for this task. Our model first localizes sound sources within each frame, and condenses object-specific contexts into concise tokens. Then it builds long-range audio-visual dependencies between these tokens using window-based attention, and tracks sounding objects among the entire video sequences. Extensive experiments reveal that our method performs best on AVISeg, surpassing the existing methods from related tasks. We further conduct the evaluation on several multi-modal large models; however, they exhibit subpar performance on instance-level sound source localization and temporal perception. We expect that AVIS will inspire the community towards a more comprehensive multi-modal understanding. The dataset and code will soon be released on https://github.com/ruohaoguo/avis.
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 403,666
|
2211.12422
|
PiRL: Participant-Invariant Representation Learning for Healthcare
|
Due to individual heterogeneity, performance gaps are observed between generic (one-size-fits-all) models and person-specific models in data-driven health applications. However, in real-world applications, generic models are usually more favorable due to new-user-adaptation issues and system complexities, etc. To improve the performance of the generic model, we propose a representation learning framework that learns participant-invariant representations, named PiRL. The proposed framework utilizes maximum mean discrepancy (MMD) loss and domain-adversarial training to encourage the model to learn participant-invariant representations. Further, a triplet loss, which constrains the model for inter-class alignment of the representations, is utilized to optimize the learned representations for downstream health applications. We evaluated our frameworks on two public datasets related to physical and mental health, for detecting sleep apnea and stress, respectively. As preliminary results, we found the proposed approach shows around a 5% increase in accuracy compared to the baseline.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 332,103
|
2106.09082
|
Zeroth-Order Methods for Convex-Concave Minmax Problems: Applications to
Decision-Dependent Risk Minimization
|
Min-max optimization is emerging as a key framework for analyzing problems of robustness to strategically and adversarially generated data. We propose a random reshuffling-based gradient free Optimistic Gradient Descent-Ascent algorithm for solving convex-concave min-max problems with finite sum structure. We prove that the algorithm enjoys the same convergence rate as that of zeroth-order algorithms for convex minimization problems. We further specialize the algorithm to solve distributionally robust, decision-dependent learning problems, where gradient information is not readily available. Through illustrative simulations, we observe that our proposed approach learns models that are simultaneously robust against adversarial distribution shifts and strategic decisions from the data sources, and outperforms existing methods from the strategic classification literature.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 241,535
|
2410.11324
|
Diffusion-Based Offline RL for Improved Decision-Making in Augmented ARC
Task
|
Effective long-term strategies enable AI systems to navigate complex environments by making sequential decisions over extended horizons. Similarly, reinforcement learning (RL) agents optimize decisions across sequences to maximize rewards, even without immediate feedback. To verify that Latent Diffusion-Constrained Q-learning (LDCQ), a prominent diffusion-based offline RL method, demonstrates strong reasoning abilities in multi-step decision-making, we aimed to evaluate its performance on the Abstraction and Reasoning Corpus (ARC). However, applying offline RL methodologies to enhance strategic reasoning in AI for solving tasks in ARC is challenging due to the lack of sufficient experience data in the ARC training set. To address this limitation, we introduce an augmented offline RL dataset for ARC, called Synthesized Offline Learning Data for Abstraction and Reasoning (SOLAR), along with the SOLAR-Generator, which generates diverse trajectory data based on predefined rules. SOLAR enables the application of offline RL methods by offering sufficient experience data. We synthesized SOLAR for a simple task and used it to train an agent with the LDCQ method. Our experiments demonstrate the effectiveness of the offline RL approach on a simple ARC task, showing the agent's ability to make multi-step sequential decisions and correctly identify answer states. These results highlight the potential of the offline RL approach to enhance AI's strategic reasoning capabilities.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 498,506
|
1911.02265
|
Predictive modeling of brain tumor: A Deep learning approach
|
Image processing techniques can visualize the different anatomical structures of the human body. Recent advancements in the field of deep learning have made it possible to detect the growth of cancerous tissue just from a patient's brain Magnetic Resonance Imaging (MRI) scans. These methods require very high accuracy and meager false negative rates to be of any practical use. This paper presents a Convolutional Neural Network (CNN) based transfer learning approach to classify brain MRI scans into two classes using three pre-trained models. The performances of these models are compared with each other. Experimental results show that the ResNet-50 model achieves the highest accuracy and lowest false negative rate, at 95% and zero respectively. It is followed by the VGG-16 and Inception-V3 models, with accuracies of 90% and 55% respectively.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 152,324
|
1908.00682
|
Attention Guided Low-light Image Enhancement with a Large Scale
Low-light Simulation Dataset
|
Low-light image enhancement is challenging in that it needs to consider not only brightness recovery but also complex issues like color distortion and noise, which usually hide in the dark. Simply adjusting the brightness of a low-light image will inevitably amplify those artifacts. To address this difficult problem, this paper proposes a novel end-to-end attention-guided method based on multi-branch convolutional neural network. To this end, we first construct a synthetic dataset with carefully designed low-light simulation strategies. The dataset is much larger and more diverse than existing ones. With the new dataset for training, our method learns two attention maps to guide the brightness enhancement and denoising tasks respectively. The first attention map distinguishes underexposed regions from well lit regions, and the second attention map distinguishes noises from real textures. With their guidance, the proposed multi-branch decomposition-and-fusion enhancement network works in an input adaptive way. Moreover, a reinforcement-net further enhances color and contrast of the output image. Extensive experiments on multiple datasets demonstrate that our method can produce high fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods by a large margin both quantitatively and visually.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 140,562
|
1712.02449
|
Quantifying how much sensory information in a neural code is relevant
for behavior
|
Determining how much of the sensory information carried by a neural code contributes to behavioral performance is key to understand sensory function and neural information flow. However, there are as yet no analytical tools to compute this information that lies at the intersection between sensory coding and behavioral readout. Here we develop a novel measure, termed the information-theoretic intersection information $I_{II}(S;R;C)$, that quantifies how much of the sensory information carried by a neural response R is used for behavior during perceptual discrimination tasks. Building on the Partial Information Decomposition framework, we define $I_{II}(S;R;C)$ as the part of the mutual information between the stimulus S and the response R that also informs the consequent behavioral choice C. We compute $I_{II}(S;R;C)$ in the analysis of two experimental cortical datasets, to show how this measure can be used to compare quantitatively the contributions of spike timing and spike rates to task performance, and to identify brain areas or neural populations that specifically transform sensory information into choice.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 86,296
|
2306.16146
|
An optimal hierarchical control scheme for smart generation units: an
application to combined steam and electricity generation
|
Optimal management of thermal and energy grids with fluctuating demand and prices requires orchestrating the generation units (GU) among all their operating modes. A hierarchical approach is proposed to control coupled nonlinear energy systems. The high-level hybrid optimization defines the unit commitment, with the optimal transition strategy, and the best production profiles. The low-level dynamic model predictive control (MPC), receiving the set-points from the upper layer, safely governs the systems considering process constraints. To enhance the overall efficiency of the system, a method to optimally start up the GU is presented here: a linear parameter-varying MPC computes the optimal trajectory in closed loop by iteratively linearising the system along the previous optimal solution. The introduction of an intermediate equilibrium state as an additional decision variable permits the reduction of the optimization horizon, while a terminal cost term steers the system to the target set-point. Simulation results show the effectiveness of the proposed approach.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 376,295
|
1709.06772
|
Temporal Pattern Mining from Evolving Networks
|
Recently, evolving networks have become a suitable form to model many real-world complex systems, due to their ability to represent the systems and their constituent entities, the interactions between the entities, and the time-variability of their structure and properties. Designing computational models able to analyze evolving networks becomes relevant in many applications. The goal of this research project is to evaluate the possible contribution of temporal pattern mining techniques to the analysis of evolving networks. In particular, we aim at exploiting available snapshots for the recognition of valuable and potentially useful knowledge about the temporal dynamics exhibited by the network over time, without making any prior assumption about the underlying evolutionary schema. Pattern-based approaches to temporal pattern mining can be exploited to detect and characterize changes exhibited by a network over time, starting from observed snapshots.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 81,171
|
1903.05784
|
Learning Parallax Attention for Stereo Image Super-Resolution
|
Stereo image pairs can be used to improve the performance of super-resolution (SR) since additional information is provided from a second viewpoint. However, it is challenging to incorporate this information for SR since disparities between stereo images vary significantly. In this paper, we propose a parallax-attention stereo super-resolution network (PASSRnet) to integrate the information from a stereo image pair for SR. Specifically, we introduce a parallax-attention mechanism with a global receptive field along the epipolar line to handle different stereo images with large disparity variations. We also propose a new dataset, the largest to date for stereo image SR (namely, Flickr1024). Extensive experiments demonstrate that the parallax-attention mechanism can capture correspondence between stereo images to improve SR performance with a small computational and memory cost. Comparative results show that our PASSRnet achieves the state-of-the-art performance on the Middlebury, KITTI 2012 and KITTI 2015 datasets.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 124,232
|
2402.07946
|
Re-Envisioning Command and Control
|
Future warfare will require Command and Control (C2) decision-making to occur in more complex, fast-paced, ill-structured, and demanding conditions. C2 will be further complicated by operational challenges such as Denied, Degraded, Intermittent, and Limited (DDIL) communications and the need to account for many data streams, potentially across multiple domains of operation. Yet, current C2 practices -- which stem from the industrial era rather than the emerging intelligence era -- are linear and time-consuming. Critically, these approaches may fail to maintain overmatch against adversaries on the future battlefield. To address these challenges, we propose a vision for future C2 based on robust partnerships between humans and artificial intelligence (AI) systems. This future vision is encapsulated in three operational impacts: streamlining the C2 operations process, maintaining unity of effort, and developing adaptive collective knowledge systems. This paper illustrates the envisaged future C2 capabilities, discusses the assumptions that shaped them, and describes how the proposed developments could transform C2 in future warfare.
| true
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 428,910
|
2012.14337
|
WiFresh: Age-of-Information from Theory to Implementation
|
Emerging applications, such as smart factories and fleets of drones, increasingly rely on sharing time-sensitive information for monitoring and control. In such application domains, it is essential to keep information fresh, as outdated information loses its value and can lead to system failures and safety risks. The Age-of-Information is a performance metric that captures how fresh the information is from the perspective of the destination. In this paper, we show that as the congestion in the wireless network increases, the Age-of-Information degrades sharply, leading to outdated information at the destination. Leveraging years of theoretical research, we propose WiFresh: an unconventional architecture that achieves near optimal information freshness in wireless networks of any size, even when the network is overloaded. Our experimental results show that WiFresh can improve information freshness by two orders of magnitude when compared to an equivalent standard WiFi network. We propose and realize two strategies for implementing WiFresh: one at the MAC layer using hardware-level programming and another at the Application layer using Python.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| 213,482
|
2302.12980
|
Frequency Disentangled Learning for Segmentation of Midbrain Structures
from Quantitative Susceptibility Mapping Data
|
One often lacks sufficient annotated samples for training deep segmentation models. This is in particular the case for less common imaging modalities such as Quantitative Susceptibility Mapping (QSM). It has been shown that deep models tend to fit the target function from low to high frequencies. One may hypothesize that such property can be leveraged for better training of deep learning models. In this paper, we exploit this property to propose a new training method based on frequency-domain disentanglement. It consists of two main steps: i) disentangling the image into high- and low-frequency parts and feature learning; ii) frequency-domain fusion to complete the task. The approach can be used with any backbone segmentation network. We apply the approach to the segmentation of the red and dentate nuclei from QSM data which is particularly relevant for the study of parkinsonian syndromes. We demonstrate that the proposed method provides considerable performance improvements for these tasks. We further applied it to three public datasets from the Medical Segmentation Decathlon (MSD) challenge. For two MSD tasks, it provided smaller but still substantial improvements (up to 7 points of Dice), especially under small training set situations.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 347,761
|
2008.07672
|
Ensemble Node Embeddings using Tensor Decomposition: A Case-Study on
DeepWalk
|
Node embeddings have been attracting increasing attention during the past years. In this context, we propose a new ensemble node embedding approach, called TenSemble2Vec, by first generating multiple embeddings using existing techniques and then taking them as multi-view data input to the state-of-the-art tensor decomposition model PARAFAC2 to learn shared lower-dimensional representations of the nodes. Contrary to other embedding methods, our TenSemble2Vec takes advantage of the complementary information from different methods or the same method with different hyper-parameters, which bypasses the challenge of choosing models. Extensive tests using real-world data validate the efficiency of the proposed method.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 192,180
|
2206.01739
|
Mutual- and Self-Prototype Alignment for Semi-supervised Medical Image
Segmentation
|
Semi-supervised learning methods have been explored in medical image segmentation tasks due to the scarcity of pixel-level annotation in real scenarios. Prototype-alignment-based consistency constraints are an intuitive and plausible solution to explore the useful information in the unlabeled data. In this paper, we propose a mutual- and self-prototype alignment (MSPA) framework to better utilize the unlabeled data. Specifically, mutual-prototype alignment enhances the information interaction between labeled and unlabeled data. The mutual-prototype alignment imposes two consistency constraints in reverse directions between the unlabeled and labeled data, which enables consistent embedding and model discriminability on unlabeled data. The proposed self-prototype alignment learns more stable region-wise features within unlabeled images, which optimizes the classification margin in semi-supervised segmentation by boosting the intra-class compactness and inter-class separation in the feature space. Extensive experimental results on three medical datasets demonstrate that with a small amount of labeled data, MSPA achieves large improvements by leveraging the unlabeled data. Our method also outperforms seven state-of-the-art semi-supervised segmentation methods on all three datasets.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 300,589
|
2104.06191
|
Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts
|
This presentation addresses the problem of reconstructing a high-resolution image from multiple lower-resolution snapshots captured from slightly different viewpoints in space and time. Key challenges for solving this problem include (i) aligning the input pictures with sub-pixel accuracy, (ii) handling raw (noisy) images for maximal faithfulness to native camera data, and (iii) designing/learning an image prior (regularizer) well suited to the task. We address these three challenges with a hybrid algorithm building on the insight from Wronski et al. that aliasing is an ally in this setting, with parameters that can be learned end to end, while retaining the interpretability of classical approaches to inverse problems. The effectiveness of our approach is demonstrated on synthetic and real image bursts, setting a new state of the art on several benchmarks and delivering excellent qualitative results on real raw bursts captured by smartphones and prosumer cameras.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 229,991
|
2111.00653
|
SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL
|
The Text-to-SQL task, aiming to translate the natural language of the questions into SQL queries, has drawn much attention recently. One of the most challenging problems of Text-to-SQL is how to generalize the trained model to the unseen database schemas, also known as the cross-domain Text-to-SQL task. The key lies in the generalizability of (i) the encoding method to model the question and the database schema and (ii) the question-schema linking method to learn the mapping between words in the question and tables/columns in the database schema. Focusing on the above two key issues, we propose a Structure-Aware Dual Graph Aggregation Network (SADGA) for cross-domain Text-to-SQL. In SADGA, we adopt the graph structure to provide a unified encoding model for both the natural language question and database schema. Based on the proposed unified modeling, we further devise a structure-aware aggregation method to learn the mapping between the question-graph and schema-graph. The structure-aware aggregation method is featured with Global Graph Linking, Local Graph Linking, and Dual-Graph Aggregation Mechanism. We not only study the performance of our proposal empirically but also achieved 3rd place on the challenging Text-to-SQL benchmark Spider at the time of writing.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 264,291
|
1809.02648
|
Minimally Constrained Stable Switched Systems and Application to
Co-simulation
|
We propose an algorithm to restrict the switching signals of a constrained switched system in order to guarantee its stability, while at the same time attempting to keep the largest possible set of allowed switching signals. Our work is motivated by applications to (co-)simulation, where numerical stability is a hard constraint, but should be attained by restricting as little as possible the allowed behaviours of the simulators. We apply our results to certify the stability of an adaptive co-simulation orchestration algorithm, which selects the optimal switching signal at run-time, as a function of (varying) performance and accuracy requirements.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 107,092
|
2411.15435
|
What Makes a Scene ? Scene Graph-based Evaluation and Feedback for
Controllable Generation
|
While text-to-image generation has been extensively studied, generating images from scene graphs remains relatively underexplored, primarily due to challenges in accurately modeling spatial relationships and object interactions. To fill this gap, we introduce Scene-Bench, a comprehensive benchmark designed to evaluate and enhance the factual consistency in generating natural scenes. Scene-Bench comprises MegaSG, a large-scale dataset of one million images annotated with scene graphs, facilitating the training and fair comparison of models across diverse and complex scenes. Additionally, we propose SGScore, a novel evaluation metric that leverages chain-of-thought reasoning capabilities of multimodal large language models (LLMs) to assess both object presence and relationship accuracy, offering a more effective measure of factual consistency than traditional metrics like FID and CLIPScore. Building upon this evaluation framework, we develop a scene graph feedback pipeline that iteratively refines generated images by identifying and correcting discrepancies between the scene graph and the image. Extensive experiments demonstrate that Scene-Bench provides a more comprehensive and effective evaluation framework compared to existing benchmarks, particularly for complex scene generation. Furthermore, our feedback strategy significantly enhances the factual consistency of image generation models, advancing the field of controllable image generation.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 510,609
|
2110.03252
|
Layer-wise Pruning of Transformer Attention Heads for Efficient Language
Modeling
|
While Transformer-based models have shown impressive language modeling performance, the large computation cost is often prohibitive for practical use. Attention head pruning, which removes unnecessary attention heads in the multihead attention, is a promising technique to solve this problem. However, it does not evenly reduce the overall load because the heavy feedforward module is not affected by head pruning. In this paper, we apply layer-wise attention head pruning on All-attention Transformer so that the entire computation and the number of parameters can be reduced proportionally to the number of pruned heads. While the architecture has the potential to fully utilize head pruning, we propose three training methods that are especially helpful to minimize performance degradation and stabilize the pruning process. Our pruned model shows consistently lower perplexity within a comparable parameter size than Transformer-XL on WikiText-103 language modeling benchmark.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 259,425
|
1707.00117
|
SAM: Semantic Attribute Modulation for Language Modeling and Style
Variation
|
This paper presents a Semantic Attribute Modulation (SAM) for language modeling and style variation. The semantic attribute modulation includes various document attributes, such as titles, authors, and document categories. We consider two types of attributes (title attributes and category attributes) and a flexible attribute selection scheme that automatically scores them via an attribute attention mechanism. The semantic attributes are embedded into the hidden semantic space as the generation inputs. With the attributes properly harnessed, our proposed SAM can generate interpretable texts with regard to the input attributes. Qualitative analysis, including word semantic analysis and attention values, shows the interpretability of SAM. On several typical text datasets, we empirically demonstrate the superiority of the Semantic Attribute Modulated language model with different combinations of document attributes. Moreover, we present a style variation for lyric generation using SAM, which shows a strong connection between the style variation and the semantic attributes.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 76,298
|
2106.10683
|
Solution for Large-scale Long-tailed Recognition with Noisy Labels
|
This is a technical report for the CVPR 2021 AliProducts Challenge. The AliProducts Challenge is a competition proposed for studying the large-scale and fine-grained commodity image recognition problem encountered by world-leading e-commerce companies. Large-scale product recognition simultaneously meets the challenges of noisy annotations, imbalanced (long-tailed) data distribution, and fine-grained classification. In our solution, we adopt state-of-the-art model architectures of both CNNs and Transformers, including ResNeSt, EfficientNetV2, and DeiT. We found that iterative data cleaning, classifier weight normalization, high-resolution finetuning, and test-time augmentation are key components to improve the performance of training with the noisy and imbalanced dataset. Finally, we obtain a 6.4365% mean class error rate on the leaderboard with our ensemble model.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 242,113
|
2406.19532
|
Dataless Quadratic Neural Networks for the Maximum Independent Set
Problem
|
Combinatorial Optimization (CO) addresses many important problems, including the challenging Maximum Independent Set (MIS) problem. Alongside exact and heuristic solvers, differentiable approaches have emerged, often using continuous relaxations of ReLU-based or quadratic objectives. Noting that an MIS in a graph is a Maximum Clique (MC) in its complement, we propose a new quadratic formulation for MIS by incorporating an MC term, improving convergence and exploration. We show that every maximal independent set corresponds to a local minimizer, derive conditions for the MIS size, and characterize stationary points. To solve our non-convex objective, we propose solving parallel multiple initializations using momentum-based gradient descent, complemented by an efficient MIS checking criterion derived from our theory. Therefore, we dub our method as parallelized Clique-Informed Quadratic Optimization for MIS (pCQO-MIS). Our experimental results demonstrate the effectiveness of the proposed method compared to exact, heuristic, sampling, and data-centric approaches. Notably, our method avoids the out-of-distribution tuning and reliance on (un)labeled data required by data-centric methods, while achieving superior MIS sizes and competitive runtime relative to their inference time. Additionally, a key advantage of pCQO-MIS is that, unlike exact and heuristic solvers, the runtime scales only with the number of nodes in the graph, not the number of edges.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 468,451
|
2104.09938
|
Nonlinear Tracking and Rejection using Linear Parameter-Varying Control
|
The Linear Parameter-Varying (LPV) framework has been introduced with the intention of providing stability and performance guarantees for the analysis and controller synthesis of Nonlinear (NL) systems via convex methods. By extending results of the Linear Time-Invariant framework, mainly based on quadratic stability and performance using dissipativity theory, it has been assumed that tracking and disturbance rejection guarantees generalize to NL systems. However, as has been shown in the literature, stability and performance through standard dissipativity are not sufficient to satisfy the desired guarantees in the case of reference tracking and disturbance rejection for nonlinear systems. We propose to solve this problem by the application of incremental dissipativity, which does ensure these specifications. A novel approach is proposed to synthesize and realize an NL controller which is able to guarantee incremental stability and performance for NL systems via convex optimization using methods from the LPV framework. Through simulations and experiments, the presented method is compared to standard LPV controller designs, showing significant performance improvements.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 231,408
|
2412.15129
|
Jet: A Modern Transformer-Based Normalizing Flow
|
In the past, normalizing generative flows have emerged as a promising class of generative models for natural images. This type of model has many modeling advantages: the ability to efficiently compute log-likelihood of the input data, fast generation and simple overall structure. Normalizing flows remained a topic of active research but later fell out of favor, as visual quality of the samples was not competitive with other model classes, such as GANs, VQ-VAE-based approaches or diffusion models. In this paper we revisit the design of the coupling-based normalizing flow models by carefully ablating prior design choices and using computational blocks based on the Vision Transformer architecture, not convolutional neural networks. As a result, we achieve state-of-the-art quantitative and qualitative performance with a much simpler architecture. While the overall visual quality is still behind the current state-of-the-art models, we argue that strong normalizing flow models can help advancing research frontier by serving as building components of more powerful generative models.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 518,953
|
1207.0783
|
Hybrid Template Update System for Unimodal Biometric Systems
|
Semi-supervised template update systems make it possible to automatically take into account the intra-class variability of biometric data over time. Such systems can be inefficient by including too many impostor samples or skipping too many genuine samples. In the first case, the biometric reference drifts from the real biometric data and attracts impostors more often. In the second case, the biometric reference does not evolve quickly enough and also progressively drifts from the real biometric data. We propose a hybrid system using several biometric sub-references in order to increase the performance of self-update systems by reducing the previously cited errors. The proposition is validated for a keystroke-dynamics authentication system (this modality suffers from high variability over time) on two consequent datasets from the state of the art.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 17,194
|
2007.01814
|
DynNet: Physics-based neural architecture design for linear and
nonlinear structural response modeling and prediction
|
Data-driven models for predicting dynamic responses of linear and nonlinear systems are of great importance due to their wide application, from probabilistic analysis to inverse problems such as system identification and damage diagnosis. In this study, a physics-based recurrent neural network model is designed that is able to learn the dynamics of linear and nonlinear multiple-degrees-of-freedom systems given a ground motion. The model is able to estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces. Compared to the most advanced counterparts, this model requires a smaller number of trainable variables while the accuracy of predictions is higher for long trajectories. In addition, the architecture of the recurrent block is inspired by differential equation solver algorithms, and it is expected that this approach yields more generalized solutions. In the training phase, we propose multiple novel techniques to dramatically accelerate the learning process using smaller datasets, such as hard sampling, utilization of a trajectory loss function, and implementation of a trust-region approach. Numerical case studies are conducted to examine the strength of the network to learn different nonlinear behaviors. It is shown that the network is able to capture different nonlinear behaviors of dynamic systems with very high accuracy and with no need for prior information or very large datasets.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 185,541
|
2009.07611
|
Perceiving Traffic from Aerial Images
|
Drones, or UAVs, equipped with different sensors, have been deployed in many places, especially for urban traffic monitoring or last-mile delivery. They provide the ability to control different aspects of traffic given real-time observations, an important pillar for the future of transportation and smart cities. With the increasing use of such machines, many previous state-of-the-art object detectors, which have achieved high performance on front-facing cameras, are being used on UAV datasets. When applied to high-resolution aerial images captured from such datasets, they fail to generalize to the wide range of objects' scales. In order to address this limitation, we propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images. We extend the concept of fields and introduce butterfly fields, a type of composite field that describes the spatial information of output features as well as the scale of the detected object. To overcome occlusion and viewing angle variations that can hinder the localization process, we employ a voting mechanism between related butterfly vectors pointing to the object center. We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 195,991
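The center-voting mechanism can be illustrated compactly. A minimal sketch with a hypothetical grid and hand-set offset vectors; the learned butterfly fields themselves are not reproduced:

```python
import numpy as np

# Each feature location emits a vector pointing at its object's center;
# votes are accumulated on a grid and the peak localises the object even
# when individual parts are occluded.
def vote_for_centers(positions, offsets, grid=(20, 20)):
    acc = np.zeros(grid)
    for (y, x), (dy, dx) in zip(positions, offsets):
        cy, cx = int(round(y + dy)), int(round(x + dx))
        if 0 <= cy < grid[0] and 0 <= cx < grid[1]:
            acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), grid), acc

positions = [(4, 4), (4, 8), (8, 4), (8, 8)]
offsets = [(2, 2), (2, -2), (-2, 2), (-2, -2)]   # all point to (6, 6)
print(vote_for_centers(positions, offsets)[0])
```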
|
2205.13128
|
Cascading Residual Graph Convolutional Network for Multi-Behavior
Recommendation
|
Multi-behavior recommendation exploits multiple types of user-item interactions to alleviate the data sparsity problem faced by the traditional models that often utilize only one type of interaction for recommendation. In real scenarios, users often take a sequence of actions to interact with an item, in order to get more information about the item and thus accurately evaluate whether an item fits personal preference. Those interaction behaviors often obey a certain order, and different behaviors reveal different information or aspects of user preferences towards the target item. Most existing multi-behavior recommendation methods adopt the strategy of first extracting information from different behaviors separately and then fusing it for final prediction. However, they have not exploited the connections between different behaviors to learn user preferences. Besides, they often introduce complex model structures and more parameters to model multiple behaviors, largely increasing the space and time complexity. In this work, we propose a lightweight multi-behavior recommendation model named Cascading Residual Graph Convolutional Network (CRGCN for short), which can explicitly incorporate the connections between different behaviors into the embedding learning process without introducing any additional parameters. In particular, we design a cascading residual graph convolutional network structure, which enables our model to learn user preferences by continuously refining user embeddings across different types of behaviors. The multi-task learning method is adopted to jointly optimize our model based on different behaviors. Extensive experimental results on two real-world benchmark datasets show that CRGCN can substantially outperform state-of-the-art methods. Further studies also analyze the effects of leveraging multi-behaviors in different numbers and orders on the final performance.
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 298,818
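The cascading residual idea, refining shared embeddings behavior by behavior without new parameters, can be sketched in a few lines. Random adjacency matrices stand in for real view/cart/buy interaction graphs, and the propagation rule is a simplified assumption, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 6, 4
E = rng.normal(size=(n_nodes, dim))          # shared initial embeddings

def norm_adj(A):
    d = A.sum(1, keepdims=True).clip(min=1.0)
    return A / d                              # row-normalized propagation

# One graph per behavior, in interaction order (e.g. view -> cart -> buy).
behaviors = [norm_adj(rng.integers(0, 2, (n_nodes, n_nodes)).astype(float))
             for _ in range(3)]

for A_hat in behaviors:          # cascade across behaviors
    E = E + A_hat @ E            # residual graph convolution, no new params
print(E.shape)
```

The residual connection lets each behavior refine, rather than replace, the preferences learned from earlier behaviors in the sequence.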
|
1909.07541
|
A*3D Dataset: Towards Autonomous Driving in Challenging Environments
|
With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection. Existing datasets either represent simple scenarios or provide only day-time data. In this paper, we introduce a new challenging A*3D dataset which consists of RGB images and LiDAR data with significant diversity of scene, time, and weather. The dataset consists of high-density images ($\approx~10$ times more than the pioneering KITTI dataset), heavy occlusions, and a large number of night-time frames ($\approx~3$ times the nuScenes dataset), addressing the gaps in the existing datasets to push the boundaries of tasks in autonomous driving research to more challenging, highly diverse environments. The dataset contains $39\text{K}$ frames, $7$ classes, and $230\text{K}$ 3D object annotations. An extensive 3D object detection benchmark evaluation on the A*3D dataset for various attributes, such as high density and day-time/night-time frames, gives interesting insights into the advantages and limitations of training and testing 3D object detection in real-world settings.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 145,693
|
2311.10502
|
Fast Estimations of Hitting Time of Elitist Evolutionary Algorithms from
Fitness Levels
|
The fitness level method is an easy-to-use tool for estimating the hitting time of elitist evolutionary algorithms. Recently, linear lower and upper bounds by fitness levels have been constructed. But these bounds require recursive computation, which makes them difficult to use in practice. We address this shortcoming with a new directed graph (digraph) method that does not require recursive computation and significantly simplifies the calculation of coefficients in the lower bound. In the method, we select a sub-digraph and divide it into fitness levels, then construct an explicit formula for computing the linear lower bound coefficients using transition probabilities restricted to the sub-digraph. A major advantage of the new method is the derivation of tight lower bounds on fitness functions with shortcuts, which are difficult to achieve using previous fitness level methods. We use three examples (FullyDeceptive, TwoMax1 and Deceptive) to demonstrate that each new lower bound is tight, but previous lower bounds are not. Our work significantly extends the fitness level method from addressing simple fitness functions without shortcuts to more complex functions with shortcuts.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 408,548
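For orientation, here is a worked example of the classic fitness-level bound that this line of work builds on (the paper's new digraph-based lower bounds are not reproduced): for an elitist EA whose probability of leaving level $i$ is at least $s_i$, the hitting time satisfies $E[T] \le \sum_i 1/s_i$. For the (1+1) EA on OneMax with mutation rate $1/n$, a standard choice is $s_i \ge (n-i)/(en)$:

```python
import math

# Fitness-level upper bound for the (1+1) EA on OneMax:
# E[T] <= sum_{i=0}^{n-1} e*n / (n - i)  ~  e * n * H_n = O(n log n).
def onemax_upper_bound(n):
    return sum(math.e * n / (n - i) for i in range(n))

for n in (10, 100, 1000):
    print(n, round(onemax_upper_bound(n), 1))
```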
|
2310.03037
|
Quantum image edge detection based on eight-direction Sobel operator for
NEQR
|
Quantum Sobel edge detection (QSED) is a kind of algorithm for image edge detection using quantum mechanisms, which can solve the real-time problem encountered by classical algorithms. However, the existing QSED algorithms only consider two- or four-direction Sobel operators, which leads to a certain loss of edge detail information in some high-definition images. In this paper, a novel QSED algorithm based on the eight-direction Sobel operator is proposed, which not only reduces the loss of edge information, but also simultaneously calculates the gradient values in eight directions for all pixels in a quantum image. In addition, the concrete quantum circuits, which consist of gradient calculation, non-maximum suppression, double threshold detection and edge tracking units, are designed in detail. For a 2^n x 2^n image with q gray levels, the complexity of our algorithm can be reduced to O(n^2 + q^2), which is lower than other existing classical or quantum algorithms. And the simulation experiment demonstrates that our algorithm can detect more edge information, especially diagonal edges, than the two- and four-direction QSED algorithms.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 397,111
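The classical counterpart of the eight-direction operator is easy to write down (the paper's quantum circuits are not reproduced here). Kernels for 0°, 45°, 90°, and 135° are shown; the remaining four directions are their negations, so taking absolute responses covers all eight:

```python
import numpy as np
from scipy.ndimage import convolve

# Classical eight-direction Sobel edge detection.
k0   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # 0 deg
k45  = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float)  # 45 deg
k90  = k0.T                                                    # 90 deg
k135 = np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], float)  # 135 deg

def eight_direction_sobel(img):
    responses = [convolve(img, k) for k in (k0, k45, k90, k135)]
    return np.max(np.abs(responses), axis=0)   # strongest of 8 directions

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                # vertical step edge
print(eight_direction_sobel(img))
```

The diagonal kernels are what recover the edge detail that two- and four-direction variants miss.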
|
2107.09760
|
An Exploration of Exploration: Measuring the ability of lexicase
selection to find obscure pathways to optimality
|
Parent selection algorithms (selection schemes) steer populations through a problem's search space, often trading off between exploitation and exploration. Understanding how selection schemes affect exploitation and exploration within a search space is crucial to tackling increasingly challenging problems. Here, we introduce an "exploration diagnostic" that diagnoses a selection scheme's capacity for search space exploration. We use our exploration diagnostic to investigate the exploratory capacity of lexicase selection and several of its variants: epsilon lexicase, down-sampled lexicase, cohort lexicase, and novelty-lexicase. We verify that lexicase selection out-explores tournament selection, and we show that lexicase selection's exploratory capacity can be sensitive to the ratio between population size and the number of test cases used for evaluating candidate solutions. Additionally, we find that relaxing lexicase's elitism with epsilon lexicase can further improve exploration. Both down-sampling and cohort lexicase -- two techniques for applying random subsampling to test cases -- degrade lexicase's exploratory capacity; however, we find that cohort partitioning better preserves lexicase's exploratory capacity than down-sampling. Finally, we find evidence that novelty-lexicase's addition of novelty test cases can degrade lexicase's capacity for exploration. Overall, our findings provide hypotheses for further exploration and actionable insights and recommendations for using lexicase selection. Additionally, this work demonstrates the value of selection scheme diagnostics as a complement to more conventional benchmarking approaches to selection scheme analysis.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 247,113
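Standard lexicase selection, the scheme being diagnosed above, fits in a few lines; this is the textbook algorithm, not the paper's diagnostic tooling:

```python
import random

def lexicase_select(population, errors):
    """population: list of candidates; errors[i][t]: error of candidate i on test t.

    Filter the population through test cases in a random order, keeping
    only candidates that are elite on each case in turn.
    """
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    pool = list(range(len(population)))
    for t in cases:
        best = min(errors[i][t] for i in pool)
        pool = [i for i in pool if errors[i][t] == best]
        if len(pool) == 1:
            break
    return population[random.choice(pool)]

pop = ["a", "b", "c"]
errs = [[0, 3, 1], [1, 0, 2], [0, 0, 5]]
print(lexicase_select(pop, errs))
```

Because the case ordering is re-randomized for every parent selection, specialists on rare case combinations can still win, which is the source of lexicase's exploratory capacity.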
|
2306.14219
|
Total Error Sheets for Datasets (TES-D) -- A Critical Guide to
Documenting Online Platform Datasets
|
This paper proposes a template for documenting datasets that have been collected from online platforms for research purposes. The template should help to critically reflect on data quality and increase transparency in research fields that make use of online platform data. The paper describes our motivation, outlines the procedure for developing a specific documentation template that we refer to as TES-D (Total Error Sheets for Datasets) and has the current version of the template, guiding questions and a manual attached as supplementary material. The TES-D approach builds upon prior work in designing error frameworks for data from online platforms, namely the Total Error Framework for digital traces of human behavior on online platforms (TED-On, https://doi.org/10.1093/poq/nfab018).
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| 375,595
|
2305.17672
|
New Cycle-based Formulation, Cost Function, and Heuristics for DC OPF
Based Controlled Islanding
|
This paper presents a new formulation for intentional controlled islanding (ICI) of power transmission grids based on a mixed-integer linear programming (MILP) DC optimal power flow (OPF) model. We highlight several deficiencies of the most well-known formulation for this problem and propose new enhancements for their improvement. In particular, we propose a new alternative optimization objective that may be more suitable for ICI than the minimization of load shedding, a new set of island connectivity constraints, a new set of constraints for DC OPF with switching, and a new MILP heuristic to find initial feasible solutions for ICI. It is shown that the proposed improvements help to reduce the final optimality gaps as compared to the benchmark model on several test instances.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 368,697
|
2106.05094
|
Semi-supervised lane detection with Deep Hough Transform
|
Current work on lane detection relies on large manually annotated datasets. We reduce the dependency on annotations by leveraging massive cheaply available unlabelled data. We propose a novel loss function exploiting geometric knowledge of lanes in Hough space, where a lane can be identified as a local maximum. By splitting lanes into separate channels, we can localize each lane via simple global max-pooling. The location of the maximum encodes the layout of a lane, while the intensity indicates the probability of a lane being present. Maximizing the log-probability of the maximal bins helps neural networks find lanes without labels. On the CULane and TuSimple datasets, we show that the proposed Hough Transform loss improves performance significantly by learning from large amounts of unlabelled images.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 239,967
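A minimal sketch of the loss idea, maximizing the log-probability of the maximal Hough bin per lane channel; the array shapes and softmax-over-bins formulation are assumptions, not the authors' implementation:

```python
import numpy as np

# Each lane channel is a Hough-space score map; global max-pooling picks
# the bin whose (angle, offset) encodes the lane, and the loss pushes that
# bin's probability up, requiring no pixel-level labels.
def hough_max_bin_loss(hough_maps):
    """hough_maps: (num_lanes, angles, offsets) array of scores."""
    losses = []
    for m in hough_maps:
        p = np.exp(m - m.max())
        p /= p.sum()                      # softmax over all Hough bins
        losses.append(-np.log(p.max()))   # log-prob of the maximal bin
    return float(np.mean(losses))

maps = np.random.default_rng(0).normal(size=(2, 30, 40))
print(hough_max_bin_loss(maps))
```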
|
2103.10284
|
SG-Net: Spatial Granularity Network for One-Stage Video Instance
Segmentation
|
Video instance segmentation (VIS) is a new and critical task in computer vision. To date, top-performing VIS methods extend the two-stage Mask R-CNN by adding a tracking branch, leaving plenty of room for improvement. In contrast, we approach the VIS task from a new perspective and propose a one-stage spatial granularity network (SG-Net). Compared to the conventional two-stage methods, SG-Net demonstrates four advantages: 1) Our method has a one-stage compact architecture and each task head (detection, segmentation, and tracking) is crafted interdependently so they can effectively share features and enjoy the joint optimization; 2) Our mask prediction is dynamically performed on the sub-regions of each detected instance, leading to high-quality masks of fine granularity; 3) Each of our task predictions avoids using expensive proposal-based RoI features, resulting in much reduced runtime complexity per instance; 4) Our tracking head models objects' centerness movements for tracking, which effectively enhances the tracking robustness to different object appearances. In evaluation, we present state-of-the-art comparisons on the YouTube-VIS dataset. Extensive experiments demonstrate that our compact one-stage method can achieve improved performance in both accuracy and inference speed. We hope our SG-Net could serve as a strong and flexible baseline for the VIS task. Our code will be available.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 225,409
|
1704.06918
|
Neural Machine Translation via Binary Code Prediction
|
In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English-Japanese bidirectional translation tasks show that the proposed models achieve BLEU scores that approach those of the softmax, while reducing memory usage to less than 1/10 and improving decoding speed on CPUs by x5 to x10.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 72,255
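The core trick, replacing a |V|-way softmax with B = log2 |V| binary predictions, can be sketched as follows; the identity code assignment and the loss form are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

V, B = 16, 4                                # here B = log2(V)
# Assign each vocabulary word a B-bit code (bit b of word w).
codes = np.array([[(w >> b) & 1 for b in range(B)] for w in range(V)])

def decode(bit_logits):
    """Threshold the B logits, return the word with the nearest code."""
    bits = (bit_logits > 0).astype(int)
    return int(np.argmin(np.abs(codes - bits).sum(axis=1)))

def bit_loss(bit_logits, target_word):
    """Sum of per-bit binary cross-entropies against the target's code."""
    t = codes[target_word]
    p = 1.0 / (1.0 + np.exp(-bit_logits))
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).sum())

logits = np.array([3.0, -2.0, 4.0, -1.0])   # bits 1,0,1,0 -> word 5
print(decode(logits), bit_loss(logits, 5))
```

Error-correcting variants would use codes longer than log2 |V| so that a few flipped bits still decode to the right word.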
|
2203.10229
|
Reinforcement Learned Distributed Multi-Robot Navigation with Reciprocal
Velocity Obstacle Shaped Rewards
|
The challenges to solving the collision avoidance problem lie in adaptively choosing optimal robot velocities in complex scenarios full of interactive obstacles. In this paper, we propose a distributed approach for multi-robot navigation which combines the concept of reciprocal velocity obstacle (RVO) and the scheme of deep reinforcement learning (DRL) to solve the reciprocal collision avoidance problem under limited information. The novelty of this work is threefold: (1) using a set of sequential VO and RVO vectors to represent the interactive environmental states of static and dynamic obstacles, respectively; (2) developing a bidirectional recurrent module based neural network, which maps the states of a varying number of surrounding obstacles to the actions directly; (3) developing a RVO area and expected collision time based reward function to encourage reciprocal collision avoidance behaviors and trade off between collision risk and travel time. The proposed policy is trained through simulated scenarios and updated by the actor-critic based DRL algorithm. We validate the policy in complex environments with various numbers of differential drive robots and obstacles. The experiment results demonstrate that our approach outperforms the state-of-the-art methods and other learning-based approaches in terms of the success rate, travel time, and average speed. Source code of this approach is available at https://github.com/hanruihua/rl_rvo_nav.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 286,453
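A minimal sketch of an RVO-shaped reward in the spirit described above, trading progress against expected collision time; the weights and functional form are assumptions, not the paper's exact shaping:

```python
# Penalise choosing a velocity inside a reciprocal velocity obstacle, more
# strongly the sooner a collision is expected, and reward goal progress.
def rvo_reward(progress, in_rvo, expected_collision_time,
               w_prog=1.0, w_rvo=0.5, horizon=5.0):
    r = w_prog * progress
    if in_rvo:
        r -= w_rvo * max(0.0, (horizon - expected_collision_time) / horizon)
    return r

# A velocity inside an RVO with a collision expected in 1.5 s is penalised
# but can still pay off if it makes enough progress toward the goal.
print(rvo_reward(progress=0.2, in_rvo=True, expected_collision_time=1.5))
```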
|
0803.2973
|
Rule Generalisation in Intrusion Detection Systems using Snort
|
Intrusion Detection Systems (ids) provide an important layer of security for computer systems and networks, and are becoming more and more necessary as reliance on Internet services increases and systems with sensitive data are more commonly open to Internet access. An ids' responsibility is to detect suspicious or unacceptable system and network activity and to alert a systems administrator to this activity. The majority of ids use a set of signatures that define what suspicious traffic is, and Snort is one popular and actively developed open-source ids that uses such a set of signatures known as Snort rules. Our aim is to identify a way in which Snort could be developed further by generalising rules to identify novel attacks. In particular, we attempted to relax and vary the conditions and parameters of current Snort rules, using a similar approach to classic rule learning operators such as generalisation and specialisation. We demonstrate the effectiveness of our approach through experiments with standard datasets and show that we are able to detect previously undetected variants of various attacks. We conclude by discussing the general effectiveness and appropriateness of generalisation in Snort-based ids rule processing.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| 1,468
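A minimal sketch of the generalisation-operator idea on a toy rule representation (real Snort rule parsing is not attempted): relax one condition of a signature at a time, e.g. widening a port constraint or shortening a content match:

```python
def generalise(rule):
    """Yield relaxed variants of a rule given as a dict of conditions."""
    for key in list(rule):
        relaxed = dict(rule)
        if key == "dst_port":
            relaxed[key] = "any"              # drop the port constraint
        elif key == "content" and len(rule[key]) > 4:
            relaxed[key] = rule[key][:-2]     # shorten the payload match
        else:
            continue
        yield relaxed

rule = {"proto": "tcp", "dst_port": "80", "content": "cmd.exe"}
for variant in generalise(rule):
    print(variant)
```

Each relaxed variant matches a superset of the original rule's traffic, which is how generalisation can catch attack variants the exact signature misses, at the cost of potential false positives.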
|
2011.09353
|
Generic Ontology Design Patterns: Roles and Change over Time
|
In this chapter we propose Generic Ontology Design Patterns, GODPs, as a methodology for representing and instantiating ontology design patterns in a way that is adaptable, and allows domain experts (and other users) to safely use them without cluttering their ontologies.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 207,158
|
2003.01184
|
Variational inference formulation for a model-free simulation of a
dynamical system with unknown parameters by a recurrent neural network
|
We propose a recurrent neural network for a "model-free" simulation of a dynamical system with unknown parameters without prior knowledge. The deep learning model aims to jointly learn the nonlinear time marching operator and the effects of the unknown parameters from a time series dataset. We assume that the time series data set consists of an ensemble of trajectories for a range of the parameters. The learning task is formulated as a statistical inference problem by considering the unknown parameters as random variables. A latent variable is introduced to model the effects of the unknown parameters, and a variational inference method is employed to simultaneously train probabilistic models for the time marching operator and an approximate posterior distribution for the latent variable. Unlike the classical variational inference, where a factorized distribution is used to approximate the posterior, we employ a feedforward neural network supplemented by an encoder recurrent neural network to develop a more flexible probabilistic model. The approximate posterior distribution makes an inference on a trajectory to identify the effects of the unknown parameters. The time marching operator is approximated by a recurrent neural network, which takes a latent state sampled from the approximate posterior distribution as one of the input variables, to compute the time evolution of the probability distribution conditioned on the latent variable. In the numerical experiments, it is shown that the proposed variational inference model makes a more accurate simulation compared to the standard recurrent neural networks. It is found that the proposed deep learning model is capable of correctly identifying the dimensions of the random parameters and learning a representation of complex time series data.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 166,562
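The variational objective behind such a model can be written compactly. A minimal sketch assuming a diagonal-Gaussian approximate posterior and a standard-normal prior (the paper's networks are not reproduced):

```python
import numpy as np

# An encoder summarises a trajectory into q(z) = N(mu, diag(exp(log_var)));
# a sample of z conditions the time-marching model, and the ELBO trades
# reconstruction quality against KL(q || N(0, I)).
def elbo(recon_log_lik, mu, log_var):
    # Closed-form KL between diagonal Gaussian and standard normal:
    # 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon_log_lik - kl

mu, log_var = np.array([0.3, -0.1]), np.array([-0.5, 0.2])
print(elbo(recon_log_lik=-12.4, mu=mu, log_var=log_var))
```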
|
2310.06572
|
Deep Learning reconstruction with uncertainty estimation for $\gamma$
photon interaction in fast scintillator detectors
|
This article presents a physics-informed deep learning method for the quantitative estimation of the spatial coordinates of gamma interactions within a monolithic scintillator, with a focus on Positron Emission Tomography (PET) imaging. A Density Neural Network approach is designed to estimate the 2-dimensional gamma photon interaction coordinates in a fast lead tungstate (PbWO4) monolithic scintillator detector. We introduce a custom loss function to estimate the inherent uncertainties associated with the reconstruction process and to incorporate the physical constraints of the detector. This unique combination allows for more robust and reliable position estimations, and the obtained results demonstrate the effectiveness of the proposed approach and highlight the significant benefits of the uncertainty estimation. We discuss its potential impact on improving PET imaging quality and show how the results can be used to improve the exploitation of the model, to bring benefits to the application, and how to evaluate the validity of a given prediction and the associated uncertainties. Importantly, our proposed methodology extends beyond this specific use case, as it can be generalized to other applications beyond PET imaging.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 398,635
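The uncertainty-aware part of such a loss is commonly a Gaussian negative log-likelihood over predicted means and variances; a minimal sketch of that term (the paper's physics-based constraint terms are omitted):

```python
import numpy as np

# The model outputs a mean and a log-variance per coordinate; the Gaussian
# NLL penalises both position error and overconfident variance estimates:
# loss = 0.5 * log(sigma^2) + (y - mu)^2 / (2 * sigma^2), averaged.
def gaussian_nll(mu, log_var, y):
    return float(np.mean(0.5 * log_var + 0.5 * (y - mu) ** 2 / np.exp(log_var)))

mu = np.array([1.2, 3.4])        # predicted (x, y) interaction position
log_var = np.array([-1.0, 0.5])  # predicted log-variances (uncertainty)
y = np.array([1.0, 3.0])         # ground-truth position
print(gaussian_nll(mu, log_var, y))
```

At inference time the predicted variance gives a per-event confidence that can be used to filter or down-weight unreliable position estimates.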
|
2311.09783
|
Investigating Data Contamination in Modern Benchmarks for Large Language
Models
|
Recent observations have underscored a disparity between the inflated benchmark scores and the actual performance of LLMs, raising concerns about potential contamination of evaluation benchmarks. This issue is especially critical for closed-source models and certain open-source models where training data transparency is lacking. In this paper we study data contamination by proposing two methods tailored for both open-source and proprietary LLMs. We first introduce a retrieval-based system to explore potential overlaps between evaluation benchmarks and pretraining corpora. We further present a novel investigation protocol named \textbf{T}estset \textbf{S}lot Guessing (\textit{TS-Guessing}), applicable to both open and proprietary models. This approach entails masking a wrong answer in a multiple-choice question and prompting the model to fill in the gap. Additionally, it involves obscuring an unlikely word in an evaluation example and asking the model to produce it. We find that certain commercial LLMs could surprisingly guess the missing option in various test sets. Specifically, in the TruthfulQA benchmark, we find that LLMs exhibit notable performance improvement when provided with additional metadata in the benchmark. Further, in the MMLU benchmark, ChatGPT and GPT-4 demonstrated an exact match rate of 52\% and 57\%, respectively, in guessing the missing options in benchmark test data. We hope these results underscore the need for more robust evaluation methodologies and benchmarks in the field.
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 408,294
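A minimal sketch of constructing a TS-Guessing style probe; the prompt wording below is illustrative, not the paper's exact template:

```python
# Mask one *wrong* option of a benchmark question and ask the model to
# reproduce it verbatim; success suggests the test item was memorised
# during training rather than answered from knowledge.
def ts_guessing_prompt(question, options, wrong_idx):
    shown = [f"{chr(65 + i)}. {'[MASK]' if i == wrong_idx else o}"
             for i, o in enumerate(options)]
    return (f"{question}\n" + "\n".join(shown) +
            "\nFill in the [MASK] option exactly as it appears in the benchmark.")

print(ts_guessing_prompt(
    "What is the boiling point of water at sea level?",
    ["100 C", "90 C", "50 C", "0 C"],
    wrong_idx=2))
```

Masking a wrong option is the key design choice: a model cannot infer it from the question alone, so reproducing it exactly is evidence of contamination.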
|
1910.03883
|
Second-order coding rates for key distillation in quantum key
distribution
|
The security of quantum key distribution has traditionally been analyzed in either the asymptotic or non-asymptotic regimes. In this paper, we provide a bridge between these two regimes, by determining second-order coding rates for key distillation in quantum key distribution under collective attacks. Our main result is a formula that characterizes the backoff from the known asymptotic formula for key distillation -- our formula incorporates the reliability and security of the protocol, as well as the mutual information variances to the legitimate receiver and the eavesdropper. In order to determine secure key rates against collective attacks, one should perform a joint optimization of the Holevo information and the Holevo information variance to the eavesdropper. We show how to do so by analyzing several examples, including the six-state, BB84, and continuous-variable quantum key distribution protocols (the last involving Gaussian modulation of coherent states along with heterodyne detection). The technical contributions of this paper include one-shot and second-order analyses of private communication over a compound quantum wiretap channel with fixed marginal and key distillation over a compound quantum wiretap source with fixed marginal. We also establish the second-order asymptotics of the smooth max-relative entropy of quantum states acting on a separable Hilbert space, and we derive a formula for the Holevo information variance of a Gaussian ensemble of Gaussian states.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 148,620
|
2008.10224
|
Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep
Reinforcement Learning Approach
|
Industrial robot manipulators are playing a more significant role in modern manufacturing industries. Though peg-in-hole assembly is a common industrial task which has been extensively researched, safely solving complex high precision assembly in an unstructured environment remains an open problem. Reinforcement Learning (RL) methods have been proven successful in solving manipulation tasks autonomously. However, RL is still not widely adopted on real robotic systems because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method to solve peg-in-hole tasks with position uncertainty of the hole. We propose the use of an off-policy model-free reinforcement learning method and bootstrap the training speed by using several transfer learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks on a variety of environments.
| false
| false
| false
| false
| false
| false
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 192,941
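The domain-randomization ingredient mentioned above can be sketched as an episode-reset configuration sampler; every parameter name and range here is a hypothetical placeholder:

```python
import random

# At each training episode the hole pose and physical parameters are
# re-sampled so the learned insertion policy tolerates the real-world
# position uncertainty it will face after sim2real transfer.
def randomized_episode_config(pos_noise=0.002, stiffness_range=(200, 800)):
    return {
        "hole_offset_xy": (random.uniform(-pos_noise, pos_noise),
                           random.uniform(-pos_noise, pos_noise)),
        "contact_stiffness": random.uniform(*stiffness_range),
        "friction": random.uniform(0.4, 1.0),
    }

print(randomized_episode_config())
```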
|
1708.00420
|
Impact of different time series aggregation methods on optimal energy
system design
|
Modelling renewable energy systems is a computationally demanding task due to the high fluctuation of supply and demand time series. To reduce the scale of these, this paper discusses different methods for their aggregation into typical periods. Each aggregation method has previously been applied to a different type of energy system model, making the methods hardly comparable. To overcome this, the different aggregation methods are first extended so that they can be applied to all types of multidimensional time series and then compared by applying them to different energy system configurations and analyzing their impact on the cost-optimal design. It was found that regardless of the method, time series aggregation allows for significantly reduced computational resources. Nevertheless, averaged values lead to underestimation of the real system cost in comparison to the use of representative periods from the original time series. The aggregation method itself, e.g. k-means clustering, plays a minor role. More significant is the system considered: energy systems utilizing centralized resources require fewer typical periods for a feasible system design in comparison to systems with a higher share of renewable feed-in. Furthermore, for energy systems based on seasonal storage, the integration of typical periods into currently existing models is not suitable.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 78,209
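A minimal sketch of one of the compared aggregation methods, k-means clustering of daily periods from an hourly profile; the tiny k-means loop and the synthetic series are illustrative assumptions:

```python
import numpy as np

# Reshape an hourly profile into daily periods, cluster them, and keep the
# centroids (with occurrence weights) as representative typical periods.
def typical_periods(series, period_len=24, k=4, iters=50, seed=0):
    days = series[: len(series) // period_len * period_len]
    days = days.reshape(-1, period_len)
    rng = np.random.default_rng(seed)
    centroids = days[rng.choice(len(days), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((days[:, None] - centroids) ** 2).sum(-1), axis=1)
        new_c = []
        for j in range(k):
            members = days[labels == j]
            # Keep the old centroid if a cluster happens to empty out.
            new_c.append(members.mean(0) if len(members) else centroids[j])
        centroids = np.array(new_c)
    weights = np.bincount(labels, minlength=k) / len(days)
    return centroids, weights   # representative periods and their shares

hourly = np.sin(np.linspace(0, 60 * np.pi, 24 * 120)) + \
         np.random.default_rng(1).normal(0, 0.1, 24 * 120)
centroids, weights = typical_periods(hourly)
print(centroids.shape, weights)
```

The weights record how often each typical period occurs, so the reduced model can scale annual costs correctly.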
|
1707.07301
|
Deep Optical Flow Estimation Via Multi-Scale Correspondence Structure
Learning
|
As an important and challenging problem in computer vision, learning based optical flow estimation aims to discover the intrinsic correspondence structure between two adjacent video frames through statistical learning. Therefore, a key issue to solve in this area is how to effectively model the multi-scale correspondence structure properties in an adaptive end-to-end learning fashion. Motivated by this observation, we propose an end-to-end multi-scale correspondence structure learning (MSCSL) approach for optical flow estimation. In principle, the proposed MSCSL approach is capable of effectively capturing the multi-scale inter-image-correlation correspondence structures within a multi-level feature space from deep learning. Moreover, the proposed MSCSL approach builds a spatial Conv-GRU neural network model to adaptively model the intrinsic dependency relationships among these multi-scale correspondence structures. Finally, the above procedures for correspondence structure learning and multi-scale dependency modeling are implemented in a unified end-to-end deep learning framework. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 77,597
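The correspondence-structure ingredient can be illustrated with a plain correlation (cost) volume between two frames' feature maps, which a multi-scale variant would compute at several resolutions; the shapes and displacement range are assumptions:

```python
import numpy as np

def correlation_volume(f1, f2, max_disp=2):
    """f1, f2: (C, H, W) feature maps; returns (D, H, W) matching scores,
    one channel per candidate displacement within +/- max_disp."""
    C, H, W = f1.shape
    vols = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = np.roll(np.roll(f2, dy, axis=1), dx, axis=2)
            vols.append((f1 * shifted).sum(axis=0) / C)
    return np.stack(vols)

f1 = np.random.default_rng(0).normal(size=(8, 16, 16))
f2 = np.roll(f1, 1, axis=2)     # frame 2 is frame 1 shifted right by 1 px
print(correlation_volume(f1, f2).shape)   # (25, 16, 16)
```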