id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.09948 | Number-Adaptive Prototype Learning for 3D Point Cloud Semantic Segmentation | 3D point cloud semantic segmentation is one of the fundamental tasks for 3D scene understanding and has been widely used in metaverse applications. Many recent 3D semantic segmentation methods learn a single prototype (classifier weights) for each semantic class, and classify 3D points according to their nearest prototype. However, learning only one prototype for each class limits the model's ability to describe the high-variance patterns within a class. Instead of learning a single prototype for each class, in this paper, we propose to use an adaptive number of prototypes to dynamically describe the different point patterns within a semantic class. With the powerful capability of the vision transformer, we design a Number-Adaptive Prototype Learning (NAPL) model for point cloud semantic segmentation. To train our NAPL model, we propose a simple yet effective prototype dropout training strategy, which enables our model to adaptively produce prototypes for each class. The experimental results on the SemanticKITTI dataset demonstrate that our method achieves a 2.3% mIoU improvement over the baseline model based on the point-wise classification paradigm. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,732 |
1612.03273 | Towards an Automated Image De-fencing Algorithm Using Sparsity | Conventional approaches to image de-fencing suffer from non-robust fence detection and are limited to processing images of static scenes. In this position paper, we propose an automatic de-fencing algorithm for images of dynamic scenes. We divide the problem of image de-fencing into the tasks of automated fence detection, motion estimation and fusion of data from multiple frames of a captured video of the dynamic scene. Fences are detected automatically using two approaches, namely, employing Gabor filter and a machine learning method. We cast the fence removal problem in an optimization framework, by modeling the formation of the degraded observations. The inverse problem is solved using split Bregman technique assuming total variation of the de-fenced image as the regularization constraint. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 65,357 |
2105.06381 | Class-Incremental Learning for Wireless Device Identification in IoT | Deep Learning (DL) has been utilized pervasively in the Internet of Things (IoT). One typical application of DL in IoT is device identification from wireless signals, namely Non-cryptographic Device Identification (NDI). However, learning components in NDI systems have to evolve to adapt to operational variations; such a paradigm is termed Incremental Learning (IL). Various IL algorithms have been proposed, and many of them require dedicated space to store an increasing amount of historical data; therefore, they are not suitable for IoT or mobile applications. Meanwhile, conventional IL schemes cannot provide satisfying performance when historical data are not available. In this paper, we address the IL problem in NDI from a new perspective. First, we provide a new metric to measure the degree of topological maturity of DNN models from the degree of conflict of class-specific fingerprints. We discover that an important cause of performance degradation in IL-enabled NDI is the conflict of devices' fingerprints. Second, we show that conventional IL schemes can lead to low topological maturity of DNN models in NDI systems. Third, we propose a new Channel Separation Enabled Incremental Learning (CSIL) scheme that does not use historical data, in which our strategy can automatically separate devices' fingerprints in different learning stages and avoid potential conflict. Finally, we evaluate the effectiveness of the proposed framework using real data from ADS-B (Automatic Dependent Surveillance-Broadcast), an application of IoT in aviation. The proposed framework has the potential to be applied to accurate identification of IoT devices in a variety of IoT applications and services. Data and code are available at IEEE Dataport (DOI: 10.21227/1bxc-ke87) and \url{https://github.com/pcwhy/CSIL}. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | true | false | false | 235,107 |
2304.03955 | Robust Deep Learning Models Against Semantic-Preserving Adversarial Attack | Deep learning models can be fooled by small $l_p$-norm adversarial perturbations and natural perturbations in terms of attributes. Although the robustness against each perturbation has been explored, it remains a challenge to address the robustness against joint perturbations effectively. In this paper, we study the robustness of deep learning models against joint perturbations by proposing a novel attack mechanism named Semantic-Preserving Adversarial (SPA) attack, which can then be used to enhance adversarial training. Specifically, we introduce an attribute manipulator to generate natural and human-comprehensible perturbations and a noise generator to generate diverse adversarial noises. Based on such combined noises, we optimize both the attribute value and the diversity variable to generate jointly-perturbed samples. For robust training, we adversarially train the deep learning model against the generated joint perturbations. Empirical results on four benchmarks show that the SPA attack causes a larger performance decline with small $l_{\infty}$ norm-ball constraints compared to existing approaches. Furthermore, our SPA-enhanced training outperforms existing defense methods against such joint perturbations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 357,010 |
2207.12208 | Series2Graph: Graph-based Subsequence Anomaly Detection for Time Series | Subsequence anomaly detection in long sequences is an important problem with applications in a wide range of domains. However, the approaches proposed so far in the literature have severe limitations: they either require prior domain knowledge used to design the anomaly discovery algorithms, or become cumbersome and expensive to use in situations with recurrent anomalies of the same type. In this work, we address these problems, and propose an unsupervised method suitable for domain agnostic subsequence anomaly detection. Our method, Series2Graph, is based on a graph representation of a novel low-dimensionality embedding of subsequences. Series2Graph needs neither labeled instances (like supervised techniques) nor anomaly-free data (like zero-positive learning techniques), and identifies anomalies of varying lengths. The experimental results, on the largest set of synthetic and real datasets used to date, demonstrate that the proposed approach correctly identifies single and recurrent anomalies without any prior knowledge of their characteristics, outperforming by a large margin several competing approaches in accuracy, while being up to orders of magnitude faster. This paper has appeared in VLDB 2020. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 309,932 |
2308.10716 | Color Prompting for Data-Free Continual Unsupervised Domain Adaptive Person Re-Identification | Unsupervised domain adaptive person re-identification (Re-ID) methods alleviate the burden of data annotation through generating pseudo supervision messages. However, real-world Re-ID systems, with continuously accumulating data streams, simultaneously demand more robust adaptation and anti-forgetting capabilities. Methods based on image rehearsal address the forgetting issue with limited extra storage but carry the risk of privacy leakage. In this work, we propose a Color Prompting (CoP) method for data-free continual unsupervised domain adaptive person Re-ID. Specifically, we employ a lightweight prompter network to fit the color distribution of the current task together with Re-ID training. Then, for the incoming new tasks, the learned color distribution serves as color style transfer guidance to transfer the images into past styles. CoP achieves accurate color style recovery for past tasks with adequate data diversity, leading to superior anti-forgetting effects compared with image rehearsal methods. Moreover, CoP demonstrates strong generalization performance for fast adaptation into new domains, given only a small amount of unlabeled images. Extensive experiments demonstrate that after the continual training pipeline the proposed CoP achieves 6.7% and 8.1% average rank-1 improvements over the replay method on seen and unseen domains, respectively. The source code for this work is publicly available at https://github.com/vimar-gu/ColorPromptReID. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,852 |
2008.12442 | Semi-supervised Learning with the EM Algorithm: A Comparative Study between Unstructured and Structured Prediction | Semi-supervised learning aims to learn prediction models from both labeled and unlabeled samples. There has been extensive research in this area. Among existing work, generative mixture models with Expectation-Maximization (EM) are a popular method due to their clear statistical properties. However, existing literature on EM-based semi-supervised learning largely focuses on unstructured prediction, assuming that samples are independent and identically distributed. Studies on EM-based semi-supervised approaches in structured prediction are limited. This paper aims to fill the gap through a comparative study between unstructured and structured methods in EM-based semi-supervised learning. Specifically, we compare their theoretical properties and find that both methods can be considered a generalization of self-training with soft class assignment of unlabeled samples, but the structured method additionally considers structural constraints in soft class assignment. We conducted a case study on real-world flood mapping datasets to compare the two methods. Results show that structured EM is more robust to class confusion caused by noise and obstacles in features in the context of the flood mapping application. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 193,576 |
2008.07157 | Where to Map? Iterative Rover-Copter Path Planning for Mars Exploration | In addition to conventional ground rovers, the Mars 2020 mission will send a helicopter to Mars. The copter's high-resolution data helps the rover to identify small hazards such as steps and pointy rocks, as well as providing rich textual information useful for predicting perception performance. In this paper, we consider a three-agent system composed of a Mars rover, copter, and orbiter. The objective is to provide good localization to the rover by selecting an optimal path that minimizes the localization uncertainty accumulated during the rover's traverse. To achieve this goal, we quantify localizability as a goodness measure associated with the map, and conduct a joint-space search over the rover's path and the copter's perceptual actions given prior information from the orbiter. We jointly address where to map by the copter and where to drive by the rover using the proposed iterative copter-rover path planner. We conducted numerical simulations using the map of the Mars 2020 landing site to demonstrate the effectiveness of the proposed planner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 192,019 |
2203.02667 | Underwater and Air-Water Wireless Communication: State-of-the-art, Channel Characteristics, Security, and Open Problems | We present a first detailed survey on underwater and air-water (A-W) wireless communication networks (WCNs) that mainly focuses on the security challenges and the countermeasures proposed to date. For clarity of exposition, this survey paper is mainly divided into two parts. The first part of the paper focuses on the state-of-the-art underwater and A-W WCNs, whereby we outline the benefits and drawbacks of the four promising underwater and A-W candidate technologies: radio frequency (RF), acoustic, optical, and magnetic induction (MI), along with their channel characteristics. To this end, we also describe the indirect (relay-aided) and direct mechanisms for the A-W WCNs along with their channel characteristics. This sets the stage for the second part of the paper, whereby we provide a thorough comparative discussion of a vast set of works that have reported security breaches (as well as viable countermeasures) for many diverse configurations of the underwater and A-W WCNs. Specifically, we provide a detailed literature review of the various kinds of active and passive attacks which hamper the confidentiality, integrity, authentication, and availability of both underwater and A-W WCNs. Finally, we highlight some research gaps in the open literature and identify some security-related open problems for future work. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | true | 283,820 |
2207.07945 | Stochastic Attribute Modeling for Face Super-Resolution | When a high-resolution (HR) image is degraded into a low-resolution (LR) image, the image loses some of the existing information. Consequently, multiple HR images can correspond to the LR image. Most of the existing methods do not consider the uncertainty caused by the stochastic attribute, which can only be probabilistically inferred. Therefore, the predicted HR images are often blurry because the network tries to reflect all possibilities in a single output image. To overcome this limitation, this paper proposes a novel face super-resolution (SR) scheme that takes the uncertainty into account through stochastic modeling. Specifically, the information in LR images is separately encoded into deterministic and stochastic attributes. Furthermore, an Input Conditional Attribute Predictor is proposed and separately trained to predict the partially alive stochastic attributes from only the LR images. Extensive evaluation shows that the proposed method successfully reduces the uncertainty in the learning process and outperforms the existing state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 308,390 |
2405.18581 | Unleashing the Potential of Text-attributed Graphs: Automatic Relation Decomposition via Large Language Models | Recent advancements in text-attributed graphs (TAGs) have significantly improved the quality of node features by using the textual modeling capabilities of language models. Despite this success, utilizing text attributes to enhance the predefined graph structure remains largely unexplored. Our extensive analysis reveals that conventional edges on TAGs, treated as a single relation (e.g., hyperlinks) in previous literature, actually encompass mixed semantics (e.g., "advised by" and "participates in"). This simplification hinders the representation learning process of Graph Neural Networks (GNNs) on downstream tasks, even when integrated with advanced node features. In contrast, we discover that decomposing these edges into distinct semantic relations significantly enhances the performance of GNNs. Despite this, manually identifying and labeling edges with their corresponding semantic relations is labor-intensive, often requiring domain expertise. To this end, we introduce RoSE (Relation-oriented Semantic Edge-decomposition), a novel framework that leverages the capability of Large Language Models (LLMs) to decompose the graph structure by analyzing raw text attributes in a fully automated manner. RoSE operates in two stages: (1) identifying meaningful relations using an LLM-based generator and discriminator, and (2) categorizing each edge into corresponding relations by analyzing textual contents associated with connected nodes via an LLM-based decomposer. Extensive experiments demonstrate that our model-agnostic framework significantly enhances node classification performance across various datasets, with improvements of up to 16% on the Wisconsin dataset. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 458,484 |
0812.2879 | Ontology Assisted Query Reformulation Using Semantic and Assertion Capabilities of OWL-DL Ontologies | End users of recent biomedical information systems are often unaware of the storage structure and access mechanisms of the underlying data sources and can require simplified mechanisms for writing domain specific complex queries. This research aims to assist users and their applications in formulating queries without requiring complete knowledge of the information structure of underlying data sources. To achieve this, query reformulation techniques and algorithms have been developed that can interpret ontology-based search criteria and associated domain knowledge in order to reformulate a relational query. These query reformulation algorithms exploit the semantic relationships and assertion capabilities of OWL-DL based domain ontologies for query reformulation. In this paper, this approach is applied to the integrated database schema of the EU funded Health-e-Child (HeC) project with the aim of providing ontology assisted query reformulation techniques to simplify the global access that is needed to millions of medical records across the UK and Europe. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 2,803 |
1410.4966 | The Visualization of Change in Word Meaning over Time using Temporal Word Embeddings | We describe a visualization tool that can be used to view the change in meaning of words over time. The tool makes use of existing (static) word embedding datasets together with a timestamped $n$-gram corpus to create {\em temporal} word embeddings. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 36,852 |
2110.15122 | CAFE: Catastrophic Data Leakage in Vertical Federated Learning | Recent studies show that private training data can be leaked through the gradient sharing mechanism deployed in distributed machine learning systems, such as federated learning (FL). Increasing batch size to complicate data recovery is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack with theoretical justification to efficiently recover batch data from the shared aggregated gradients. We name our proposed method catastrophic data leakage in vertical federated learning (CAFE). Compared to existing data leakage attacks, our extensive experimental results on vertical FL settings demonstrate the effectiveness of CAFE in performing large-batch data leakage attacks with improved data recovery quality. We also propose a practical countermeasure to mitigate CAFE. Our results suggest that private data participating in standard FL, especially in the vertical case, have a high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in those learning settings. The code of our work is available at https://github.com/DeRafael/CAFE. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,767 |
1907.00352 | Instability of social network dynamics with stubborn links | This paper studies signed networks in the presence of stubborn links, based on structural balance theory. Each agent in the network has a mixture of positive and negative links, representing friendly and antagonistic interactions, together with a degree of stubbornness about those interactions. Structural balance theory affirms that in signed social networks with simultaneous friendly/hostile interactions, there is a general tendency to evolve over time so as to reduce tensions. From this perspective, individuals iteratively invert their own sentiments to reduce the felt tensions induced by imbalance. In this paper, we investigate the consequences of the agents' stubbornness on their interactions. We define stubbornness as an extreme antagonistic interaction that is resistant to change. We investigate whether the presence of stubborn links affects the balance state of the network and whether the degree of balance in a signed network depends on the location of stubborn links. Our results show that a poorly balanced configuration consists of multiple antagonistic groups. Both analytical and simulation results demonstrate that the global level of balance of the network is influenced more by the locations of stubborn links in the resulting network topology than by the fraction of stubborn links. This means that even with a large fraction of stubborn links the network would evolve towards a balanced state. On the other hand, if a small fraction of stubborn links are clustered in five stubborn communities, the network evolves to an unbalanced state. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 137,018 |
2310.11377 | Faster Algorithms for Generalized Mean Densest Subgraph Problem | The densest subgraph of a large graph usually refers to some subgraph with the highest average degree, which has been extended to the family of $p$-means dense subgraph objectives by~\citet{veldt2021generalized}. The $p$-mean densest subgraph problem seeks a subgraph with the highest average $p$-th-power degree, whereas the standard densest subgraph problem seeks a subgraph with simply the highest average degree. It was shown that the standard peeling algorithm can perform arbitrarily poorly on the generalized objective when $p>1$, but its behavior was uncertain when $0<p<1$. In this paper, we are the first to show that a standard peeling algorithm can still yield a $2^{1/p}$-approximation for the case $0<p<1$. \citet{veldt2021generalized} proposed a generalized peeling algorithm (GENPEEL), which for $p \geq 1$ has an approximation guarantee ratio of $(p+1)^{1/p}$ and time complexity $O(mn)$, where $m$ and $n$ denote the number of edges and nodes in the graph, respectively. In terms of algorithmic contributions, we propose a new and faster generalized peeling algorithm (called GENPEEL++ in this paper), which for $p \in [1, +\infty)$ has an approximation guarantee ratio of $(2(p+1))^{1/p}$ and time complexity $O(m \log n)$. This approximation ratio converges to 1 as $p \rightarrow \infty$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 400,615 |
2304.01081 | FMGNN: Fused Manifold Graph Neural Network | Graph representation learning has been widely studied and has demonstrated effectiveness in various graph tasks. Most existing works embed graph data in Euclidean space, while recent works extend the embedding models to hyperbolic or spherical spaces to achieve better performance on graphs with complex structures, such as hierarchical or ring structures. Fusing embeddings from different manifolds can further take advantage of the embedding capabilities over different graph structures. However, existing embedding fusion methods mostly focus on concatenating or summing up the output embeddings, without considering interacting and aligning the embeddings of the same vertices on different manifolds, which can lead to distortion and imprecision in the final fusion results. Besides, it is also challenging to fuse the embeddings of the same vertices from different coordinate systems. In the face of these challenges, we propose the Fused Manifold Graph Neural Network (FMGNN), a novel GNN architecture that embeds graphs into different Riemannian manifolds with interaction and alignment among these manifolds during training, and fuses the vertex embeddings through the distances on different manifolds between vertices and selected landmarks (geometric coresets). Our experiments demonstrate that FMGNN yields superior performance over strong baselines on the benchmarks of node classification and link prediction tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 355,926 |
2205.12331 | Certified Robustness Against Natural Language Attacks by Causal Intervention | Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples. This paper follows a causal perspective to look into the adversarial vulnerability and proposes Causal Intervention by Semantic Smoothing (CISS), a novel framework towards robustness against natural language attacks. Instead of merely fitting observational data, CISS learns causal effects p(y|do(x)) by smoothing in the latent semantic space to make robust predictions, which scales to deep architectures and avoids tedious construction of noise customized for specific attacks. CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms. For example, on YELP, CISS surpasses the runner-up by 6.7% in terms of certified robustness against word substitutions, and achieves 79.4% empirical robustness when syntactic attacks are integrated. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 298,487 |
1801.05944 | PTB-TIR: A Thermal Infrared Pedestrian Tracking Benchmark | Thermal infrared (TIR) pedestrian tracking is one of the important components among numerous applications of computer vision, and it has a major advantage: it can track pedestrians in total darkness. The ability to evaluate TIR pedestrian trackers fairly, on a benchmark dataset, is significant for the development of this field. However, no such benchmark dataset exists. In this paper, we develop a TIR pedestrian tracking dataset for TIR pedestrian tracker evaluation. The dataset includes 60 thermal sequences with manual annotations. Each sequence has nine attribute labels for attribute-based evaluation. In addition to the dataset, we carry out large-scale evaluation experiments on our benchmark dataset using nine publicly available trackers. The experimental results help us understand the strengths and weaknesses of these trackers. In addition, in order to gain more insight into the TIR pedestrian tracker, we divide its functions into three components: feature extractor, motion model, and observation model. Then, we conduct three comparison experiments on our benchmark dataset to validate how each component affects the tracker's performance. The findings of these experiments provide some guidelines for future research. The dataset and evaluation toolkit can be downloaded at {https://github.com/QiaoLiuHit/PTB-TIR_Evaluation_toolkit}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 88,539 |
1901.06904 | Learning sound representations using trainable COPE feature extractors | Sound analysis research has mainly been focused on speech and music processing. The deployed methodologies are not suitable for analysis of sounds with varying background noise, in many cases with very low signal-to-noise ratio (SNR). In this paper, we present a method for the detection of patterns of interest in audio signals. We propose novel trainable feature extractors, which we call COPE (Combination of Peaks of Energy). The structure of a COPE feature extractor is determined using a single prototype sound pattern in an automatic configuration process, which is a type of representation learning. We construct a set of COPE feature extractors, configured on a number of training patterns. Then we take their responses to build feature vectors that we use in combination with a classifier to detect and classify patterns of interest in audio signals. We carried out experiments on four public data sets: MIVIA audio events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that we achieved (recognition rate equal to 91.71% on the MIVIA audio events, 94% on the MIVIA road events, 81.25% on the ESC-10 and 94.27% on the TU Dortmund) demonstrate the effectiveness of the proposed method and are higher than the ones obtained by other existing approaches. The COPE feature extractors have high robustness to variations of SNR. Real-time performance is achieved even when the value of a large number of features is computed. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 119,112 |
2001.01668 | Secret key authentication capacity region, Part I: average authentication rate | This paper investigates the secret key authentication capacity region. Specifically, the focus is on a model where a source must transmit information over an adversary-controlled channel where the adversary, prior to the source's transmission, decides whether or not to replace the destination's observation with an arbitrary one of their choosing (done in hopes of having the destination accept a false message). To combat the adversary, the source and destination share a secret key which they may use to guarantee authenticated communications. The secret key authentication capacity region here is then defined as the region of jointly achievable message rate, authentication rate, and key consumption rate (i.e., how many bits of secret key are needed). This is the first of a two-part study, with the parts differing in how the authentication rate is measured. In this first study the authentication rate is measured by the traditional metric of the maximum expected probability of false authentication. For this metric, we provide an inner bound which improves on those existing in the literature. This is achieved by adopting and merging different classical techniques in novel ways. Within these classical techniques, one technique derives authentication capability directly from the noisy communications channel, and the other derives its authentication capability directly from obscuring the source. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 159,542 |
2207.07106 | Benchmarking Omni-Vision Representation through the Lens of Visual Realms | Though impressive performance has been achieved in specific visual realms (e.g. faces, dogs, and places), an omni-vision representation generalizing to many natural visual domains is highly desirable. But existing benchmarks are biased and inefficient for evaluating the omni-vision representation -- these benchmarks either only include several specific realms, or cover most realms at the expense of subsuming numerous datasets that have extensive realm overlapping. In this paper, we propose the Omni-Realm Benchmark (OmniBenchmark). It includes 21 realm-wise datasets with 7,372 concepts and 1,074,346 images. Without semantic overlapping, these datasets cover most visual realms comprehensively and meanwhile efficiently. In addition, we propose a new supervised contrastive learning framework, namely Relational Contrastive learning (ReCo), for a better omni-vision representation. Beyond pulling two instances from the same concept closer -- the typical supervised contrastive learning framework -- ReCo also pulls two instances from the same semantic realm closer, encoding the semantic relation between concepts and facilitating omni-vision representation learning. We benchmark ReCo and other advances in omni-vision representation studies that differ in architectures (from CNNs to transformers) and in learning paradigms (from supervised learning to self-supervised learning) on OmniBenchmark. We illustrate the superiority of ReCo over other supervised contrastive learning methods and reveal multiple practical observations to facilitate future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 308,108 |
2111.12128 | On the Unreasonable Effectiveness of Feature propagation in Learning on Graphs with Missing Node Features | While Graph Neural Networks (GNNs) have recently become the de facto standard for modeling relational data, they impose a strong assumption on the availability of the node or edge features of the graph. In many real-world applications, however, features are only partially available; for example, in social networks, age and gender are available only for a small subset of users. We present a general approach for handling missing features in graph machine learning applications that is based on minimization of the Dirichlet energy and leads to a diffusion-type differential equation on the graph. The discretization of this equation produces a simple, fast and scalable algorithm which we call Feature Propagation. We experimentally show that the proposed approach outperforms previous methods on seven common node-classification benchmarks and can withstand surprisingly high rates of missing features: on average we observe only around 4% relative accuracy drop when 99% of the features are missing. Moreover, it takes only 10 seconds to run on a graph with $\sim$2.5M nodes and $\sim$123M edges on a single GPU. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 267,871
1210.2704 | On the Capacity of the One-Bit Deletion and Duplication Channel | The one-bit deletion and duplication channel is investigated. An input to this channel consists of a block of bits which experiences either a deletion, or a duplication, or remains unchanged. For this channel a capacity expression is obtained in a certain asymptotic regime where the deletion and duplication probabilities tend to zero. As a corollary, we obtain an asymptotic expression for the capacity of the segmented deletion and duplication channel where the input now consists of several blocks and each block independently experiences either a deletion, or a duplication, or remains unchanged. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 19,036 |
2102.04903 | FeedRec: News Feed Recommendation with Various User Feedbacks | Accurate user interest modeling is important for news recommendation. Most existing methods for news recommendation rely on implicit feedbacks like click for inferring user interests and model training. However, click behaviors usually contain heavy noise, and cannot help infer complicated user interest such as dislike. Besides, the feed recommendation models trained solely on click behaviors cannot optimize other objectives such as user engagement. In this paper, we present a news feed recommendation method that can exploit various kinds of user feedbacks to enhance both user interest modeling and model training. We propose a unified user modeling framework to incorporate various explicit and implicit user feedbacks to infer both positive and negative user interests. In addition, we propose a strong-to-weak attention network that uses the representations of stronger feedbacks to distill positive and negative user interests from implicit weak feedbacks for accurate user interest modeling. Besides, we propose a multi-feedback model training framework to learn an engagement-aware feed recommendation model. Extensive experiments on a real-world dataset show that our approach can effectively improve the model performance in terms of both news clicks and user engagement. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 219,261 |
2307.06835 | The generic crystallographic phase retrieval problem | In this paper we consider the problem of recovering a signal $x \in \mathbb{R}^N$ from its power spectrum assuming that the signal is sparse with respect to a generic basis for $\mathbb{R}^N$. Our main result is that if the sparsity level is at most $\sim\! N/2$ in this basis then the generic sparse vector is uniquely determined up to sign from its power spectrum. We also prove that if the sparsity level is $\sim\! N/4$ then every sparse vector is determined up to sign from its power spectrum. Analogous results are also obtained for the power spectrum of a vector in $\mathbb{C}^N$ which extend earlier results of Wang and Xu \cite{arXiv:1310.0873}. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 379,189 |
1904.04854 | 3D Object Instance Recognition and Pose Estimation Using Triplet Loss with Dynamic Margin | In this paper, we address the problem of 3D object instance recognition and pose estimation of localized objects in cluttered environments using convolutional neural networks. Inspired by the descriptor learning approach of Wohlhart et al., we propose a method that introduces the dynamic margin in the manifold learning triplet loss function. Such a loss function is designed to map images of different objects under different poses to a lower-dimensional, similarity-preserving descriptor space on which efficient nearest neighbor search algorithms can be applied. Introducing the dynamic margin allows for faster training times and better accuracy of the resulting low-dimensional manifolds. Furthermore, we contribute the following: adding in-plane rotations (ignored by the baseline method) to the training, proposing new background noise types that help to better mimic realistic scenarios and improve accuracy with respect to clutter, adding surface normals as another powerful image modality representing an object surface leading to better performance than merely depth, and finally implementing an efficient online batch generation that allows for better variability during the training phase. We perform an exhaustive evaluation to demonstrate the effects of our contributions. Additionally, we assess the performance of the algorithm on the large BigBIRD dataset to demonstrate good scalability properties of the pipeline with respect to the number of models. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 127,140
cs/0307031 | Automatic Classification using Self-Organising Neural Networks in Astrophysical Experiments | Self-Organising Maps (SOMs) are effective tools in classification problems, and in recent years the even more powerful Dynamic Growing Neural Networks, a variant of SOMs, have been developed. Automatic Classification (also called clustering) is an important and difficult problem in many Astrophysical experiments, for instance, Gamma Ray Burst classification, or gamma-hadron separation. After a brief introduction to classification problem, we discuss Self-Organising Maps in section 2. Section 3 discusses with various models of growing neural networks and finally in section 4 we discuss the research perspectives in growing neural networks for efficient classification in astrophysical problems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 537,920
2011.08073 | Analyzing Sustainability Reports Using Natural Language Processing | Climate change is a far-reaching, global phenomenon that will impact many aspects of our society, including the global stock market \cite{dietz2016climate}. In recent years, companies have increasingly been aiming to both mitigate their environmental impact and adapt to the changing climate context. This is reported via increasingly exhaustive reports, which cover many types of climate risks and exposures under the umbrella of Environmental, Social, and Governance (ESG). However, given this abundance of data, sustainability analysts are obliged to comb through hundreds of pages of reports in order to find relevant information. We leveraged recent progress in Natural Language Processing (NLP) to create a custom model, ClimateQA, which allows the analysis of financial reports in order to identify climate-relevant sections based on a question answering approach. We present this tool and the methodology that we used to develop it in the present article. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 206,771 |
2404.00618 | A Multi-Branched Radial Basis Network Approach to Predicting Complex Chaotic Behaviours | In this study, we propose a multi branched network approach to predict the dynamics of a physics attractor characterized by intricate and chaotic behavior. We introduce a unique neural network architecture comprised of Radial Basis Function (RBF) layers combined with an attention mechanism designed to effectively capture nonlinear inter-dependencies inherent in the attractor's temporal evolution. Our results demonstrate successful prediction of the attractor's trajectory across 100 predictions made using a real-world dataset of 36,700 time-series observations encompassing approximately 28 minutes of activity. To further illustrate the performance of our proposed technique, we provide comprehensive visualizations depicting the attractor's original and predicted behaviors alongside quantitative measures comparing observed versus estimated outcomes. Overall, this work showcases the potential of advanced machine learning algorithms in elucidating hidden structures in complex physical systems while offering practical applications in various domains requiring accurate short-term forecasting capabilities. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 443,027
2209.14085 | PTSD in the Wild: A Video Database for Studying Post-Traumatic Stress Disorder Recognition in Unconstrained Environments | POST-traumatic stress disorder (PTSD) is a chronic and debilitating mental condition that is developed in response to catastrophic life events, such as military combat, sexual assault, and natural disasters. PTSD is characterized by flashbacks of past traumatic events, intrusive thoughts, nightmares, hypervigilance, and sleep disturbance, all of which affect a person's life and lead to considerable social, occupational, and interpersonal dysfunction. The diagnosis of PTSD is done by medical professionals using self-assessment questionnaire of PTSD symptoms as defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). In this paper, and for the first time, we collected, annotated, and prepared for public distribution a new video database for automatic PTSD diagnosis, called PTSD in the wild dataset. The database exhibits "natural" and big variability in acquisition conditions with different pose, facial expression, lighting, focus, resolution, age, gender, race, occlusions and background. In addition to describing the details of the dataset collection, we provide a benchmark for evaluating computer vision and machine learning based approaches on PTSD in the wild dataset. In addition, we propose and we evaluate a deep learning based approach for PTSD detection in respect to the given benchmark. The proposed approach shows very promising results. Interested researcher can download a copy of PTSD-in-the wild dataset from: http://www.lissi.fr/PTSD-Dataset/ | true | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 320,134
2010.11119 | DuoRAT: Towards Simpler Text-to-SQL Models | Recent neural text-to-SQL models can effectively translate natural language questions to corresponding SQL queries on unseen databases. Working mostly on the Spider dataset, researchers have proposed increasingly sophisticated solutions to the problem. Contrary to this trend, in this paper we focus on simplifications. We begin by building DuoRAT, a re-implementation of the state-of-the-art RAT-SQL model that unlike RAT-SQL is using only relation-aware or vanilla transformers as the building blocks. We perform several ablation experiments using DuoRAT as the baseline model. Our experiments confirm the usefulness of some techniques and point out the redundancy of others, including structural SQL features and features that link the question with the schema. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 202,128 |
2306.01412 | Matrix Inference in Growing Rank Regimes | The inference of a large symmetric signal-matrix $\mathbf{S} \in \mathbb{R}^{N\times N}$ corrupted by additive Gaussian noise, is considered for two regimes of growth of the rank $M$ as a function of $N$. For sub-linear ranks $M=\Theta(N^\alpha)$ with $\alpha\in(0,1)$ the mutual information and minimum mean-square error (MMSE) are derived for two classes of signal-matrices: (a) $\mathbf{S}=\mathbf{X}\mathbf{X}^\intercal$ with entries of $\mathbf{X}\in\mathbb{R}^{N\times M}$ independent identically distributed; (b) $\mathbf{S}$ sampled from a rotationally invariant distribution. Surprisingly, the formulas match the rank-one case. Two efficient algorithms are explored and conjectured to saturate the MMSE when no statistical-to-computational gap is present: (1) Decimation Approximate Message Passing; (2) a spectral algorithm based on a Rotation Invariant Estimator. For linear ranks $M=\Theta(N)$ the mutual information is rigorously derived for signal-matrices from a rotationally invariant distribution. Close connections with scalar inference in free probability are uncovered, which allow to deduce a simple formula for the MMSE as an integral involving the limiting spectral measure of the data matrix only. An interesting issue is whether the known information theoretic phase transitions for rank-one, and hence also sub-linear-rank, still persist in linear-rank. Our analysis suggests that only a smoothed-out trace of the transitions persists. Furthermore, the change of behavior between low and truly high-rank regimes only happens at the linear scale $\alpha=1$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 370,442 |
1804.02576 | POL-LWIR Vehicle Detection: Convolutional Neural Networks Meet Polarised Infrared Sensors | For vehicle autonomy, driver assistance and situational awareness, it is necessary to operate at day and night, and in all weather conditions. In particular, long wave infrared (LWIR) sensors that receive predominantly emitted radiation have the capability to operate at night as well as during the day. In this work, we employ a polarised LWIR (POL-LWIR) camera to acquire data from a mobile vehicle, to compare and contrast four different convolutional neural network (CNN) configurations to detect other vehicles in video sequences. We evaluate two distinct and promising approaches, two-stage detection (Faster-RCNN) and one-stage detection (SSD), in four different configurations. We also employ two different image decompositions: the first based on the polarisation ellipse and the second on the Stokes parameters themselves. To evaluate our approach, the experimental trials were quantified by mean average precision (mAP) and processing time, showing a clear trade-off between the two factors. For example, the best mAP result of 80.94% was achieved using Faster-RCNN, but at a frame rate of 6.4 fps. In contrast, MobileNet SSD achieved only 64.51% mAP, but at 53.4 fps. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,435
2204.08609 | "Flux+Mutability": A Conditional Generative Approach to One-Class Classification and Anomaly Detection | Anomaly Detection is becoming increasingly popular within the experimental physics community. At experiments such as the Large Hadron Collider, anomaly detection is at the forefront of finding new physics beyond the Standard Model. This paper details the implementation of a novel Machine Learning architecture, called Flux+Mutability, which combines cutting-edge conditional generative models with clustering algorithms. In the `flux' stage we learn the distribution of a reference class. The `mutability' stage at inference addresses if data significantly deviates from the reference class. We demonstrate the validity of our approach and its connection to multiple problems spanning from one-class classification to anomaly detection. In particular, we apply our method to the isolation of neutral showers in an electromagnetic calorimeter and show its performance in detecting anomalous dijets events from standard QCD background. This approach limits assumptions on the reference sample and remains agnostic to the complementary class of objects of a given problem. We describe the possibility of dynamically generating a reference population and defining selection criteria via quantile cuts. Remarkably this flexible architecture can be deployed for a wide range of problems, and applications like multi-class classification or data quality control are left for further exploration. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 292,146
2112.12575 | A Modeling Framework for Reliability of Erasure Codes in SSD Arrays | To help reliability of SSD arrays, Redundant Array of Independent Disks (RAID) are commonly employed. However, the conventional reliability models of HDD RAID cannot be applied to SSD arrays, as the nature of failures in SSDs are different from HDDs. Previous studies on the reliability of SSD arrays are based on the deprecated SSD failure data, and only focus on limited failure types, device failures, and page failures caused by the bit errors, while recent field studies have reported other failure types including bad blocks and bad chips, and a high correlation between failures. In this paper, we explore the reliability of SSD arrays using field storage traces and real-system implementation of conventional and emerging erasure codes. The reliability is evaluated by statistical fault injections that post-process the usage logs from the real-system implementation, while the fault/failure attributes are obtained from field data. As a case study, we examine conventional and emerging erasure codes in terms of both reliability and performance using Linux MD RAID and commercial SSDs. Our analysis shows that a) emerging erasure codes fail to replace RAID6 in terms of reliability, b) row-wise erasure codes are the most efficient choices for contemporary SSD devices, and c) previous models overestimate the SSD array reliability by up to six orders of magnitude, as they focus on the coincidence of bad pages and bad chips that roots the minority of Data Loss (DL) in SSD arrays. Our experiments show that the combination of bad chips with bad blocks is the major source of DL in RAID5 and emerging codes (contributing more than 54% and 90% of DL in RAID5 and emerging codes, respectively), while RAID6 remains robust under these failure combinations. Finally, the fault injection results show that SSD array reliability, as well as the failure breakdown is significantly correlated with SSD type. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 273,003
2410.15653 | Opportunities and Challenges of Generative-AI in Finance | Gen-AI techniques are able to improve understanding of context and nuances in language modeling, translation between languages, handle large volumes of data, provide fast, low-latency responses and can be fine-tuned for various tasks and domains. In this manuscript, we present a comprehensive overview of the applications of Gen-AI techniques in the finance domain. In particular, we present the opportunities and challenges associated with the usage of Gen-AI techniques. We also illustrate the various methodologies which can be used to train Gen-AI techniques and present the various application areas of Gen-AI technologies in the finance ecosystem. To the best of our knowledge, this work represents the most comprehensive summarization of Gen-AI techniques within the financial domain. The analysis is designed for a deep overview of areas marked for substantial advancement while simultaneously pin-point those warranting future prioritization. We also hope that this work would serve as a conduit between finance and other domains, thus fostering the cross-pollination of innovative concepts and practices. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 500,659 |
2001.11180 | Multiple Object Tracking by Flowing and Fusing | Most of Multiple Object Tracking (MOT) approaches compute individual target features for two subtasks: estimating target-wise motions and conducting pair-wise Re-Identification (Re-ID). Because of the indefinite number of targets among video frames, both subtasks are very difficult to scale up efficiently in end-to-end Deep Neural Networks (DNNs). In this paper, we design an end-to-end DNN tracking approach, Flow-Fuse-Tracker (FFT), that addresses the above issues with two efficient techniques: target flowing and target fusing. Specifically, in target flowing, a FlowTracker DNN module learns the indefinite number of target-wise motions jointly from pixel-level optical flows. In target fusing, a FuseTracker DNN module refines and fuses targets proposed by FlowTracker and frame-wise object detection, instead of trusting either of the two inaccurate sources of target proposal. Because FlowTracker can explore complex target-wise motion patterns and FuseTracker can refine and fuse targets from FlowTracker and detectors, our approach can achieve the state-of-the-art results on several MOT benchmarks. As an online MOT approach, FFT produced the top MOTA of 46.3 on the 2DMOT15, 56.5 on the MOT16, and 56.5 on the MOT17 tracking benchmarks, surpassing all the online and offline methods in existing publications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 162,004 |
2009.01632 | Optimal Wireless Streaming of Multi-Quality 360 VR Video by Exploiting Natural, Relative Smoothness-enabled and Transcoding-enabled Multicast Opportunities | In this paper, we would like to investigate optimal wireless streaming of a multi-quality tiled 360 virtual reality (VR) video from a server to multiple users. To this end, we propose to maximally exploit potential multicast opportunities by effectively utilizing characteristics of multi-quality tiled 360 VR videos and computation resources at the users' side. In particular, we consider two requirements for quality variation in one field-of-view (FoV), i.e., the absolute smoothness requirement and the relative smoothness requirement, and two video playback modes, i.e., the direct-playback mode (without user transcoding) and transcode-playback mode (with user transcoding). Besides natural multicast opportunities, we introduce two new types of multicast opportunities, namely, relative smoothness-enabled multicast opportunities, which allow flexible tradeoff between viewing quality and communications resource consumption, and transcoding-enabled multicast opportunities, which allow flexible tradeoff between computation and communications resource consumptions. Then, we establish a novel mathematical model that reflects the impacts of natural, relative smoothness-enabled and transcoding-enabled multicast opportunities on the average transmission energy and transcoding energy. Based on this model, we optimize the transmission resource allocation, playback quality level selection and transmission quality level selection to minimize the energy consumption in the four cases with different requirements for quality variation and video playback modes. By comparing the optimal values in the four cases, we prove that the energy consumption reduces when more multicast opportunities can be utilized. Finally, numerical results show substantial gains of the proposed solutions over existing schemes, and demonstrate the importance of effective exploitation of the three types of multicast opportunities. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 194,359
2309.01335 | In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems | Recommender systems are typically biased toward a small group of users, leading to severe unfairness in recommendation performance, i.e., User-Oriented Fairness (UOF) issue. The existing research on UOF is limited and fails to deal with the root cause of the UOF issue: the learning process between advantaged and disadvantaged users is unfair. To tackle this issue, we propose an In-processing User Constrained Dominant Sets (In-UCDS) framework, which is a general framework that can be applied to any backbone recommendation model to achieve user-oriented fairness. We split In-UCDS into two stages, i.e., the UCDS modeling stage and the in-processing training stage. In the UCDS modeling stage, for each disadvantaged user, we extract a constrained dominant set (a user cluster) containing some advantaged users that are similar to it. In the in-processing training stage, we move the representations of disadvantaged users closer to their corresponding cluster by calculating a fairness loss. By combining the fairness loss with the original backbone model loss, we address the UOF issue and maintain the overall recommendation performance simultaneously. Comprehensive experiments on three real-world datasets demonstrate that In-UCDS outperforms the state-of-the-art methods, leading to a fairer model with better overall recommendation performance. | false | false | false | false | false | true | true | false | false | false | false | false | false | true | false | false | false | false | 389,644
2311.02909 | Distributed Matrix-Based Sampling for Graph Neural Network Training | Graph Neural Networks (GNNs) offer a compact and computationally efficient way to learn embeddings and classifications on graph data. GNN models are frequently large, making distributed minibatch training necessary. The primary contribution of this paper is new methods for reducing communication in the sampling step for distributed GNN training. Here, we propose a matrix-based bulk sampling approach that expresses sampling as a sparse matrix multiplication (SpGEMM) and samples multiple minibatches at once. When the input graph topology does not fit on a single device, our method distributes the graph and use communication-avoiding SpGEMM algorithms to scale GNN minibatch sampling, enabling GNN training on much larger graphs than those that can fit into a single device memory. When the input graph topology (but not the embeddings) fits in the memory of one GPU, our approach (1) performs sampling without communication, (2) amortizes the overheads of sampling a minibatch, and (3) can represent multiple sampling algorithms by simply using different matrix constructions. In addition to new methods for sampling, we introduce a pipeline that uses our matrix-based bulk sampling approach to provide end-to-end training results. We provide experimental results on the largest Open Graph Benchmark (OGB) datasets on $128$ GPUs, and show that our pipeline is $2.5\times$ faster than Quiver (a distributed extension to PyTorch-Geometric) on a $3$-layer GraphSAGE network. On datasets outside of OGB, we show a $8.46\times$ speedup on $128$ GPUs in per-epoch time. Finally, we show scaling when the graph is distributed across GPUs and scaling for both node-wise and layer-wise sampling algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 405,639 |
1505.06850 | Implementing feedback in creative systems: A workshop approach | One particular challenge in AI is the computational modelling and simulation of creativity. Feedback and learning from experience are key aspects of the creative process. Here we investigate how we could implement feedback in creative systems using a social model. From the field of creative writing we borrow the concept of a Writers Workshop as a model for learning through feedback. The Writers Workshop encourages examination, discussion and debates of a piece of creative work using a prescribed format of activities. We propose a computational model of the Writers Workshop as a roadmap for incorporation of feedback in artificial creativity systems. We argue that the Writers Workshop setting describes the anatomy of the creative process. We support our claim with a case study that describes how to implement the Writers Workshop model in a computational creativity system. We present this work using patterns other people can follow to implement similar designs in their own systems. We conclude by discussing the broader relevance of this model to other aspects of AI. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 43,489 |
1908.06588 | How far should self-driving cars see? Effect of observation range on vehicle self-localization | Accuracy and time efficiency are two essential requirements for the self-localization of autonomous vehicles. While the observation range considered for simultaneous localization and mapping (SLAM) has a significant effect on both accuracy and computation time, its effect is not well investigated in the literature. In this paper, we will answer the question: How far should a driverless car observe during self-localization? We introduce a framework to dynamically define the observation range for localization to meet the accuracy requirement for autonomous driving, while keeping the computation time low. To model the effect of scanning range on the localization accuracy for every point on the map, several map factors were employed. The capability of the proposed framework was verified using field data, demonstrating that it is able to improve the average matching time from 142.2 ms to 39.3 ms while keeping the localization accuracy around 8.1 cm. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 142,054
1704.07335 | Development of a Swarm UAV Simulator Integrating Realistic Motion
Control Models For Disaster Operations | Simulation environments for Unmanned Aerial Vehicles (UAVs) can be very useful for prototyping user interfaces and training personnel that will operate UAVs in the real world. The realistic operation of such simulations will only enhance the value of such training. In this paper, we present the integration of a model-based waypoint navigation controller into the Reno Rescue Simulator for the purposes of providing a more realistic user interface in simulated environments. We also present potential uses for such simulations, even for real-world operation of UAVs. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 72,340 |
2204.00653 | Safe Backstepping with Control Barrier Functions | Complex control systems are often described in a layered fashion, represented as higher-order systems where the inputs appear after a chain of integrators. While Control Barrier Functions (CBFs) have proven to be powerful tools for safety-critical controller design of nonlinear systems, their application to higher-order systems adds complexity to the controller synthesis process -- it necessitates dynamically extending the CBF to include higher order terms, which consequently modifies the safe set in complex ways. We propose an alternative approach for addressing safety of higher-order systems through Control Barrier Function Backstepping. Drawing inspiration from the method of Lyapunov backstepping, we provide a constructive framework for synthesizing safety-critical controllers and CBFs for higher-order systems from a top-level dynamics safety specification and controller design. Furthermore, we integrate the proposed method with Lyapunov backstepping, allowing the tasks of stability and safety to be expressed individually but achieved jointly. We demonstrate the efficacy of this approach in simulation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 289,338 |
1911.09976 | Instance Cross Entropy for Deep Metric Learning | Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed. Some supervise the learning process by pairwise or tripletwise similarity constraints while others take advantage of structured similarity information among multiple data points. In this work, we approach deep metric learning from a novel perspective. We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. ICE has three main appealing properties. Firstly, similar to categorical cross entropy (CCE), ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision. Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size. Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated. It rescales samples' gradients to control the differentiation degree over training examples instead of truncating them by sample mining. In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 154,690 |
1403.1437 | Evolution of the digital society reveals balance between viral and mass
media influence | Online social networks (OSNs) enable researchers to study the social universe at a previously unattainable scale. The worldwide impact and the necessity to sustain their rapid growth emphasize the importance to unravel the laws governing their evolution. We present a quantitative two-parameter model which reproduces the entire topological evolution of a quasi-isolated OSN with unprecedented precision from the birth of the network. This allows us to precisely gauge the fundamental macroscopic and microscopic mechanisms involved. Our findings suggest that the coupling between the real pre-existing underlying social structure, a viral spreading mechanism, and mass media influence govern the evolution of OSNs. The empirical validation of our model, on a macroscopic scale, reveals that virality is four to five times stronger than mass media influence and, on a microscopic scale, individuals have a higher subscription probability if invited by weaker social contacts, in agreement with the "strength of weak ties" paradigm. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,393 |
2106.10314 | Differentiable Particle Filtering without Modifying the Forward Pass | Particle filters are not compatible with automatic differentiation due to the presence of discrete resampling steps. While known estimators for the score function, based on Fisher's identity, can be computed using particle filters, up to this point they required manual implementation. In this paper we show that such estimators can be computed using automatic differentiation, after introducing a simple correction to the particle weights. This correction utilizes the stop-gradient operator and does not modify the particle filter operation on the forward pass, while also being cheap and easy to compute. Surprisingly, with the same correction automatic differentiation also produces good estimators for gradients of expectations under the posterior. We can therefore regard our method as a general recipe for making particle filters differentiable. We additionally show that it produces desired estimators for second-order derivatives and how to extend it to further reduce variance at the expense of additional computation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 241,970 |
2107.07787 | Attention-based Vehicle Self-Localization with HD Feature Maps | We present a vehicle self-localization method using point-based deep neural networks. Our approach processes measurements and point features, i.e. landmarks, from a high-definition digital map to infer the vehicle's pose. To learn the best association and incorporate local information between the point sets, we propose an attention mechanism that matches the measurements to the corresponding landmarks. Finally, we use this representation for the point-cloud registration and the subsequent pose regression task. Furthermore, we introduce a training simulation framework that artificially generates measurements and landmarks to facilitate the deployment process and reduce the cost of creating extensive datasets from real-world data. We evaluate our method on our dataset, as well as an adapted version of the KITTI odometry dataset, where we achieve superior performance compared to related approaches, and additionally show strong generalization capabilities. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 246,529
2403.01886 | FCDS: Fusing Constituency and Dependency Syntax into Document-Level
Relation Extraction | Document-level Relation Extraction (DocRE) aims to identify relation labels between entities within a single document. It requires handling several sentences and reasoning over them. State-of-the-art DocRE methods use a graph structure to connect entities across the document to capture dependency syntax information. However, this is insufficient to fully exploit the rich syntax information in the document. In this work, we propose to fuse constituency and dependency syntax into DocRE. Our method uses constituency syntax to aggregate whole-sentence information and to select the instructive sentences for the pairs of targets. It exploits the dependency syntax in a graph structure with constituency syntax enhancement and chooses the path between entity pairs based on the dependency graph. The experimental results on datasets from various domains demonstrate the effectiveness of the proposed method. The code is publicly available at this url. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 434,619
2502.04699 | A Meta-learner for Heterogeneous Effects in Difference-in-Differences | We address the problem of estimating heterogeneous treatment effects in panel data, adopting the popular Difference-in-Differences (DiD) framework under the conditional parallel trends assumption. We propose a novel doubly robust meta-learner for the Conditional Average Treatment Effect on the Treated (CATT), reducing the estimation to a convex risk minimization problem involving a set of auxiliary models. Our framework allows for the flexible estimation of the CATT, when conditioning on any subset of variables of interest using generic machine learning. Leveraging Neyman orthogonality, our proposed approach is robust to estimation errors in the auxiliary models. As a generalization to our main result, we develop a meta-learning approach for the estimation of general conditional functionals under covariate shift. We also provide an extension to the instrumented DiD setting with non-compliance. Empirical results demonstrate the superiority of our approach over existing baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 531,285 |
2408.09585 | On the Necessity of World Knowledge for Mitigating Missing Labels in
Extreme Classification | Extreme Classification (XC) aims to map a query to the most relevant documents from a very large document set. XC algorithms used in real-world applications learn this mapping from datasets curated from implicit feedback, such as user clicks. However, these datasets inevitably suffer from missing labels. In this work, we observe that systematic missing labels lead to missing knowledge, which is critical for accurately modelling relevance between queries and documents. We formally show that this absence of knowledge cannot be recovered using existing methods such as propensity weighting and data imputation strategies that solely rely on the training dataset. While LLMs provide an attractive solution to augment the missing knowledge, leveraging them in applications with low latency requirements and large document sets is challenging. To incorporate missing knowledge at scale, we propose SKIM (Scalable Knowledge Infusion for Missing Labels), an algorithm that leverages a combination of small LM and abundant unstructured meta-data to effectively mitigate the missing label problem. We show the efficacy of our method on large-scale public datasets through exhaustive unbiased evaluation ranging from human annotations to simulations inspired from industrial settings. SKIM outperforms existing methods on Recall@100 by more than 10 absolute points. Additionally, SKIM scales to proprietary query-ad retrieval datasets containing 10 million documents, outperforming contemporary methods by 12% in offline evaluation and increased ad click-yield by 1.23% in an online A/B test conducted on a popular search engine. We release our code, prompts, trained XC models and finetuned SLMs at: https://github.com/bicycleman15/skim | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 481,502 |
1612.06176 | An extended Perona-Malik model based on probabilistic models | The Perona-Malik model has been very successful at restoring images from noisy input. In this paper, we reinterpret the Perona-Malik model in the language of Gaussian scale mixtures and derive some extensions of the model. Specifically, we show that the expectation-maximization (EM) algorithm applied to Gaussian scale mixtures leads to the lagged-diffusivity algorithm for computing stationary points of the Perona-Malik diffusion equations. Moreover, we show how mean field approximations to these Gaussian scale mixtures lead to a modification of the lagged-diffusivity algorithm that better captures the uncertainties in the restoration. Since this modification can be hard to compute in practice, we propose relaxations to the mean field objective to make the algorithm computationally feasible. Our numerical experiments show that this modified lagged-diffusivity algorithm often performs better at restoring textured areas and fuzzy edges than the unmodified algorithm. As a second application of the Gaussian scale mixture framework, we show how an efficient sampling procedure can be obtained for the probabilistic model, making the computation of the conditional mean and other expectations algorithmically feasible. Again, the resulting algorithm has a strong resemblance to the lagged-diffusivity algorithm. Finally, we show that a probabilistic version of the Mumford-Shah segmentation model can be obtained in the same framework with a discrete edge-prior. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 65,791
2211.17135 | BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model
From Scratch? | Pretrained transformer models have achieved state-of-the-art results in many tasks and benchmarks recently. Many state-of-the-art Language Models (LMs), however, do not scale well above the threshold of 512 input tokens. In specialized domains though (such as legal, scientific or biomedical), models often need to process very long text (sometimes well above 10000 tokens). Even though many efficient transformers have been proposed (such as Longformer, BigBird or FNet), so far, only very few such efficient models are available for specialized domains. Additionally, since the pretraining process is extremely costly in general - but even more so as the sequence length increases - it is often only in reach of large research labs. One way of making pretraining cheaper is the Replaced Token Detection (RTD) task, by providing more signal during training, since the loss can be computed over all tokens. In this work, we train Longformer models with the efficient RTD task on legal data to showcase that pretraining efficient LMs is possible using much less compute. We evaluate the trained models on challenging summarization tasks requiring the model to summarize long texts to show to what extent the models can achieve good performance on downstream tasks. We find that both the small and base models outperform their baselines on the in-domain BillSum and out-of-domain PubMed tasks in their respective parameter range. We publish our code and models for research purposes. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 333,874 |
1408.4433 | Sequential Recurrence-Based Multidimensional Universal Source Coding of
Lempel-Ziv Type | We define an algorithm that parses multidimensional arrays sequentially into mainly unrepeated but nested multidimensional sub-arrays of increasing size, and show that the resulting sub-block pointer encoder compresses almost every realization of any finite-alphabet ergodic process on $\mathbb{Z}_{\geq0}^d$ to the entropy, in the limit. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 35,456 |
1710.10174 | SGD Learns Over-parameterized Networks that Provably Generalize on
Linearly Separable Data | Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 83,322 |
2410.17186 | DyPNIPP: Predicting Environment Dynamics for RL-based Robust Informative
Path Planning | Informative path planning (IPP) is an important planning paradigm for various real-world robotic applications such as environment monitoring. IPP involves planning a path that can learn an accurate belief of the quantity of interest, while adhering to planning constraints. Traditional IPP methods typically require high computation time during execution, giving rise to reinforcement learning (RL) based IPP methods. However, the existing RL-based methods do not consider spatio-temporal environments, which involve their own challenges due to variations in environment characteristics. In this paper, we propose DyPNIPP, a robust RL-based IPP framework, designed to operate effectively across spatio-temporal environments with varying dynamics. To achieve this, DyPNIPP incorporates domain randomization to train the agent across diverse environments and introduces a dynamics prediction model to capture and adapt the agent's actions to specific environment dynamics. Our extensive experiments in a wildfire environment demonstrate that DyPNIPP outperforms existing RL-based IPP algorithms by significantly improving robustness and performing well across diverse environmental conditions. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 501,346
2309.16834 | Energy Optimal Control of a Harmonic Oscillator with a State Inequality
Constraint | In this article, the optimal control problem for a harmonic oscillator with an inequality constraint is considered. The applied energy of the oscillator during a fixed final time period is used as the performance criterion. The analytical solution for both small and large terminal times is found for a special case where the undriven oscillator system is initially at rest. For other initial states of the harmonic oscillator, the optimal solution is found to have three modes: wait-move, move-wait, and move-wait-move, given a longer terminal time. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 395,518
2408.04586 | Sampling for View Synthesis: From Local Light Field Fusion to Neural
Radiance Fields and Beyond | Capturing and rendering novel views of complex real-world scenes is a long-standing problem in computer graphics and vision, with applications in augmented and virtual reality, immersive experiences and 3D photography. The advent of deep learning has enabled revolutionary advances in this area, classically known as image-based rendering. However, previous approaches require intractably dense view sampling or provide little or no guidance for how users should sample views of a scene to reliably render high-quality novel views. Local light field fusion proposes an algorithm for practical view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image scene representation, then renders novel views by blending adjacent local light fields. Crucially, we extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. We achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views. Subsequent developments have led to new scene representations for deep learning with view synthesis, notably neural radiance fields, but the problem of sparse view synthesis from a small number of images has only grown in importance. We reprise some of the recent results on sparse and even single image view synthesis, while posing the question of whether prescriptive sampling guidelines are feasible for the new generation of image-based rendering algorithms. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 479,435 |
2301.13507 | An Analysis of Classification Approaches for Hit Song Prediction using
Engineered Metadata Features with Lyrics and Audio Features | Hit song prediction, one of the emerging fields in music information retrieval (MIR), remains a considerable challenge. Being able to understand what makes a given song a hit is clearly beneficial to the whole music industry. Previous approaches to hit song prediction have focused on using audio features of a record. This study aims to improve the prediction result of the top 10 hits among Billboard Hot 100 songs using more alternative metadata, including song audio features provided by Spotify, song lyrics, and novel metadata-based features (title topic, popularity continuity and genre class). Five machine learning approaches are applied, including: k-nearest neighbours, Naive Bayes, Random Forest, Logistic Regression and Multilayer Perceptron. Our results show that Random Forest (RF) and Logistic Regression (LR) with all features (including novel features, song audio features and lyrics features) outperforms other models, achieving 89.1% and 87.2% accuracy, and 0.91 and 0.93 AUC, respectively. Our findings also demonstrate the utility of our novel music metadata features, which contributed most to the models' discriminative performance. | false | false | true | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 342,941 |
2303.14811 | Learning Generative Models with Goal-conditioned Reinforcement Learning | We present a novel, alternative framework for learning generative models with goal-conditioned reinforcement learning. We define two agents, a goal conditioned agent (GC-agent) and a supervised agent (S-agent). Given a user-input initial state, the GC-agent learns to reconstruct the training set. In this context, elements in the training set are the goals. During training, the S-agent learns to imitate the GC-agent while remaining agnostic of the goals. At inference we generate new samples with the S-agent. Following a similar route as in variational auto-encoders, we derive an upper bound on the negative log-likelihood that consists of a reconstruction term and a divergence between the GC-agent policy and the (goal-agnostic) S-agent policy. We empirically demonstrate that our method is able to generate diverse and high quality samples in the task of image synthesis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 354,252 |
2209.06458 | Design space exploration of a poultry fillet processing system using
discrete-event simulation | Developments in the poultry processing industry, such as how livestock is raised and how consumers buy meat, make it increasingly difficult to design poultry processing systems that meet evolving standards. More and more iterations of (re)design are required to optimize the product flow in these systems. This paper presents a method for design space exploration of production systems using discrete-event simulation. This method automates most steps of design space exploration: iterating on the design, model construction, performing simulation experiments, and interpreting the simulation results. This greatly reduces the time and effort required to iterate through different designs. A case study is presented which shows that this method can be effective for design space exploration of poultry processing systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 317,415 |
2204.02624 | There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing
Knowledge-grounded Dialogue with Personal Memory | Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient in it. However, previous methods for knowledge selection only concentrate on the relevance between knowledge and dialogue context, ignoring the fact that age, hobby, education and life experience of an interlocutor have a major effect on his or her personal preference over external knowledge. Without taking the personalization issue into account, it is difficult to select the proper knowledge and generate persona-consistent responses. In this work, we introduce personal memory into knowledge selection in KGC to address the personalization issue. We propose a variational method to model the underlying relationship between one's personal memory and his or her selection of knowledge, and devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop so that they can teach each other. Experiment results show that our method outperforms existing KGC methods significantly on both automatic evaluation and human evaluation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 290,020
2412.10338 | XYScanNet: An Interpretable State Space Model for Perceptual Image
Deblurring | Deep state-space models (SSMs), like recent Mamba architectures, are emerging as a promising alternative to CNN and Transformer networks. Existing Mamba-based restoration methods process the visual data by leveraging a flatten-and-scan strategy that converts image patches into a 1D sequence before scanning. However, this scanning paradigm ignores local pixel dependencies and introduces spatial misalignment by positioning distant pixels incorrectly adjacent, which reduces local noise-awareness and degrades image sharpness in low-level vision tasks. To overcome these issues, we propose a novel slice-and-scan strategy that alternates scanning along intra- and inter-slices. We further design a new Vision State Space Module (VSSM) for image deblurring, and tackle the inefficiency challenges of the current Mamba-based vision module. Building upon this, we develop XYScanNet, an SSM architecture integrated with a lightweight feature fusion module for enhanced image deblurring. XYScanNet maintains competitive distortion metrics and significantly improves perceptual performance. Experimental results show that XYScanNet enhances KID by $17\%$ compared to the nearest competitor. Our code will be released soon. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 516,873
1804.04829 | Learning Warped Guidance for Blind Face Restoration | This paper studies the problem of blind face restoration from an unconstrained blurry, noisy, low-resolution, or compressed image (i.e., degraded observation). For better recovery of fine facial details, we modify the problem setting by taking both the degraded observation and a high-quality guided image of the same identity as input to our guided face restoration network (GFRNet). However, the degraded observation and guided image generally are different in pose, illumination and expression, thereby making plain CNNs (e.g., U-Net) fail to recover fine and identity-aware facial details. To tackle this issue, our GFRNet model includes both a warping subnetwork (WarpNet) and a reconstruction subnetwork (RecNet). The WarpNet is introduced to predict flow field for warping the guided image to correct pose and expression (i.e., warped guidance), while the RecNet takes the degraded observation and warped guidance as input to produce the restoration result. Because the ground-truth flow field is unavailable, a landmark loss together with total variation regularization is incorporated to guide the learning of WarpNet. Furthermore, to make the model applicable to blind restoration, our GFRNet is trained on the synthetic data with versatile settings on blur kernel, noise level, downsampling scale factor, and JPEG quality factor. Experiments show that our GFRNet not only performs favorably against the state-of-the-art image and face restoration methods, but also generates visually photo-realistic results on real degraded facial images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,945
1602.01883 | Diagnosis and Repair for Synthesis from Signal Temporal Logic
Specifications | We address the problem of diagnosing and repairing specifications for hybrid systems formalized in signal temporal logic (STL). Our focus is on the setting of automatic synthesis of controllers in a model predictive control (MPC) framework. We build on recent approaches that reduce the controller synthesis problem to solving one or more mixed integer linear programs (MILPs), where infeasibility of a MILP usually indicates unrealizability of the controller synthesis problem. Given an infeasible STL synthesis problem, we present algorithms that provide feedback on the reasons for unrealizability, and suggestions for making it realizable. Our algorithms are sound and complete, i.e., they provide a correct diagnosis, and always terminate with a non-trivial specification that is feasible using the chosen synthesis method, when such a solution exists. We demonstrate the effectiveness of our approach on the synthesis of controllers for various cyber-physical systems, including an autonomous driving application and an aircraft electric power system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 51,753 |
2107.10706 | Distributed Saddle-Point Problems Under Similarity | We study solution methods for (strongly-)convex-(strongly)-concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and meshed (thus decentralized) networks. The local functions at each node are assumed to be similar, due to statistical data similarity or otherwise. We establish lower complexity bounds for a fairly general class of algorithms solving the SPP. We show that a given suboptimality $\epsilon>0$ is achieved over master/workers networks in $\Omega\big(\Delta\cdot \delta/\mu\cdot \log (1/\varepsilon)\big)$ rounds of communications, where $\delta>0$ measures the degree of similarity of the local functions, $\mu$ is their strong convexity constant, and $\Delta$ is the diameter of the network. The lower communication complexity bound over meshed networks reads $\Omega\big(1/{\sqrt{\rho}} \cdot {\delta}/{\mu}\cdot\log (1/\varepsilon)\big)$, where $\rho$ is the (normalized) eigengap of the gossip matrix used for the communication between neighbouring nodes. We then propose algorithms matching the lower bounds over either type of network (up to log-factors). We assess the effectiveness of the proposed algorithms on a robust logistic regression problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 247,378
2308.02011 | Silence Speaks Volumes: Re-weighting Techniques for Under-Represented
Users in Fake News Detection | Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have been a mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as participation inequality. Basically, a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voice, attitude, and interests are not reflected in the online content, making the decision of the current methods predisposed towards the opinion of the active users. So models can mistake the loudest users for the majority. We propose to leverage re-weighting techniques to make the silent majority heard, and in turn, investigate whether the cues from these users can improve the performance of the current models for the downstream task of fake news detection. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 383,445 |
1501.02410 | Matching Theory for Backhaul Management in Small Cell Networks with
mmWave Capabilities | Designing cost-effective and scalable backhaul solutions is one of the main challenges for emerging wireless small cell networks (SCNs). In this regard, millimeter wave (mmW) communication technologies have recently emerged as an attractive solution to realize the vision of a high-speed and reliable wireless small cell backhaul network (SCBN). In this paper, a novel approach is proposed for managing the spectral resources of a heterogeneous SCBN that can exploit simultaneously mmW and conventional frequency bands via carrier aggregation. In particular, a new SCBN model is proposed in which small cell base stations (SCBSs) equipped with broadband fiber backhaul allocate their frequency resources to SCBSs with wireless backhaul, by using aggregated bands. One unique feature of the studied model is that it jointly accounts for both wireless channel characteristics and economic factors during resource allocation. The problem is then formulated as a one-to-many matching game and a distributed algorithm is proposed to find a stable outcome of the game. The convergence of the algorithm is proven and the properties of the resulting matching are studied. Simulation results show that under the constraints of wireless backhauling, the proposed approach achieves substantial performance gains, reaching up to $30 \%$ compared to a conventional best-effort approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 39,179 |
2103.08907 | BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and
Instance Segmentation | Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,025 |
2012.07688 | Improving Adversarial Robustness via Probabilistically Compact Loss with
Logit Constraints | Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various tasks in computer vision. However, recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer from a significant performance drop when predicting them. Many methods have been proposed to improve adversarial robustness (e.g., adversarial training and new loss functions to learn adversarially robust feature representations). Here we offer a unique insight into the predictive behavior of CNNs that they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints which can be used as a drop-in replacement for cross-entropy (CE) loss to improve CNN's adversarial robustness. Specifically, PC loss enlarges the probability gaps between true class and false classes meanwhile the logit constraints prevent the gaps from being melted by a small perturbation. We extensively compare our method with the state-of-the-art using large scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source codes are available from the following url: https://github.com/xinli0928/PC-LC. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 211,547 |
2311.07591 | Identification of Books That are Suitable for Middle School Students
Using Artificial Neural Networks | Reading the right books contributes to children's imagination and brain development, enhances their language and emotional comprehension abilities, and strengthens their relationships with others. Building upon the critical role of reading books in individual development, this paper aims to develop an algorithm that determines the suitability of books for middle school students by analyzing their structural and semantic features. Using the methods described, an algorithm will be created that can be utilized by institutions and individuals responsible for children's education, such as the Ministry of National Education officials and schools. This algorithm will facilitate the selection of books to be taught at the middle school level. With the algorithm, the book selection process for the middle school curriculum can be expedited, and it will serve as a preliminary reference source for those who evaluate books by reading them. In this paper, the Python programming language was employed, utilizing natural language processing methods. Additionally, an artificial neural network (ANN) was trained using the data which had been preprocessed to construct an original dataset. To train this network, suitable books for middle school students were provided by the MEB, Oxford and Cambridge, with content assessed based on the "R" criterion, and books inappropriate for middle school students in terms of content were included. This trained neural network achieved a 90.06% consistency rate in determining the appropriateness of the test-provided books. Considering the obtained findings, it can be concluded that the developed software has achieved the desired objective. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 407,392
1612.06099 | Multi-Kernel Construction of Polar Codes | We propose a generalized construction for binary polar codes based on mixing multiple kernels of different sizes in order to construct polar codes of block lengths that are not only powers of integers. This results in a multi-kernel polar code with very good performance while the encoding complexity remains low and the decoding follows the same general structure as for the original Arikan polar codes. The construction provides numerous practical advantages as more code lengths can be achieved without puncturing or shortening. We observe numerically that the error-rate performance of our construction outperforms state-of-the-art constructions using puncturing methods. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,775
1803.10274 | A Study of Clustering Techniques and Hierarchical Matrix Formats for
Kernel Ridge Regression | We present memory-efficient and scalable algorithms for kernel methods used in machine learning. Using hierarchical matrix approximations for the kernel matrix the memory requirements, the number of floating point operations, and the execution time are drastically reduced compared to standard dense linear algebra routines. We consider both the general $\mathcal{H}$ matrix hierarchical format as well as Hierarchically Semi-Separable (HSS) matrices. Furthermore, we investigate the impact of several preprocessing and clustering techniques on the hierarchical matrix compression. Effective clustering of the input leads to a ten-fold increase in efficiency of the compression. The algorithms are implemented using the STRUMPACK solver library. These results confirm that --- with correct tuning of the hyperparameters --- classification using kernel ridge regression with the compressed matrix does not lose prediction accuracy compared to the exact --- not compressed --- kernel matrix and that our approach can be extended to $\mathcal{O}(1M)$ datasets, for which computation with the full kernel matrix becomes prohibitively expensive. We present numerical experiments in a distributed memory environment up to 1,024 processors of the NERSC's Cori supercomputer using datasets well known to the machine learning community that range from dimension 8 up to 784. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 93,665
2002.09737 | Amortised Learning by Wake-Sleep | Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable. Thus, state-of-the-art approaches either abandon the maximum-likelihood framework entirely, or else rely on a variety of variational approximations to the posterior distribution over the latents. Here, we propose an alternative approach that we call amortised learning. Rather than computing an approximation to the posterior over latents, we use a wake-sleep Monte-Carlo strategy to learn a function that directly estimates the maximum-likelihood parameter updates. Amortised learning is possible whenever samples of latents and observations can be simulated from the generative model, treating the model as a "black box". We demonstrate its effectiveness on a wide range of complex models, including those with latents that are discrete or supported on non-Euclidean spaces. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 165,164 |
2106.01425 | GAL: Gradient Assisted Learning for Decentralized Multi-Organization
Collaborations | Collaborations among multiple organizations, such as financial institutions, medical centers, and retail markets in decentralized settings are crucial to providing improved service and performance. However, the underlying organizations may have little interest in sharing their local data, models, and objective functions. These requirements have created new challenges for multi-organization collaboration. In this work, we propose Gradient Assisted Learning (GAL), a new method for multiple organizations to assist each other in supervised learning tasks without sharing local data, models, and objective functions. In this framework, all participants collaboratively optimize the aggregate of local loss functions, and each participant autonomously builds its own model by iteratively fitting the gradients of the overarching objective function. We also provide asymptotic convergence analysis and practical case studies of GAL. Experimental studies demonstrate that GAL can achieve performance close to centralized learning when all data, models, and objective functions are fully disclosed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 238,486 |
2210.00898 | Robust $Q$-learning Algorithm for Markov Decision Processes under
Wasserstein Uncertainty | We present a novel $Q$-learning algorithm tailored to solve distributionally robust Markov decision problems where the corresponding ambiguity set of transition probabilities for the underlying Markov decision process is a Wasserstein ball around a (possibly estimated) reference measure. We prove convergence of the presented algorithm and provide several examples also using real data to illustrate both the tractability of our algorithm as well as the benefits of considering distributional robustness when solving stochastic optimal control problems, in particular when the estimated distributions turn out to be misspecified in practice. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 321,050 |
2310.05723 | Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement
Learning | Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but we show this can unnecessarily limit policy performance if the behavior policy is far from optimal. Instead, we forgo constraints and frame OtO RL as an exploration problem that aims to maximize the benefit of online data-collection. We first study the major online RL exploration methods based on intrinsic rewards and UCB in the OtO setting, showing that intrinsic rewards add training instability through reward-function modification, and UCB methods are myopic and it is unclear which learned-component's ensemble to use for action selection. We then introduce an algorithm for planning to go out-of-distribution (PTGOOD) that avoids these issues. PTGOOD uses a non-myopic planning procedure that targets exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy without altering rewards. We show empirically in several continuous control tasks that PTGOOD significantly improves agent returns during online fine-tuning and avoids the suboptimal policy convergence that many of our baselines exhibit in several environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 398,268 |
1812.00176 | A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues | Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 115,182 |
2307.00315 | Joint Downlink-Uplink Beamforming for Wireless Multi-Antenna Federated
Learning | We study joint downlink-uplink beamforming design for wireless federated learning (FL) with a multi-antenna base station. Considering analog transmission over noisy channels and uplink over-the-air aggregation, we derive the global model update expression over communication rounds. We then obtain an upper bound on the expected global loss function, capturing the downlink and uplink beamforming and receiver noise effect. We propose a low-complexity joint beamforming algorithm to minimize this upper bound, which employs alternating optimization to break down the problem into three subproblems, each solved via closed-form gradient updates. Simulation under practical wireless system setup shows that our proposed joint beamforming design solution substantially outperforms the conventional separate-link design approach and nearly attains the performance of ideal FL with error-free communication links. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 376,961
2211.01155 | Recommendation with User Active Disclosing Willingness | Recommender system has been deployed in a large amount of real-world applications, profoundly influencing people's daily life and production. Traditional recommender models mostly collect as comprehensive as possible user behaviors for accurate preference estimation. However, considering the privacy, preference shaping and other issues, the users may not want to disclose all their behaviors for training the model. In this paper, we study a novel recommendation paradigm, where the users are allowed to indicate their "willingness" on disclosing different behaviors, and the models are optimized by trading-off the recommendation quality as well as the violation of the user "willingness". More specifically, we formulate the recommendation problem as a multiplayer game, where the action is a selection vector representing whether the items are involved into the model training. For efficiently solving this game, we design a tailored algorithm based on influence function to lower the time cost for recommendation quality exploration, and also extend it with multiple anchor selection vectors. We conduct extensive experiments to demonstrate the effectiveness of our model on balancing the recommendation quality and user disclosing willingness. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 328,124
1609.02669 | Correlation between social proximity and mobility similarity | Human behaviors exhibit ubiquitous correlations in many aspects, such as individual and collective levels, temporal and spatial dimensions, content, social and geographical layers. With rich Internet data of online behaviors becoming available, it attracts academic interests to explore human mobility similarity from the perspective of social network proximity. Existing analysis shows a strong correlation between online social proximity and offline mobility similarity, namely, mobile records between friends are significantly more similar than between strangers, and those between friends with common neighbors are even more similar. We argue the importance of the number and diversity of common friends, with a counter intuitive finding that the number of common friends has no positive impact on mobility similarity while the diversity plays a key role, disagreeing with previous studies. Our analysis provides a novel view for better understanding the coupling between human online and offline behaviors, and will help model and predict human behaviors based on social proximity. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 60,769
1911.02060 | Infusing Knowledge into the Textual Entailment Task Using Graph
Convolutional Networks | Textual entailment is a fundamental task in natural language processing. Most approaches for solving the problem use only the textual content present in training data. A few approaches have shown that information from external knowledge sources like knowledge graphs (KGs) can add value, in addition to the textual content, by providing background knowledge that may be critical for a task. However, the proposed models do not fully exploit the information in the usually large and noisy KGs, and it is not clear how it can be effectively encoded to be useful for entailment. We present an approach that complements text-based entailment models with information from KGs by (1) using Personalized PageRank to generate contextual subgraphs with reduced noise and (2) encoding these subgraphs using graph convolutional networks to capture KG structure. Our technique extends the capability of text models exploiting structural and semantic information found in KGs. We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps improve prediction accuracy. This is particularly evident in the challenging BreakingNLI dataset, where we see an absolute improvement of 5-20% over multiple text-based entailment models. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 152,261
1606.02033 | User Cooperation for Enhanced Throughput Fairness in Wireless Powered
Communication Networks | This paper studies a novel user cooperation method in a wireless powered communication network (WPCN), where a pair of distributed terminal users first harvest wireless energy broadcasted by one energy node (EN) and then use the harvested energy to transmit information cooperatively to a destination node (DN). In particular, the two cooperating users exchange their independent information with each other to form a virtual antenna array and transmit jointly to the DN. By allowing each user to allocate part of its harvested energy to transmit the other's information, the proposed cooperation can effectively mitigate the user unfairness problem in WPCNs, where a user may suffer from very low data rate due to the poor energy harvesting performance and high data transmission consumption. We derive the maximum common throughput achieved by the cooperation scheme through optimizing the time allocation on wireless energy transfer, user message exchange, and joint information transmissions. Through comparing with some representative benchmark schemes, our results demonstrate the effectiveness of the proposed user cooperation in enhancing the throughput performance under different setups. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 56,899
2002.06659 | TempLe: Learning Template of Transitions for Sample Efficient Multi-task
RL | Transferring knowledge among various environments is important to efficiently learn multiple tasks online. Most existing methods directly use the previously learned models or previously learned optimal policies to learn new tasks. However, these methods may be inefficient when the underlying models or optimal policies are substantially different across tasks. In this paper, we propose Template Learning (TempLe), the first PAC-MDP method for multi-task reinforcement learning that could be applied to tasks with varying state/action space. TempLe generates transition dynamics templates, abstractions of the transition dynamics across tasks, to gain sample efficiency by extracting similarities between tasks even when their underlying models or optimal policies have limited commonalities. We present two algorithms for an "online" and a "finite-model" setting respectively. We prove that our proposed TempLe algorithms achieve much lower sample complexity than single-task learners or state-of-the-art multi-task methods. We show via systematically designed experiments that our TempLe method universally outperforms the state-of-the-art multi-task methods (PAC-MDP or not) in various settings and regimes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 164,259 |
1712.07920 | Track, then Decide: Category-Agnostic Vision-based Multi-Object Tracking | The most common paradigm for vision-based multi-object tracking is tracking-by-detection, due to the availability of reliable detectors for several important object categories such as cars and pedestrians. However, future mobile systems will need a capability to cope with rich human-made environments, in which obtaining detectors for every possible object category would be infeasible. In this paper, we propose a model-free multi-object tracking approach that uses a category-agnostic image segmentation method to track objects. We present an efficient segmentation mask-based tracker which associates pixel-precise masks reported by the segmentation. Our approach can utilize semantic information whenever it is available for classifying objects at the track level, while retaining the capability to track generic unknown objects in the absence of such information. We demonstrate experimentally that our approach achieves performance comparable to state-of-the-art tracking-by-detection methods for popular object categories such as cars and pedestrians. Additionally, we show that the proposed method can discover and robustly track a large variety of other objects. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 87,123 |
2305.05673 | Implementation and analysis of Ryze Tello drone vision-based positioning
using AprilTags | The paper describes the use of the Ryze Tello drone to move autonomously using a basic vision system. The drone's position is determined by identifying AprilTags' position relative to the drone's built-in camera. The accuracy of the drone's position readings and distance calculations was tested under controlled conditions, and errors were analysed. The study showed a decrease in absolute error with decreasing drone distance from the marker, little change in the relative error for large distances, and a sharp decrease in the relative error for small distances. The method is satisfactory for determining the drone's position relative to a marker. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 363,246
1707.09800 | A Principled Approximation Framework for Optimal Control of Semi-Markov
Jump Linear Systems | We consider continuous-time, finite-horizon, optimal quadratic control of semi-Markov jump linear systems (S-MJLS), and develop principled approximations through Markov-like representations for the holding-time distributions. We adopt a phase-type approximation for holding times, which is known to be consistent, and translates a S-MJLS into a specific MJLS with partially observable modes (MJLSPOM), where the modes in a cluster have the same dynamic, the same cost weighting matrices and the same control policy. For a general MJLSPOM, we give necessary and sufficient conditions for optimal (switched) linear controllers. When specialized to our particular MJLSPOM, we additionally establish the existence of optimal linear controller, as well as its optimality within the class of general controllers satisfying standard smoothness conditions. The known equivalence between phase-type distributions and positive linear systems allows to leverage existing modeling tools, but possibly with large computational costs. Motivated by this, we propose matrix exponential approximation of holding times, resulting in pseudo-MJLSPOM representation, i.e., where the transition rates could be negative. Such a representation is of relatively low order, and maintains the same optimality conditions as for the MJLSPOM representation, but could violate non-negativity of holding-time density functions. A two-step procedure consisting of a local pulling-up modification and a filtering technique is constructed to enforce non-negativity. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 78,078 |
2310.05632 | Binary Classification with Confidence Difference | Recently, learning with soft labels has been shown to achieve better performance than learning with hard labels in terms of model generalization, calibration, and robustness. However, collecting pointwise labeling confidence for all training examples can be challenging and time-consuming in real-world scenarios. This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification. Instead of pointwise labeling confidence, we are given only unlabeled data pairs with confidence difference that specifies the difference in the probabilities of being positive. We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate. We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven. Extensive experiments on benchmark data sets and a real-world recommender system data set validate the effectiveness of our proposed approaches in exploiting the supervision information of the confidence difference. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 398,226 |
1602.04930 | Generalized minimum dominating set and application in automatic text
summarization | For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method to this automatic text summarization problem. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 52,198 |
2201.12738 | AutoSNN: Towards Energy-Efficient Spiking Neural Networks | Spiking neural networks (SNNs) that mimic information transmission in the brain can energy-efficiently process spatio-temporal information through discrete and sparse spikes, thereby receiving considerable attention. To improve accuracy and energy efficiency of SNNs, most previous studies have focused solely on training methods, and the effect of architecture has rarely been studied. We investigate the design choices used in the previous studies in terms of the accuracy and number of spikes and figure out that they are not best-suited for SNNs. To further improve the accuracy and reduce the spikes generated by SNNs, we propose a spike-aware neural architecture search framework called AutoSNN. We define a search space consisting of architectures without undesirable design choices. To enable the spike-aware architecture search, we introduce a fitness that considers both the accuracy and number of spikes. AutoSNN successfully searches for SNN architectures that outperform hand-crafted SNNs in accuracy and energy efficiency. We thoroughly demonstrate the effectiveness of AutoSNN on various datasets including neuromorphic datasets. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 277,764 |
2406.14781 | Optimal estimation in spatially distributed systems: how far to share
measurements from? | We consider the centralized optimal estimation problem in spatially distributed systems. We use the setting of spatially invariant systems as an idealization for which concrete and detailed results are given. Such estimators are known to have a degree of spatial localization in the sense that the estimator gains decay in space, with the spatial decay rates serving as a proxy for how far measurements need to be shared in an optimal distributed estimator. In particular, we examine the dependence of spatial decay rates on problem specifications such as system dynamics, measurement and process noise variances, as well as their spatial autocorrelations. We propose non-dimensional parameters that characterize the decay rates as a function of problem specifications. In particular, we find an interesting matching condition between the characteristic lengthscale of the dynamics and the measurement noise correlation lengthscale for which the optimal centralized estimator is completely decentralized. A new technique - termed the Branch Point Locus - is introduced to quantify spatial decay rates in terms of analyticity regions in the complex spatial frequency plane. Our results are illustrated through two case studies of systems with dynamics modeled by diffusion and the Swift-Hohenberg equation, respectively. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 466,464 |
2405.04118 | Policy Learning with a Language Bottleneck | Modern AI systems such as self-driving cars and game-playing agents achieve superhuman performance, but often lack human-like features such as generalization, interpretability and human inter-operability. Inspired by the rich interactions between language and decision-making in humans, we introduce Policy Learning with a Language Bottleneck (PLLB), a framework enabling AI agents to generate linguistic rules that capture the strategies underlying their most rewarding behaviors. PLLB alternates between a rule generation step guided by language models, and an update step where agents learn new policies guided by rules. In a two-player communication game, a maze solving task, and two image reconstruction tasks, we show that PLLB agents are not only able to learn more interpretable and generalizable behaviors, but can also share the learned rules with human users, enabling more effective human-AI coordination. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 452,451 |
1802.07591 | Least Square Error Method Robustness of Computation: What is not usually
considered and taught | There are many practical applications based on the Least Square Error (LSE) approximation. It is based on square-error minimization along a 'vertical' axis. The LSE method is simple and also convenient for analytical purposes. However, if the data span several orders of magnitude or non-linear LSE is used, severe numerical instability can be expected. The presented contribution describes a simple method for LSE computation over a large span of data. It is especially convenient when a large span of data is to be processed and the 'standard' pseudoinverse matrix is ill-conditioned. It is based on an LSE solution using orthogonal basis vectors instead of orthonormal basis vectors. The presented approach has been used for linear regression as well as for approximation using radial basis functions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 90,927 |
2301.12316 | Wind Tunnel Testing and Aerodynamic Characterization of a QuadPlane
Uncrewed Aircraft System | Electric Vertical Takeoff and Landing (eVTOL) vehicles will open new opportunities in aviation. This paper describes the design and wind tunnel analysis of an eVTOL uncrewed aircraft system (UAS) prototype with a traditional aircraft wing, tail, and puller motor along with four vertical thrust pusher motors. Vehicle design and construction are summarized. Dynamic thrust from propulsion modules is experimentally determined at different airspeeds over a large sweep of propeller angles of attack. Wind tunnel tests with the vehicle prototype cover a suite of hover, transition and cruise flight conditions. Net aerodynamic forces and moments are distinctly computed and compared for plane, quadrotor and hybrid flight modes. Coefficient-based models are developed. Polynomial curve fits accurately capture observed data over all test configurations. To our knowledge, the presented wind tunnel experimental analysis for a multi-mode eVTOL platform is novel. Increased drag and reduced dynamic thrust likely due to flow interactions will be important to address in future designs. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 342,485 |
cs/9505104 | Pac-Learning Recursive Logic Programs: Efficient Algorithms | We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional ``basecase'' oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,309 |
2003.03998 | Improving noise robust automatic speech recognition with single-channel
time-domain enhancement network | With the advent of deep learning, research on noise-robust automatic speech recognition (ASR) has progressed rapidly. However, ASR performance in noisy conditions of single-channel systems remains unsatisfactory. Indeed, most single-channel speech enhancement (SE) methods (denoising) have brought only limited performance gains over state-of-the-art ASR back-end trained on multi-condition training data. Recently, there has been much research on neural network-based SE methods working in the time-domain showing levels of performance never attained before. However, it has not been established whether the high enhancement performance achieved by such time-domain approaches could be translated into ASR. In this paper, we show that a single-channel time-domain denoising approach can significantly improve ASR performance, providing more than 30 % relative word error reduction over a strong ASR back-end on the real evaluation data of the single-channel track of the CHiME-4 dataset. These positive results demonstrate that single-channel noise reduction can still improve ASR performance, which should open the door to more research in that direction. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,430 |
2308.02597 | Designing a Deep Learning-Driven Resource-Efficient Diagnostic System
for Metastatic Breast Cancer: Reducing Long Delays of Clinical Diagnosis and
Improving Patient Survival in Developing Countries | Breast cancer is one of the leading causes of cancer mortality. Breast cancer patients in developing countries, especially sub-Saharan Africa, South Asia, and South America, suffer from the highest mortality rate in the world. One crucial factor contributing to the global disparity in mortality rate is the long delay of diagnosis due to a severe shortage of trained pathologists, which consequently has led to a large proportion of late-stage presentation at diagnosis. The delay between the initial development of symptoms and the receipt of a diagnosis could stretch upwards of 15 months. To tackle this critical healthcare disparity, this research has developed a deep learning-based diagnosis system for metastatic breast cancer that can achieve high diagnostic accuracy as well as computational efficiency. Based on our evaluation, the MobileNetV2-based diagnostic model outperformed the more complex VGG16, ResNet50 and ResNet101 models in diagnostic accuracy, model generalization, and model training efficiency. The visual comparisons between the model prediction and ground truth have demonstrated that the MobileNetV2 diagnostic models can identify very small cancerous nodes embedded in a large area of normal cells, which is challenging for manual image analysis. Equally important, the lightweight MobileNetV2 models were computationally efficient and ready for mobile devices or devices of low computational power. These advances empower the development of a resource-efficient and high-performing AI-based metastatic breast cancer diagnostic system that can adapt to under-resourced healthcare facilities in developing countries. This research provides an innovative technological solution to address the long delays in metastatic breast cancer diagnosis and the consequent disparity in patient survival outcome in developing countries. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 383,704 |
1306.2295 | Markov random fields factorization with context-specific independences | Markov random fields provide a compact representation of joint probability distributions by representing its independence properties in an undirected graph. The well-known Hammersley-Clifford theorem uses these conditional independences to factorize a Gibbs distribution into a set of factors. However, an important issue of using a graph to represent independences is that it cannot encode some types of independence relations, such as the context-specific independences (CSIs). They are a particular case of conditional independences that is true only for a certain assignment of its conditioning set; in contrast to conditional independences that must hold for all its assignments. This work presents a method for factorizing a Markov random field according to CSIs present in a distribution, and formally guarantees that this factorization is correct. This is presented in our main contribution, the context-specific Hammersley-Clifford theorem, a generalization to CSIs of the Hammersley-Clifford theorem that applies for conditional independences. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 25,114 |
2502.09644 | From Argumentation to Deliberation: Perspectivized Stance Vectors for
Fine-grained (Dis)agreement Analysis | Debating over conflicting issues is a necessary first step towards resolving conflicts. However, intrinsic perspectives of an arguer are difficult to overcome by persuasive argumentation skills. Proceeding from a debate to a deliberative process, where we can identify actionable options for resolving a conflict requires a deeper analysis of arguments and the perspectives they are grounded in - as it is only from there that one can derive mutually agreeable resolution steps. In this work we develop a framework for a deliberative analysis of arguments in a computational argumentation setup. We conduct a fine-grained analysis of perspectivized stances expressed in the arguments of different arguers or stakeholders on a given issue, aiming not only to identify their opposing views, but also shared perspectives arising from their attitudes, values or needs. We formalize this analysis in Perspectivized Stance Vectors that characterize the individual perspectivized stances of all arguers on a given issue. We construct these vectors by determining issue- and argument-specific concepts, and predict an arguer's stance relative to each of them. The vectors allow us to measure a modulated (dis)agreement between arguers, structured by perspectives, which allows us to identify actionable points for conflict resolution, as a first step towards deliberation. | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | 533,530 |