Dataset schema (per-row fields):
- id: string (9–16 chars)
- title: string (4–278 chars)
- abstract: string (3–4.08k chars)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0–541k)
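The schema above describes a multi-label classification layout: one boolean column per arXiv category. A minimal sketch of how such a record can be represented and its active labels extracted (plain Python; the field names follow the schema, but `active_labels` is an illustrative helper, not part of any dataset API, and the example row mirrors the first record below):

```python
# Sketch of one record from this multi-label arXiv dataset.
# Field names follow the schema above; active_labels is an
# illustrative helper, not part of any dataset API.

CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(record):
    """Return the category names whose boolean flag is True."""
    return [c for c in CATEGORY_COLUMNS if record.get(c)]

# Example row mirroring the first record below (cs.IT only).
row = {
    "id": "1709.06239",
    "title": "Inter-Operator Base Station Coordination ...",
    "abstract": "We characterize the rate coverage ...",
    **{c: False for c in CATEGORY_COLUMNS},
    "cs.IT": True,
    "__index_level_0__": 81062,
}

print(active_labels(row))  # -> ['cs.IT']
```

Each row below follows this layout: id, title, abstract, one boolean per category column, and the index.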
1709.06239
Inter-Operator Base Station Coordination in Spectrum-Shared Millimeter Wave Cellular Networks
We characterize the rate coverage distribution for a spectrum-shared millimeter wave downlink cellular network. Each of multiple cellular operators owns separate mmWave bandwidth, but shares the spectrum amongst each other while using dynamic inter-operator base station (BS) coordination to suppress the resulting cross-operator interference. We model the BS locations of each operator as mutually independent Poisson point processes, and derive the probability density function (PDF) of the K-th strongest link power, incorporating both line-of-sight and non line-of-sight states. Leveraging the obtained PDF, we derive the rate coverage expression as a function of system parameters such as the BS density, transmit power, bandwidth, and coordination set size. We verify the analysis with extensive simulation results. A major finding is that inter-operator BS coordination is useful in spectrum sharing (i) with dense and high power operators and (ii) with fairly wide beams, e.g., 30° or higher.
labels: cs.IT
__index_level_0__: 81,062
1912.05062
Blockchain for 5G and Beyond Networks: A State of the Art Survey
The fifth generation (5G) wireless networks are on the way to being deployed around the world. The 5G technologies aim to support diverse vertical applications by connecting heterogeneous devices and machines with drastic improvements in terms of high quality of service, increased network capacity and enhanced system throughput. Despite all these advantages that 5G will bring about, there are still major challenges to be addressed, including decentralization, transparency, risks of data interoperability, network privacy and security vulnerabilities. Blockchain can offer innovative solutions to effectively solve the challenges in 5G networks. Driven by the dramatically increased capacities of the 5G networks and the recent breakthroughs in the blockchain technology, blockchain-based 5G services are expected to witness a rapid development and bring substantial benefits to future society. In this paper, we provide a state-of-the-art survey on the integration of blockchain with 5G networks and beyond. Our key focus is on the discussions on the potential of blockchain for enabling key 5G technologies, including cloud/edge computing, Software Defined Networks, Network Function Virtualization, Network Slicing, and D2D communications. We then explore the opportunities of blockchain to important 5G services, ranging from spectrum management, network virtualization, resource management to interference management, federated learning, privacy and security provision. The recent advances in the applications of blockchain in 5G Internet of Things are also surveyed in various domains, i.e. smart healthcare, smart city, smart transportation, smart grid and UAVs. The main findings derived from the survey are then summarized, and possible research challenges with open issues are also identified. Lastly, we complete this survey by shedding new light on future directions of research on this newly emerging area.
labels: cs.IT, Other
__index_level_0__: 156,998
1903.04613
Learning Edge Properties in Graphs from Path Aggregations
Graph edges, along with their labels, can represent information of fundamental importance, such as links between web pages, friendship between users, the rating given by users to other users or items, and much more. We introduce LEAP, a trainable, general framework for predicting the presence and properties of edges on the basis of the local structure, topology, and labels of the graph. The LEAP framework is based on the exploration and machine-learning aggregation of the paths connecting nodes in a graph. We provide several methods for performing the aggregation phase by training path aggregators, and we demonstrate the flexibility and generality of the framework by applying it to the prediction of links and user ratings in social networks. We validate the LEAP framework on two problems: link prediction, and user rating prediction. On eight large datasets, including the arXiv collaboration network, the yeast protein-protein interaction network, and the US airline routes network, we show that the link prediction performance of LEAP is at least as good as the current state of the art methods, such as SEAL and WLNM. Next, we consider the problem of predicting user ratings on other users: this problem is known as the edge-weight prediction problem in weighted signed networks (WSN). On Bitcoin networks, and Wikipedia RfA, we show that LEAP performs consistently better than the Fairness & Goodness based regression models, varying the fraction of training edges from 10% to 90%. These examples demonstrate that LEAP, in spite of its generality, can match or best the performance of approaches that have been especially crafted to solve very specific edge prediction problems.
labels: cs.SI, cs.LG
__index_level_0__: 124,006
2411.15043
OVO-SLAM: Open-Vocabulary Online Simultaneous Localization and Mapping
This paper presents the first Open-Vocabulary Online 3D semantic SLAM pipeline, which we denote as OVO-SLAM. Our primary contribution is in the pipeline itself, particularly in the mapping thread. Given a set of posed RGB-D frames, we detect and track 3D segments, which we describe using CLIP vectors, calculated through a novel aggregation from the viewpoints where these 3D segments are observed. Notably, our OVO-SLAM pipeline is not only faster but also achieves better segmentation metrics compared to offline approaches in the literature. Along with superior segmentation performance, we show experimental results of our contributions integrated with Gaussian-SLAM, which are the first to demonstrate end-to-end open-vocabulary online 3D reconstructions without relying on ground-truth camera poses or scene geometry.
labels: cs.RO, cs.CV
__index_level_0__: 510,418
1907.11000
Personalised novel and explainable matrix factorisation
Recommendation systems personalise suggestions to individuals to help them in their decision making and exploration tasks. In the ideal case, these recommendations, besides being accurate, should also be novel and explainable. However, most platforms so far fail to provide both: novel recommendations that advance users' exploration, and explanations that make the system's reasoning more transparent to users. For instance, a well-known recommendation algorithm such as matrix factorisation (MF) optimises only the accuracy criterion, while disregarding other quality criteria, such as the explainability or novelty of recommended items. In this paper, to the best of our knowledge, we propose a new model, denoted as NEMF, that allows trading off MF performance against the criteria of novelty and explainability, while only minimally compromising on accuracy. In addition, we propose a new explainability metric based on nDCG, which distinguishes a more explainable item from a less explainable item. An initial user study indicates how users perceive the different attributes of these "user" style explanations, and our extensive experimental results demonstrate that we attain high accuracy while also recommending novel and explainable items.
labels: cs.HC, cs.IR, cs.LG
__index_level_0__: 139,760
1009.0117
Exploring Language-Independent Emotional Acoustic Features via Feature Selection
We propose a novel feature selection strategy to discover language-independent acoustic features that tend to be responsible for emotions regardless of languages, linguistics and other factors. Experimental results suggest that the language-independent feature subset discovered yields the performance comparable to the full feature set on various emotional speech corpora.
labels: cs.LG
__index_level_0__: 7,439
2405.14014
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving by capturing detailed scene descriptions and demonstrating strong generalizability across various object categories and shapes. Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler-bin descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies.
labels: cs.AI, cs.LG, cs.RO, cs.CV
__index_level_0__: 456,200
2411.06612
An exact active sensing strategy for a class of bio-inspired systems
We consider a general class of translation-invariant systems with a specific category of output nonlinearities motivated by biological sensing. We show that no dynamic output feedback can stabilize this class of systems to an isolated equilibrium point. To overcome this fundamental limitation, we propose a simple control scheme that includes a low-amplitude periodic forcing function akin to so-called "active sensing" in biology, together with nonlinear output feedback. Our analysis shows that this approach leads to the emergence of an exponentially stable limit cycle. These findings offer a provably stable active sensing strategy and may thus help to rationalize the active sensing movements made by animals as they perform certain motor behaviors.
labels: cs.SY
__index_level_0__: 507,187
2403.09712
A Knowledge-Injected Curriculum Pretraining Framework for Question Answering
Knowledge-based question answering (KBQA) is a key task in NLP research, and also an approach to accessing web data and knowledge, which requires exploiting knowledge graphs (KGs) for reasoning. In the literature, one promising solution for KBQA is to incorporate the pretrained language model (LM) with KGs by generating a KG-centered pretraining corpus, which has shown its superiority. However, these methods often depend on specific techniques and resources to work, which may not always be available, restricting their application. Moreover, existing methods focus more on improving language understanding with KGs, while neglecting the more important human-like complex reasoning. To this end, in this paper, we propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for KBQA tasks, which is composed of knowledge injection (KI), knowledge adaptation (KA) and curriculum reasoning (CR). Specifically, the KI module first injects knowledge into the LM by generating a KG-centered pretraining corpus, and generalizes the process into three key steps that can work with different implementations for flexible application. Next, the KA module learns knowledge from the generated corpus with the LM equipped with an adapter, while keeping its original natural language understanding ability to reduce the negative impact of the difference between the generated and natural corpus. Last, to equip the LM with complex reasoning, the CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner. We provide an implementation of the general framework, and evaluate the proposed KICP on four real-world datasets. The results demonstrate that our framework achieves higher performance.
labels: cs.AI, cs.CL
__index_level_0__: 437,878
1708.01967
Fake News Detection on Social Media: A Data Mining Perspective
Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
labels: cs.SI, cs.AI
__index_level_0__: 78,495
2012.05633
Can we detect harmony in artistic compositions? A machine learning approach
Harmony in visual compositions is a concept that cannot be defined or easily expressed mathematically, even by humans. The goal of the research described in this paper was to find a numerical representation of artistic compositions with different levels of harmony. We asked humans to rate a collection of grayscale images based on the harmony they convey. To represent the images, a set of special features was designed and extracted. By doing so, it became possible to assign objective measures to subjectively judged compositions. Given the ratings and the extracted features, we utilized machine learning algorithms to evaluate the efficiency of such representations in a harmony classification problem. The best performing model (SVM) achieved 80% accuracy in distinguishing between harmonic and disharmonic images, which reinforces the assumption that the concept of harmony can be expressed in a mathematical way that agrees with human assessment.
labels: cs.LG, cs.CV
__index_level_0__: 210,844
1504.05349
List and Probabilistic Unique Decoding of Folded Subspace Codes
A new class of folded subspace codes for noncoherent network coding is presented. The codes can correct insertions and deletions beyond the unique decoding radius for any code rate $R\in[0,1]$. An efficient interpolation-based decoding algorithm for this code construction is given which can correct insertions and deletions up to the normalized radius $s(1-((1/h+h)/(h-s+1))R)$, where $h$ is the folding parameter and $s\leq h$ is a decoding parameter. The algorithm serves as a list decoder or as a probabilistic unique decoder that outputs a unique solution with high probability. An upper bound on the average list size of (folded) subspace codes and on the decoding failure probability is derived. A major benefit of the decoding scheme is that it enables probabilistic unique decoding up to the list decoding radius.
labels: cs.IT
__index_level_0__: 42,259
2206.06619
TransVG++: End-to-End Visual Grounding with Language Conditioned Vision Transformer
In this work, we explore neat yet effective Transformer-based frameworks for visual grounding. The previous methods generally address the core problem of visual grounding, i.e., multi-modal fusion and reasoning, with manually-designed mechanisms. Such heuristic designs are not only complicated but also make models easily overfit specific data distributions. To avoid this, we first propose TransVG, which establishes multi-modal correspondences by Transformers and localizes referred regions by directly regressing box coordinates. We empirically show that complicated fusion modules can be replaced by a simple stack of Transformer encoder layers with higher performance. However, the core fusion Transformer in TransVG is separate from the uni-modal encoders, and thus must be trained from scratch on limited visual grounding data, which makes it hard to optimize and leads to sub-optimal performance. To this end, we further introduce TransVG++ to make two-fold improvements. For one thing, we upgrade our framework to a purely Transformer-based one by leveraging Vision Transformer (ViT) for vision feature encoding. For another, we devise Language Conditioned Vision Transformer that removes external fusion modules and reuses the uni-modal ViT for vision-language fusion at the intermediate layers. We conduct extensive experiments on five prevalent datasets, and report a series of state-of-the-art records.
labels: cs.CV
__index_level_0__: 302,438
2410.12193
Trajectory Manifold Optimization for Fast and Adaptive Kinodynamic Motion Planning
Fast kinodynamic motion planning is crucial for systems to effectively adapt to dynamically changing environments. Despite some efforts, existing approaches still struggle with rapid planning in high-dimensional, complex problems. Not surprisingly, the primary challenge arises from the high-dimensionality of the search space, specifically the trajectory space. We address this issue with a two-step method: initially, we identify a lower-dimensional trajectory manifold offline, comprising diverse trajectories specifically relevant to the task at hand while meeting kinodynamic constraints. Subsequently, we search for solutions within this manifold online, significantly enhancing the planning speed. To encode and generate a manifold of continuous-time, differentiable trajectories, we propose a novel neural network model, Differentiable Motion Manifold Primitives (DMMP), along with a practical training strategy. Experiments with a 7-DoF robot arm tasked with dynamic throwing to arbitrary target positions demonstrate that our method surpasses existing approaches in planning speed, task success, and constraint satisfaction.
labels: cs.AI, cs.RO
__index_level_0__: 498,898
2006.16395
On Bellman's Optimality Principle for zs-POSGs
Many non-trivial sequential decision-making problems are efficiently solved by relying on Bellman's optimality principle, i.e., exploiting the fact that sub-problems are nested recursively within the original problem. Here we show how it can apply to (infinite horizon) 2-player zero-sum partially observable stochastic games (zs-POSGs) by (i) taking a central planner's viewpoint, which can only reason on a sufficient statistic called occupancy state, and (ii) turning such problems into zero-sum occupancy Markov games (zs-OMGs). Then, exploiting the Lipschitz-continuity of the value function in occupancy space, one can derive a version of the HSVI algorithm (Heuristic Search Value Iteration) that provably finds an $\epsilon$-Nash equilibrium in finite time.
labels: cs.AI, Other
__index_level_0__: 184,790
2110.00701
Graph Compression with Application to Model Selection
Many multivariate data such as social and biological data exhibit complex dependencies that are best characterized by graphs. Unlike sequential data, graphs are, in general, unordered structures. This means we can no longer use classic, sequential-based compression methods on these graph-based data. Therefore, it is necessary to develop new methods for graph compression. In this paper, we present universal source coding methods for the lossless compression of unweighted, undirected, unlabelled graphs. We encode in two steps: 1) transforming the graph into a rooted binary tree, and 2) encoding the rooted binary tree using graph statistics. Our coders showed better compression performance than other source coding methods on both synthetic and real-world graphs. We then applied our graph coding methods to model selection of Gaussian graphical models using the minimum description length (MDL) principle, finding the description length of the conditional independence graph. Experiments on synthetic data show that our approach gives better performance compared to common model selection methods. We also applied our approach to electrocardiogram (ECG) data in order to explore the differences between graph models of two groups of subjects.
labels: cs.IT
__index_level_0__: 258,497
2009.10054
Regularizing Attention Networks for Anomaly Detection in Visual Question Answering
For stability and reliability of real-world applications, the robustness of DNNs in unimodal tasks has been evaluated. However, few studies consider abnormal situations that a visual question answering (VQA) model might encounter at test time after deployment in the real world. In this study, we evaluate the robustness of state-of-the-art VQA models to five different anomalies, including worst-case scenarios, the most frequent scenarios, and the current limitation of VQA models. Different from the results in unimodal tasks, the maximum confidence of answers in VQA models cannot detect anomalous inputs, and post-training of the outputs, such as outlier exposure, is ineffective for VQA models. Thus, we propose an attention-based method, which uses confidence of reasoning between input images and questions and shows much more promising results than the previous methods in unimodal tasks. In addition, we show that a maximum entropy regularization of attention networks can significantly improve the attention-based anomaly detection of the VQA models. Thanks to their simplicity, the attention-based anomaly detection and the regularization are model-agnostic methods, which can be used for various cross-modal attentions in the state-of-the-art VQA models. The results imply that cross-modal attention in VQA is important to improve not only VQA accuracy, but also the robustness to various anomalies.
labels: cs.LG, cs.CV
__index_level_0__: 196,786
2411.19213
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
Inspired by the Many-Worlds Interpretation (MWI), this work introduces a novel neural network architecture that splits the same input signal into parallel branches at each layer, utilizing a Hyper Rectified Activation, referred to as ANDHRA. The branched layers do not merge and form separate network paths, leading to multiple network heads for output prediction. For a network with a branching factor of 2 at three levels, the total number of heads is 2^3 = 8. The individual heads are jointly trained by combining their respective loss values. However, the proposed architecture requires additional parameters and memory during training due to the additional branches. During inference, the experimental results on CIFAR-10/100 demonstrate that there exists one individual head that outperforms the baseline accuracy, achieving statistically significant improvement with equal parameters and computational cost.
labels: cs.CV
__index_level_0__: 512,158
2411.19814
Gaussian multi-target filtering with target dynamics driven by a stochastic differential equation
This paper proposes multi-target filtering algorithms in which target dynamics are given in continuous time and measurements are obtained at discrete time instants. In particular, targets appear according to a Poisson point process (PPP) in time with a given Gaussian spatial distribution, targets move according to a general time-invariant linear stochastic differential equation, and the life span of each target is modelled with an exponential distribution. For this multi-target dynamic model, we derive the distribution of the set of new born targets and calculate closed-form expressions for the best fitting mean and covariance of each target at its time of birth by minimising the Kullback-Leibler divergence via moment matching. This yields a novel Gaussian continuous-discrete Poisson multi-Bernoulli mixture (PMBM) filter, and its approximations based on Poisson multi-Bernoulli and probability hypothesis density filtering. These continuous-discrete multi-target filters are also extended to target dynamics driven by nonlinear stochastic differential equations.
labels: cs.CV
__index_level_0__: 512,395
2102.10998
Towards a Machine Learning-driven Trust Evaluation Model for Social Internet of Things: A Time-aware Approach
The emerging paradigm of the Social Internet of Things (SIoT) has transformed the traditional notion of the Internet of Things (IoT) into a social network of billions of interconnected smart objects by integrating social networking facets into it. In SIoT, objects can establish social relationships in an autonomous manner and interact with the other objects in the network based on their social behaviour. A fundamental problem that needs attention is the establishment of these relationships in a reliable and trusted way, i.e., establishing trustworthy relationships and building trust amongst objects. In addition, it is also indispensable to ascertain and predict an object's behaviour in the SIoT network over a period of time. Accordingly, in this paper, we have proposed an efficient time-aware machine learning-driven trust evaluation model to address this particular issue. The envisaged model deliberates social relationships in terms of friendship and community-interest, and further takes into consideration the working relationships and cooperativeness (object-object interactions) as trust parameters to quantify the trustworthiness of an object. Subsequently, in contrast to the traditional weighted sum heuristics, a machine learning-driven aggregation scheme is delineated to synthesize these trust parameters into a single trust score. The experimental results demonstrate that the proposed model can efficiently segregate trustworthy and untrustworthy objects within a network, and further provides insight into how the trust of an object varies over time, depicting the effect of each trust parameter on the trust score.
labels: cs.SI, cs.CR
__index_level_0__: 221,287
2303.14765
A Survey on Dual-Quaternions
Over the past few years, the applications of dual-quaternions have not only developed in many different directions but have also evolved in exciting ways in several areas, as dual-quaternions offer an efficient and compact symbolic form with unique mathematical properties. While dual-quaternions are now commonplace in many areas of research and implementation, such as robotics and engineering through to computer graphics and animation, there are still a large number of avenues for exploration with huge potential benefits. This article is the first to provide a comprehensive review of the dual-quaternion landscape. In this survey, we present a review of dual-quaternion techniques and applications developed over the years, while providing insights into current and future directions. The article starts with the definition of dual-quaternions and their mathematical formulation, explaining key aspects of importance (e.g., compression and ambiguities). The literature review in this article is divided into categories to help manage and visualize the application of dual-quaternions for solving specific problems. A timeline illustrating key methods is presented, explaining how dual-quaternion approaches have progressed over the years. The most popular dual-quaternion methods are discussed with regard to their impact in the literature, performance, computational cost and their real-world results (compared to associated models). Finally, we indicate the limitations of dual-quaternion methodologies and propose future research directions.
labels: cs.RO
__index_level_0__: 354,240
2208.04559
Analyzing and Enhancing Closed-loop Stability in Reactive Simulation
Simulation has played an important role in efficiently evaluating self-driving vehicles in terms of scalability. Existing methods mostly rely on heuristic-based simulation, where traffic participants follow certain human-encoded rules that fail to generate complex human behaviors. Therefore, the reactive simulation concept is proposed to bridge the human behavior gap between simulation and real-world traffic scenarios by leveraging real-world data. However, these reactive models can easily generate unreasonable behaviors after a few steps of simulation, where we regard the model as losing its stability. To the best of our knowledge, no work has explicitly discussed and analyzed the stability of the reactive simulation framework. In this paper, we aim to provide a thorough stability analysis of the reactive simulation and propose a solution to enhance the stability. Specifically, we first propose a new reactive simulation framework, where we discover that the smoothness and consistency of the simulated state sequences are crucial factors to stability. We then incorporate the kinematic vehicle model into the framework to improve the closed-loop stability of the reactive simulation. Furthermore, along with commonly-used metrics, several novel metrics are proposed in this paper to better analyze the simulation performance.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
312,153
1904.01114
Deep Industrial Espionage
The theory of deep learning is now considered largely solved, and is well understood by researchers and influencers alike. To maintain our relevance, we therefore seek to apply our skills to under-explored, lucrative applications of this technology. To this end, we propose Deep Industrial Espionage, an efficient end-to-end framework for industrial information propagation and productisation. Specifically, given a single image of a product or service, we aim to reverse-engineer, rebrand and distribute a copycat of the product at a profitable price-point to consumers in an emerging market---all within a single forward pass of a Neural Network. Differently from prior work in machine perception which has been restricted to classifying, detecting and reasoning about object instances, our method offers tangible business value in a wide range of corporate settings. Our approach draws heavily on a promising recent arxiv paper until its original authors' names can no longer be read (we use felt tip pen). We then rephrase the anonymised paper, add the word "novel" to the title, and submit it to a prestigious, closed-access espionage journal who assure us that someday, we will be entitled to some fraction of their extortionate readership fees.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
126,053
2312.17431
MVPatch: More Vivid Patch for Adversarial Camouflaged Attacks on Object Detectors in the Physical World
Recent studies have shown that Adversarial Patches (APs) can effectively manipulate object detection models. However, the conspicuous patterns often associated with these patches tend to attract human attention, posing a significant challenge. Existing research has primarily focused on enhancing attack efficacy in the physical domain while often neglecting the optimization of stealthiness and transferability. Furthermore, applying APs in real-world scenarios faces major challenges related to transferability, stealthiness, and practicality. To address these challenges, we introduce generalization theory into the context of APs, enabling our iterative process to simultaneously enhance transferability and refine visual correlation with realistic images. We propose a Dual-Perception-Based Framework (DPBF) to generate the More Vivid Patch (MVPatch), which enhances transferability, stealthiness, and practicality. The DPBF integrates two key components: the Model-Perception-Based Module (MPBM) and the Human-Perception-Based Module (HPBM), along with regularization terms. The MPBM employs ensemble strategy to reduce object confidence scores across multiple detectors, thereby improving AP transferability with robust theoretical support. Concurrently, the HPBM introduces a lightweight method for achieving visual similarity, creating natural and inconspicuous adversarial patches without relying on additional generative models. The regularization terms further enhance the practicality of the generated APs in the physical domain. Additionally, we introduce naturalness and transferability scores to provide an unbiased assessment of APs. Extensive experimental validation demonstrates that MVPatch achieves superior transferability and a natural appearance in both digital and physical domains, underscoring its effectiveness and stealthiness.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
418,745
1307.7897
Energy Distribution of EEG Signals: EEG Signal Wavelet-Neural Network Classifier
In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy and patients with epileptic syndrome during the seizure). First, the Discrete Wavelet Transform (DWT) with the Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal at resolution levels of the components of the EEG signal (delta, theta, alpha, beta and gamma), and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at different resolution levels. Second, the neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using in total 300 EEG signals. The results showed that the proposed classifier has the ability of recognizing and classifying EEG signals efficiently.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
26,151
2311.05941
Out-of-Distribution-Aware Electric Vehicle Charging
We tackle the challenge of learning to charge Electric Vehicles (EVs) with Out-of-Distribution (OOD) data. Traditional scheduling algorithms typically fail to balance near-optimal average performance with worst-case guarantees, particularly with OOD data. Model Predictive Control (MPC) is often too conservative and data-independent, whereas Reinforcement Learning (RL) tends to be overly aggressive and fully trusts the data, hindering their ability to consistently achieve the best-of-both-worlds. To bridge this gap, we introduce a novel OOD-aware scheduling algorithm, denoted OOD-Charging. This algorithm employs a dynamic "awareness radius", which updates in real-time based on the Temporal Difference (TD)-error that reflects the severity of OOD. The OOD-Charging algorithm allows for a more effective balance between consistency and robustness in EV charging schedules, thereby significantly enhancing adaptability and efficiency in real-world charging environments. Our results demonstrate that this approach improves the scheduling reward reliably under real OOD scenarios with remarkable shifts of EV charging behaviors caused by COVID-19 in the Caltech ACN-Data.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
406,769
2302.03921
Predictable MDP Abstraction for Unsupervised Model-Based RL
A key component of model-based reinforcement learning (RL) is a dynamics model that predicts the outcomes of actions. Errors in this predictive model can degrade the performance of model-based controllers, and complex Markov decision processes (MDPs) can present exceptionally difficult prediction problems. To mitigate this issue, we propose predictable MDP abstraction (PMA): instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space that only permits predictable, easy-to-model actions, while covering the original state-action space as much as possible. As a result, model learning becomes easier and more accurate, which allows robust, stable model-based planning or model-based RL. This transformation is learned in an unsupervised manner, before any task is specified by the user. Downstream tasks can then be solved with model-based control in a zero-shot fashion, without additional environment interactions. We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches in a range of benchmark environments. Our code and videos are available at https://seohong.me/projects/pma/
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
344,521
2404.03070
Behind the Veil: Enhanced Indoor 3D Scene Reconstruction with Occluded Surfaces Completion
In this paper, we present a novel indoor 3D reconstruction method with occluded surface completion, given a sequence of depth readings. Prior state-of-the-art (SOTA) methods only focus on the reconstruction of the visible areas in a scene, neglecting the invisible areas due to the occlusions, e.g., the contact surface between furniture, occluded wall and floor. Our method tackles the task of completing the occluded scene surfaces, resulting in a complete 3D scene mesh. The core idea of our method is learning 3D geometry prior from various complete scenes to infer the occluded geometry of an unseen scene from solely depth measurements. We design a coarse-fine hierarchical octree representation coupled with a dual-decoder architecture, i.e., Geo-decoder and 3D Inpainter, which jointly reconstructs the complete 3D scene geometry. The Geo-decoder with detailed representation at fine levels is optimized online for each scene to reconstruct visible surfaces. The 3D Inpainter with abstract representation at coarse levels is trained offline using various scenes to complete occluded surfaces. As a result, while the Geo-decoder is specialized for an individual scene, the 3D Inpainter can be generally applied across different scenes. We evaluate the proposed method on the 3D Completed Room Scene (3D-CRS) and iTHOR datasets, significantly outperforming the SOTA methods by a gain of 16.8% and 24.2% in terms of the completeness of 3D reconstruction. 3D-CRS dataset including a complete 3D mesh of each scene is provided at project webpage.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
444,102
2104.12673
Joint Representation Learning and Novel Category Discovery on Single- and Multi-modal Data
This paper studies the problem of novel category discovery on single- and multi-modal data with labels from different but relevant categories. We present a generic, end-to-end framework to jointly learn a reliable representation and assign clusters to unlabelled data. To avoid over-fitting the learnt embedding to labelled data, we take inspiration from self-supervised representation learning by noise-contrastive estimation and extend it to jointly handle labelled and unlabelled data. In particular, we propose using category discrimination on labelled data and cross-modal discrimination on multi-modal data to augment instance discrimination used in conventional contrastive learning approaches. We further employ Winner-Take-All (WTA) hashing algorithm on the shared representation space to generate pairwise pseudo labels for unlabelled data to better predict cluster assignments. We thoroughly evaluate our framework on large-scale multi-modal video benchmarks Kinetics-400 and VGG-Sound, and image benchmarks CIFAR10, CIFAR100 and ImageNet, obtaining state-of-the-art results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
232,288
1805.03193
Coordination Using Individually Shared Randomness
Two processors output correlated sequences using the help of a coordinator with whom they individually share independent randomness. For the case of unlimited shared randomness, we characterize the rate of communication required from the coordinator to the processors over a broadcast link. We also give an achievable trade-off between the communication and shared randomness rates.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
97,003
2304.14291
EDAPS: Enhanced Domain-Adaptive Panoptic Segmentation
With autonomous industries on the rise, domain adaptation of the visual perception stack is an important research direction due to the cost savings promise. Much prior art was dedicated to domain-adaptive semantic segmentation in the synthetic-to-real context. Despite being a crucial output of the perception stack, panoptic segmentation has been largely overlooked by the domain adaptation community. Therefore, we revisit well-performing domain adaptation strategies from other fields, adapt them to panoptic segmentation, and show that they can effectively enhance panoptic domain adaptation. Further, we study the panoptic network design and propose a novel architecture (EDAPS) designed explicitly for domain-adaptive panoptic segmentation. It uses a shared, domain-robust transformer encoder to facilitate the joint adaptation of semantic and instance features, but task-specific decoders tailored for the specific requirements of both domain-adaptive semantic and instance segmentation. As a result, the performance gap seen in challenging panoptic benchmarks is substantially narrowed. EDAPS significantly improves the state-of-the-art performance for panoptic segmentation UDA by a large margin of 20% on SYNTHIA-to-Cityscapes and even 72% on the more challenging SYNTHIA-to-Mapillary Vistas. The implementation is available at https://github.com/susaha/edaps.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
360,884
2412.00748
Exploring Cognition through Morphological Info-Computational Framework
Traditionally, cognition has been considered a uniquely human capability involving perception, memory, learning, reasoning, and problem-solving. However, recent research shows that cognition is a fundamental ability shared by all living beings, from single cells to complex organisms. This chapter takes an info-computational approach (ICON), viewing natural structures as information and the processes of change in these structures as computations. It is a relational framework dependent on the perspective of a cognizing observer/cognizer. Informational structures are properties of the material substrate, and when focusing on the behavior of the substrate, we discuss morphological computing (MC). ICON and MC are complementary perspectives for a cognizer. Information and computation are inseparably connected with cognition. This chapter explores research connecting nature as a computational structure for a cognizer, with morphological computation, morphogenesis, agency, extended cognition, and extended evolutionary synthesis, using examples of the free energy principle and active inference. It introduces theoretical and practical approaches challenging traditional computational models of cognition limited to abstract symbol processing, highlighting the computational capacities inherent in the material substrate (embodiment). Understanding the embodiment of cognition through its morphological computational basis is crucial for biology, evolution, intelligence theory, AI, robotics, and other fields.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
512,799
2105.06289
A Scalable Algorithm for Anomaly Detection via Learning-Based Controlled Sensing
We address the problem of sequentially selecting and observing processes from a given set to find the anomalies among them. The decision-maker observes one process at a time and obtains a noisy binary indicator of whether or not the corresponding process is anomalous. In this setting, we develop an anomaly detection algorithm that chooses the process to be observed at a given time instant, decides when to stop taking observations, and makes a decision regarding the anomalous processes. The objective of the detection algorithm is to arrive at a decision with an accuracy exceeding a desired value while minimizing the delay in decision making. Our algorithm relies on a Markov decision process defined using the marginal probability of each process being normal or anomalous, conditioned on the observations. We implement the detection algorithm using the deep actor-critic reinforcement learning framework. Unlike prior work on this topic that has exponential complexity in the number of processes, our algorithm has computational and memory requirements that are both polynomial in the number of processes. We demonstrate the efficacy of our algorithm using numerical experiments by comparing it with the state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
235,080
1704.04989
Excitable behaviors
This chapter revisits the concept of excitability, a basic system property of neurons. The focus is on excitable systems regarded as behaviors rather than dynamical systems. By this we mean open systems modulated by specific interconnection properties rather than closed systems classified by their parameter ranges. Modeling, analysis, and synthesis questions can be formulated in the classical language of circuit theory. The input-output characterization of excitability is in terms of the local sensitivity of the current-voltage relationship. It suggests the formulation of novel questions for non-linear system theory, inspired by questions from experimental neurophysiology.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
71,926
2410.01847
Bayes-CATSI: A variational Bayesian deep learning framework for medical time series data imputation
Medical time series datasets feature missing values that need data imputation methods, however, conventional machine learning models fall short due to a lack of uncertainty quantification in predictions. Among these models, the CATSI (Context-Aware Time Series Imputation) stands out for its effectiveness by incorporating a context vector into the imputation process, capturing the global dependencies of each patient. In this paper, we propose a Bayesian Context-Aware Time Series Imputation (Bayes-CATSI) framework which leverages uncertainty quantification offered by variational inference. We consider the time series derived from electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), and electrocardiography (EKG). Variational Inference assumes the shape of the posterior distribution and through minimization of the Kullback-Leibler (KL) divergence it finds variational densities that are closest to the true posterior distribution. Thus, we integrate the variational Bayesian deep learning layers into the CATSI model. Our results show that Bayes-CATSI not only provides uncertainty quantification but also achieves superior imputation performance compared to the CATSI model. Specifically, an instance of Bayes-CATSI outperforms CATSI by 9.57%. We provide an open-source code implementation for applying Bayes-CATSI to other medical data imputation problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
493,986
1812.05223
Phase-field study of crack nucleation and propagation in elastic - perfectly plastic bodies
Crack initiation and propagation in elastic - perfectly plastic bodies is studied in a phase-field or variational gradient damage formulation. A rate-independent formulation that naturally couples elasticity, perfect plasticity and fracture is presented, and used to study crack initiation in notched specimens and crack propagation using a surfing boundary condition. Both plane strain and plane stress are addressed. It is shown that in plane strain, a plastic zone blunts the notch or crack tip which in turn inhibits crack nucleation and propagation. Sufficient load causes the crack to nucleate or unpin, but the crack does so with a finite jump. Therefore the propagation is intermittent or jerky leaving behind a rough surface. In plane stress, failure proceeds with an intense shear zone ahead of the notch or crack tip and the fracture process is not complete.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
116,368
2006.05255
DeepFair: Deep Learning for Improving Fairness in Recommender Systems
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations. Moreover, the trade-off between equity and precision makes it difficult to obtain recommendations that meet both criteria. Here we propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users. Experimental results show that it is possible to make fair recommendations without losing a significant proportion of accuracy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
181,001
1804.08205
Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model
We show how the spellings of known words can help us deal with unknown words in open-vocabulary NLP tasks. The method we propose can be used to extend any closed-vocabulary generative model, but in this paper we specifically consider the case of neural language modeling. Our Bayesian generative story combines a standard RNN language model (generating the word tokens in each sentence) with an RNN-based spelling model (generating the letters in each word type). These two RNNs respectively capture sentence structure and word structure, and are kept separate as in linguistics. By invoking the second RNN to generate spellings for novel words in context, we obtain an open-vocabulary language model. For known words, embeddings are naturally inferred by combining evidence from type spelling and token context. Comparing to baselines (including a novel strong baseline), we beat previous work and establish state-of-the-art results on multiple datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
95,711
2403.19926
Video-Based Human Pose Regression via Decoupled Space-Time Aggregation
By leveraging temporal dependency in video sequences, multi-frame human pose estimation algorithms have demonstrated remarkable results in complicated situations, such as occlusion, motion blur, and video defocus. These algorithms are predominantly based on heatmaps, resulting in high computation and storage requirements per frame, which limits their flexibility and real-time application in video scenarios, particularly on edge devices. In this paper, we develop an efficient and effective video-based human pose regression method, which bypasses intermediate representations such as heatmaps and instead directly maps the input to the output joint coordinates. Despite the inherent spatial correlation among adjacent joints of the human pose, the temporal trajectory of each individual joint exhibits relative independence. In light of this, we propose a novel Decoupled Space-Time Aggregation network (DSTA) to separately capture the spatial contexts between adjacent joints and the temporal cues of each individual joint, thereby avoiding the conflation of spatiotemporal dimensions. Concretely, DSTA learns a dedicated feature token for each joint to facilitate the modeling of their spatiotemporal dependencies. With the proposed joint-wise local-awareness attention mechanism, our method is capable of efficiently and flexibly utilizing the spatial dependency of adjacent joints and the temporal dependency of each joint itself. Extensive experiments demonstrate the superiority of our method. Compared to previous regression-based single-frame human pose estimation methods, DSTA significantly enhances performance, achieving an 8.9 mAP improvement on PoseTrack2017. Furthermore, our approach either surpasses or is on par with the state-of-the-art heatmap-based multi-frame human pose estimation methods. Project page: https://github.com/zgspose/DSTA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
442,541
2012.09340
Roof-GAN: Learning to Generate Roof Geometry and Relations for Residential Houses
This paper presents Roof-GAN, a novel generative adversarial network that generates structured geometry of residential roof structures as a set of roof primitives and their relationships. Given the number of primitives, the generator produces a structured roof model as a graph, which consists of 1) primitive geometry as raster images at each node, encoding facet segmentation and angles; 2) inter-primitive colinear/coplanar relationships at each edge; and 3) primitive geometry in a vector format at each node, generated by a novel differentiable vectorizer while enforcing the relationships. The discriminator is trained to assess the primitive raster geometry, the primitive relationships, and the primitive vector geometry in a fully end-to-end architecture. Qualitative and quantitative evaluations demonstrate the effectiveness of our approach in generating diverse and realistic roof models over the competing methods with a novel metric proposed in this paper for the task of structured geometry generation. Code and data are available at https://github.com/yi-ming-qian/roofgan .
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,035
2210.11479
Exploitation of material consolidation trade-offs in multi-tier complex supply networks
While consolidation strategies form the backbone of many supply chain optimisation problems, exploitation of multi-tier material relationships through consolidation remains an understudied area, despite being a prominent feature of industries that produce complex made-to-order products. In this paper, we propose an optimisation framework for exploiting multi-to-multi relationship between tiers of a supply chain. The resulting formulation is flexible such that quantity discounts, inventory holding, and transport costs can be included. The framework introduces a new trade-off between tiers, leading to cost reductions in one tier but increased costs in the other, which helps to reduce the overall procurement cost in the supply chain. A mixed integer linear programming model is developed and tested with a range of small to large-scale test problems from aerospace manufacturing. Our comparison to benchmark results shows that there is indeed a cost trade-off between two tiers, and that its reduction can be achieved using a holistic approach to reconfiguration. Costs are decreased when second tier fixed ordering costs and the number of machining options increase. Consolidation results in reduced inventory holding costs in all scenarios. Several secondary effects such as simplified supplier selection may also be observed.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
325,332
2301.10536
Understanding and Improving Deep Graph Neural Networks: A Probabilistic Graphical Model Perspective
Recently, graph-based models designed for downstream tasks have significantly advanced research on graph neural networks (GNNs). GNN baselines based on neural message-passing mechanisms such as GCN and GAT perform worse as the network deepens. Therefore, numerous GNN variants have been proposed to tackle this performance degradation problem, including many deep GNNs. However, a unified framework is still lacking to connect these existing models and interpret their effectiveness at a high level. In this work, we focus on deep GNNs and propose a novel view for understanding them. We establish a theoretical framework via inference on a probabilistic graphical model. Given the fixed point equation (FPE) derived from the variational inference on the Markov random fields, the deep GNNs, including JKNet, GCNII, DGCN, and the classical GNNs, such as GCN, GAT, and APPNP, can be regarded as different approximations of the FPE. Moreover, given this framework, more accurate approximations of FPE are brought, guiding us to design a more powerful GNN: coupling graph neural network (CoGNet). Extensive experiments are carried out on citation networks and natural language processing downstream tasks. The results demonstrate that the CoGNet outperforms the SOTA models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
341,837
2412.16464
Transducer-Llama: Integrating LLMs into Streamable Transducer-based Speech Recognition
While large language models (LLMs) have been applied to automatic speech recognition (ASR), the task of making the model streamable remains a challenge. This paper proposes a novel model architecture, Transducer-Llama, that integrates LLMs into a Factorized Transducer (FT) model, naturally enabling streaming capabilities. Furthermore, given that the large vocabulary of LLMs can cause data sparsity issues and increased training costs for spoken language systems, this paper introduces an efficient vocabulary adaptation technique to align LLMs with speech system vocabularies. The results show that directly optimizing the FT model with a strong pre-trained LLM-based predictor using the RNN-T loss yields some but limited improvements over a smaller pre-trained LM predictor. Therefore, this paper proposes a weak-to-strong LM swap strategy, using a weak LM predictor during RNN-T loss training and then replacing it with a strong LLM. After LM replacement, the minimum word error rate (MWER) loss is employed to finetune the integration of the LLM predictor with the Transducer-Llama model. Experiments on the LibriSpeech and large-scale multi-lingual LibriSpeech corpora show that the proposed streaming Transducer-Llama approach gave a 17% relative WER reduction (WERR) over a strong FT baseline and a 32% WERR over an RNN-T baseline.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
519,532
2403.03390
Performance Evaluation of Semi-supervised Learning Frameworks for Multi-Class Weed Detection
Effective weed control plays a crucial role in optimizing crop yield and enhancing agricultural product quality. However, the reliance on herbicide application not only poses a critical threat to the environment but also promotes the emergence of resistant weeds. Fortunately, recent advances in precision weed management enabled by ML and DL provide a sustainable alternative. Despite great progress, existing algorithms are mainly developed based on supervised learning approaches, which typically demand large-scale datasets with manual-labeled annotations, which is time-consuming and labor-intensive. As such, label-efficient learning methods, especially semi-supervised learning, have gained increased attention in the broader domain of computer vision and have demonstrated promising performance. These methods aim to utilize a small number of labeled data samples along with a great number of unlabeled samples to develop high-performing models comparable to the supervised learning counterpart trained on a large amount of labeled data samples. In this study, we assess the effectiveness of a semi-supervised learning framework for multi-class weed detection, employing two well-known object detection frameworks, namely FCOS and Faster-RCNN. Specifically, we evaluate a generalized student-teacher framework with an improved pseudo-label generation module to produce reliable pseudo-labels for the unlabeled data. To enhance generalization, an ensemble student network is employed to facilitate the training process. Experimental results show that the proposed approach is able to achieve approximately 76\% and 96\% detection accuracy as the supervised methods with only 10\% of labeled data in CottonWeedDet3 and CottonWeedDet12, respectively. We offer access to the source code, contributing a valuable resource for ongoing semi-supervised learning research in weed detection and beyond.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
435,167
1908.03364
Deep Learning based Wearable Assistive System for Visually Impaired People
In this paper, we propose a deep learning based assistive system to improve the environment perception experience of visually impaired (VI). The system is composed of a wearable terminal equipped with an RGBD camera and an earphone, a powerful processor mainly for deep learning inferences and a smart phone for touch-based interaction. A data-driven learning approach is proposed to predict safe and reliable walkable instructions using RGBD data and the established semantic map. This map is also used to help VI understand their 3D surrounding objects and layout through well-designed touchscreen interactions. The quantitative and qualitative experimental results show that our learning based obstacle avoidance approach achieves excellent results in both indoor and outdoor datasets with low-lying obstacles. Meanwhile, user studies have also been carried out in various scenarios and showed the improvement of VI's environment perception experience with our system.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
141,226
2207.11762
Anti-Overestimation Dialogue Policy Learning for Task-Completion Dialogue System
A dialogue policy module is an essential part of task-completion dialogue systems. Recently, increasing interest has focused on reinforcement learning (RL)-based dialogue policy. Its favorable performance and wise action decisions rely on an accurate estimation of action values. The overestimation problem is a widely known issue of RL since its estimate of the maximum action value is larger than the ground truth, which results in an unstable learning process and suboptimal policy. This problem is detrimental to RL-based dialogue policy learning. To mitigate this problem, this paper proposes a dynamic partial average estimator (DPAV) of the ground truth maximum action value. DPAV calculates the partial average between the predicted maximum action value and minimum action value, where the weights are dynamically adaptive and problem-dependent. We incorporate DPAV into a deep Q-network as the dialogue policy and show that our method can achieve better or comparable results compared to top baselines on three dialogue datasets of different domains with a lower computational load. In addition, we also theoretically prove the convergence and derive the upper and lower bounds of the bias compared with those of other methods.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
309,773
2308.06622
DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning
Neural networks are prone to learn easy solutions from superficial statistics in the data, namely shortcut learning, which impairs generalization and robustness of models. We propose a data augmentation strategy, named DFM-X, that leverages knowledge about frequency shortcuts, encoded in Dominant Frequencies Maps computed for image classification models. We randomly select X% training images of certain classes for augmentation, and process them by retaining the frequencies included in the DFMs of other classes. This strategy compels the models to leverage a broader range of frequencies for classification, rather than relying on specific frequency sets. Thus, the models learn more deep and task-related semantics compared to their counterpart trained with standard setups. Unlike other commonly used augmentation techniques which focus on increasing the visual variations of training data, our method targets exploiting the original data efficiently, by distilling prior knowledge about destructive learning behavior of models from data. Our experimental results demonstrate that DFM-X improves robustness against common corruptions and adversarial attacks. It can be seamlessly integrated with other augmentation techniques to further enhance the robustness of models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,204
2201.04131
Optimally compressing VC classes
Resolving a conjecture of Littlestone and Warmuth, we show that any concept class of VC-dimension $d$ has a sample compression scheme of size $d$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
275,026
1906.03644
The Implicit Metropolis-Hastings Algorithm
Recent works propose using the discriminator of a GAN to filter out unrealistic samples of the generator. We generalize these ideas by introducing the implicit Metropolis-Hastings algorithm. For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density-ratio and then generating a chain of samples. Since the approximation of the density ratio introduces an error at every step of the chain, it is crucial to analyze the stationary distribution of such a chain. For that purpose, we present a theoretical result stating that the discriminator loss upper bounds the total variation distance between the target distribution and the stationary distribution. Finally, we validate the proposed algorithm both for independent and Markov proposals on CIFAR-10 and CelebA datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
134,444
2305.10996
Entropy of microcanonical finite-graph ensembles
The entropy of random graph ensembles has gained widespread attention in the field of graph theory and network science. We consider microcanonical ensembles of simple graphs with prescribed degree sequences. We demonstrate that the mean-field approximations of the generating function using the Chebyshev-Hermite polynomials provide estimates for the entropy of finite-graph ensembles. Our estimate reproduces the Bender-Canfield formula in the limit of large graphs.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
365,327
2105.03265
LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping
Biologically inspired algorithms for simultaneous localization and mapping (SLAM) such as RatSLAM have been shown to yield effective and robust robot navigation in both indoor and outdoor environments. One drawback however is the sensitivity to perceptual aliasing due to the template matching of low-dimensional sensory templates. In this paper, we propose an unsupervised representation learning method that yields low-dimensional latent state descriptors that can be used for RatSLAM. Our method is sensor agnostic and can be applied to any sensor modality, as we illustrate for camera images, radar range-doppler maps and lidar scans. We also show how combining multiple sensors can increase the robustness, by reducing the number of false matches. We evaluate on a dataset captured with a mobile robot navigating in a warehouse-like environment, moving through different aisles with similar appearance, making it hard for the SLAM algorithms to disambiguate locations.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
234,094
2206.14971
Boosting 3D Object Detection by Simulating Multimodality on Point Clouds
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector. The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference. We design a novel framework to realize the approach: response distillation to focus on the crucial response samples and avoid the background samples; sparse-voxel distillation to learn voxel semantics and relations from the estimated crucial voxels; a fine-grained voxel-to-point distillation to better attend to features of small and distant objects; and instance distillation to further enhance the deep-feature consistency. Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors and even surpasses the baseline LiDAR-image detector on the key NDS metric, filling 72% mAP gap between the single- and multi-modality detectors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
305,438
2004.07561
AMPSO: Artificial Multi-Swarm Particle Swarm Optimization
In this paper we propose a novel artificial multi-swarm PSO which consists of an exploration swarm, an artificial exploitation swarm and an artificial convergence swarm. The exploration swarm is a set of equal-sized sub-swarms randomly distributed around the particle space, the exploitation swarm is artificially generated from a perturbation of the best particle of the exploration swarm for a fixed period of iterations, and the convergence swarm is artificially generated from a Gaussian perturbation of the best particle in the exploitation swarm once it stagnates. The exploration and exploitation operations are alternately carried out until the evolution rate of the exploitation is smaller than a threshold or the maximum number of iterations is reached. An adaptive inertia weight strategy is applied to different swarms to guarantee their exploration and exploitation performance. To guarantee the accuracy of the results, a novel diversity scheme based on the positions and fitness values of the particles is proposed to control the exploration, exploitation and convergence processes of the swarms. To mitigate the inefficiency issue due to the use of diversity, two swarm update techniques are proposed to get rid of lousy particles such that good results can be achieved within a fixed number of iterations. The effectiveness of AMPSO is validated on all the functions in the CEC2015 test suite, by comparing with a comprehensive set of 16 algorithms, including the most recent well-performing PSO variants and some other non-PSO optimization algorithms.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
172,815
2309.08163
Self-Assessment Tests are Unreliable Measures of LLM Personality
As large language models (LLM) evolve in their capabilities, various recent studies have tried to quantify their behavior using psychological tools created to study human behavior. One such example is the measurement of "personality" of LLMs using self-assessment personality tests developed to measure human personality. Yet almost none of these works verify the applicability of these tests on LLMs. In this paper, we analyze the reliability of LLM personality scores obtained from self-assessment personality tests using two simple experiments. We first introduce the property of prompt sensitivity, where three semantically equivalent prompts representing three intuitive ways of administering self-assessment tests on LLMs are used to measure the personality of the same LLM. We find that all three prompts lead to very different personality scores, a difference that is statistically significant for all traits in a large majority of scenarios. We then introduce the property of option-order symmetry for personality measurement of LLMs. Since most of the self-assessment tests exist in the form of multiple-choice questions (MCQs), we argue that the scores should also be robust to not just the prompt template but also the order in which the options are presented. This test unsurprisingly reveals that the self-assessment test scores are not robust to the order of the options. These simple tests, done on ChatGPT and three Llama2 models of different sizes, show that self-assessment personality tests created for humans are unreliable measures of personality in LLMs.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
392,058
1909.02621
TEASPN: Framework and Protocol for Integrated Writing Assistance Environments
Language technologies play a key role in assisting people with their writing. Although there has been steady progress in e.g., grammatical error correction (GEC), human writers are yet to benefit from this progress due to the high development cost of integrating with writing software. We propose TEASPN, a protocol and an open-source framework for achieving integrated writing assistance environments. The protocol standardizes the way writing software communicates with servers that implement such technologies, allowing developers and researchers to integrate the latest developments in natural language processing (NLP) with low cost. As a result, users can enjoy the integrated experience in their favorite writing software. The results from experiments with human participants show that users use a wide range of technologies and rate their writing experience favorably, allowing them to write more fluent text.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
144,241
2501.09137
Gradient Descent Converges Linearly to Flatter Minima than Gradient Flow in Shallow Linear Networks
We study the gradient descent (GD) dynamics of a depth-2 linear neural network with a single input and output. We show that GD converges at an explicit linear rate to a global minimum of the training loss, even with a large stepsize -- about $2/\textrm{sharpness}$. It still converges for even larger stepsizes, but may do so very slowly. We also characterize the solution to which GD converges, which has lower norm and sharpness than the gradient flow solution. Our analysis reveals a trade off between the speed of convergence and the magnitude of implicit regularization. This sheds light on the benefits of training at the ``Edge of Stability'', which induces additional regularization by delaying convergence and may have implications for training more complex models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,027
1711.07601
Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
User experience in modern content discovery applications critically depends on high-quality personalized recommendations. However, building systems that provide such recommendations presents a major challenge due to a massive pool of items, a large number of users, and requirements for recommendations to be responsive to user actions and generated on demand in real-time. Here we present Pixie, a scalable graph-based real-time recommender system that we developed and deployed at Pinterest. Given a set of user-specific pins as a query, Pixie selects in real-time from billions of possible pins those that are most related to the query. To generate recommendations, we develop Pixie Random Walk algorithm that utilizes the Pinterest object graph of 3 billion nodes and 17 billion edges. Experiments show that recommendations provided by Pixie lead up to 50% higher user engagement when compared to the previous Hadoop-based production system. Furthermore, we develop a graph pruning strategy that leads to an additional 58% improvement in recommendations. Last, we discuss system aspects of Pixie, where a single server executes 1,200 recommendation requests per second with 60 millisecond latency. Today, systems backed by Pixie contribute to more than 80% of all user engagement on Pinterest.
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
true
85,022
2103.00435
Integrating Over-the-Air Federated Learning and Non-Orthogonal Multiple Access: What Role can RIS Play?
With the aim of integrating over-the-air federated learning (AirFL) and non-orthogonal multiple access (NOMA) into an on-demand universal framework, this paper proposes a novel reconfigurable intelligent surface (RIS)-aided hybrid network by leveraging the RIS to flexibly adjust the signal processing order of heterogeneous data. The objective of this work is to maximize the achievable hybrid rate by jointly optimizing the transmit power, controlling the receive scalar, and designing the phase shifts. Since the concurrent transmissions of all computation and communication signals are aided by the discrete phase shifts at the RIS, the considered problem (P0) is a challenging mixed integer programming problem. To tackle this intractable issue, we decompose the original problem (P0) into a non-convex problem (P1) and a combinatorial problem (P2), which are characterized by the continuous and discrete variables, respectively. For the transceiver design problem (P1), the power allocation subproblem is first solved by invoking the difference-of-convex programming, and then the receive control subproblem is addressed by using the successive convex approximation, where the closed-form expressions of simplified cases are derived to obtain deep insights. For the reflection design problem (P2), the relaxation-then-quantization method is adopted to find a suboptimal solution for striking a trade-off between complexity and performance. Afterwards, an alternating optimization algorithm is developed to solve the non-linear and non-convex problem (P0) iteratively. Finally, simulation results reveal that 1) the proposed RIS-aided hybrid network can support the on-demand communication and computation efficiently, 2) the performance gains can be improved by properly selecting the location of the RIS, and 3) the designed algorithms are also applicable to conventional networks with only AirFL or NOMA users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
222,275
2307.05869
Autonomous and Ubiquitous In-node Learning Algorithms of Active Directed Graphs and Its Storage Behavior
Memory is an important cognitive function for humans. How a brain with such low power consumption can accomplish such a complex memory function is undoubtedly fascinating. Engram theory views memory as the co-activation of specific neuronal clusters. From the perspective of graph theory, nodes represent neurons, and directed edges represent synapses. The memory engram is then the connected subgraph formed between the activated nodes. In this paper, we use subgraphs as physical carriers of information and propose a parallel distributed information storage algorithm based on node scale in active-directed graphs. An active-directed graph is defined as a graph in which each node has autonomous and independent behavior and relies only on information obtained within its local field of view to make decisions. Unlike static directed graphs used for recording facts, active-directed graphs are decentralized like biological neuron networks and do not have a super manager who has a global view and can control the behavior of each node. Distinct from traditional algorithms with a global field of view, this algorithm is characterized by nodes collaborating globally on resource usage through their limited local fields of view. While this strategy may not achieve global optimality as well as algorithms with a global field of view, it offers better robustness, concurrency, decentralization, and bioviability. Finally, the algorithm was tested for network capacity, fault tolerance, and robustness. It was found that the algorithm exhibits a larger network capacity in a sparser network structure because the subgraph generated by a single sample is not a whole but consists of multiple weakly connected components. In this case, the network capacity can be understood as the number of permutations of several weakly connected components in the network.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
378,878
2203.01386
Exploring Hierarchical Graph Representation for Large-Scale Zero-Shot Image Classification
The main question we address in this paper is how to scale up visual recognition of unseen classes, also known as zero-shot learning, to tens of thousands of categories as in the ImageNet-21K benchmark. At this scale, especially with many fine-grained categories included in ImageNet-21K, it is critical to learn quality visual semantic representations that are discriminative enough to recognize unseen classes and distinguish them from seen ones. We propose a \emph{H}ierarchical \emph{G}raphical knowledge \emph{R}epresentation framework for the confidence-based classification method, dubbed as HGR-Net. Our experimental results demonstrate that HGR-Net can grasp class inheritance relations by utilizing hierarchical conceptual knowledge. Our method significantly outperformed all existing techniques, boosting the performance by 7\% compared to the runner-up approach on the ImageNet-21K benchmark. We show that HGR-Net is learning-efficient in few-shot scenarios. We also analyzed our method on smaller datasets like ImageNet-21K-P, 2-hops and 3-hops, demonstrating its generalization ability. Our benchmark and code are available at https://kaiyi.me/p/hgrnet.html.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
283,355
2407.05934
Graph Anomaly Detection with Noisy Labels by Reinforcement Learning
Graph anomaly detection (GAD) has been widely applied in many areas, e.g., fraud detection in finance and robot accounts in social networks. Existing methods are dedicated to identifying the outlier nodes that deviate from normal ones. While they heavily rely on high-quality annotation, which is hard to obtain in real-world scenarios, this could lead to severely degraded performance based on noisy labels. Thus, we are motivated to cut the edges of suspicious nodes to alleviate the impact of noise. However, it remains difficult to precisely identify the nodes with noisy labels. Moreover, it is hard to quantitatively evaluate the regret of cutting the edges, which may have either positive or negative influences. To this end, we propose a novel framework REGAD, i.e., REinforced Graph Anomaly Detector. Specifically, we aim to maximize the performance improvement (AUC) of a base detector by cutting noisy edges approximated through the nodes with high-confidence labels. (i) We design a tailored action and search space to train a policy network to carefully prune edges step by step, where only a few suspicious edges are prioritized in each step. (ii) We design a policy-in-the-loop mechanism to iteratively optimize the policy based on the feedback from base detector. The overall performance is evaluated by the cumulative rewards. Extensive experiments are conducted on three datasets under different anomaly ratios. The results indicate the superior performance of our proposed REGAD.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
471,185
2005.13272
Waveform relaxation for low frequency coupled field/circuit differential-algebraic models of index 2
Motivated by the task to design quench protection systems for superconducting magnets in particle accelerators we address a coupled field/circuit simulation based on a magneto-quasistatic field modeling. We investigate how a waveform relaxation of Gau{\ss}-Seidel type performs for a coupled simulation when circuit solving packages are used that describe the circuit by the modified nodal analysis. We present sufficient convergence criteria for the coupled simulation of FEM discretised field models and circuit models formed by a differential-algebraic equation (DAE) system of index 2. In particular, we demonstrate by a simple benchmark system the drastic influence of the circuit topology on the convergence behavior of the coupled simulation.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
178,966
2211.10138
Joint nnU-Net and Radiomics Approaches for Segmentation and Prognosis of Head and Neck Cancers with PET/CT images
Automatic segmentation of head and neck cancer (HNC) tumors and lymph nodes plays a crucial role in treatment strategy optimization and prognosis analysis. This study aims to employ nnU-Net for automatic segmentation and radiomics for recurrence-free survival (RFS) prediction using pretreatment PET/CT images in a multi-center HNC cohort. A multi-center HNC dataset with 883 patients (524 patients for training, 359 for testing) was provided in HECKTOR 2022. A bounding box of the extended oropharyngeal region was retrieved for each patient with a fixed size of 224 x 224 x 224 $mm^{3}$. Then a 3D nnU-Net architecture was adopted for automatic segmentation of the primary tumor and lymph nodes simultaneously. Based on the predicted segmentation, ten conventional features and 346 standardized radiomics features were extracted for each patient. Three prognostic models were constructed containing conventional and radiomics features alone, and their combinations, by multivariate CoxPH modelling. The statistical harmonization method, ComBat, was explored towards reducing multicenter variations. Dice score and C-index were used as evaluation metrics for the segmentation and prognosis tasks, respectively. For the segmentation task, we achieved a mean dice score around 0.701 for the primary tumor and lymph nodes by 3D nnU-Net. For the prognostic task, the conventional and radiomics models obtained C-indices of 0.658 and 0.645 in the test set, respectively, while the combined model did not improve the prognostic performance, with a C-index of 0.648.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
331,221
1710.06574
The Effects of Memory Replay in Reinforcement Learning
Experience replay is a key technique behind many recent advances in deep reinforcement learning. Allowing the agent to learn from earlier memories can speed up learning and break undesirable temporal correlations. Despite its wide-spread application, very little is understood about the properties of experience replay. How does the amount of memory kept affect learning dynamics? Does it help to prioritize certain experiences? In this paper, we address these questions by formulating a dynamical systems ODE model of Q-learning with experience replay. We derive analytic solutions of the ODE for a simple setting. We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance. Too much or too little memory both slow down learning. Moreover, we characterize regimes where prioritized replay harms the agent's learning. We show that our analytic solutions have excellent agreement with experiments. Finally, we propose a simple algorithm for adaptively changing the memory buffer size which achieves consistently good empirical performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
82,799
1805.04201
Learning to Grasp Without Seeing
Can a robot grasp an unknown object without seeing it? In this paper, we present a tactile-sensing based approach to this challenging problem of grasping novel objects without prior knowledge of their location or physical properties. Our key idea is to combine touch based object localization with tactile based re-grasping. To train our learning models, we created a large-scale grasping dataset, including more than 30 RGB frames and over 2.8 million tactile samples from 7800 grasp interactions of 52 objects. To learn a representation of tactile signals, we propose an unsupervised auto-encoding scheme, which shows a significant improvement of 4-9% over prior methods on a variety of tactile perception tasks. Our system consists of two steps. First, our touch localization model sequentially 'touch-scans' the workspace and uses a particle filter to aggregate beliefs from multiple hits of the target. It outputs an estimate of the object's location, from which an initial grasp is established. Next, our re-grasping model learns to progressively improve grasps with tactile feedback based on the learned features. This network learns to estimate grasp stability and predict adjustment for the next grasp. Re-grasping thus is performed iteratively until our model identifies a stable grasp. Finally, we demonstrate extensive experimental results on grasping a large set of novel objects using tactile sensing alone. Furthermore, when applied on top of a vision-based policy, our re-grasping model significantly boosts the overall accuracy by 10.6%. We believe this is the first attempt at learning to grasp with only tactile sensing and without any prior object knowledge.
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
97,191
2110.01864
Mobile authentication of copy detection patterns: how critical is to know fakes?
Protection of physical objects against counterfeiting is an important task for modern economies. In recent years, high-quality counterfeits have come closer to originals thanks to the rapid advancement of digital technologies. To combat these counterfeits, an anti-counterfeiting technology based on hand-crafted randomness implemented in the form of copy detection patterns (CDP) has been proposed, enabling a link between the physical and digital worlds and being used in various brand protection applications. Modern mobile phone technologies make the verification process of CDP easier and available to end customers. Despite considerable interest and attractiveness, CDP authentication based on mobile phone imaging remains insufficiently studied. In this respect, in this paper we investigate CDP authentication under real-life conditions, with the codes printed on an industrial printer and enrolled via a modern mobile phone under regular light conditions. The authentication aspects of the obtained CDP are investigated with respect to four types of copy fakes. The impact of the fake type used for training the authentication classifier is studied in two scenarios: (i) supervised binary classification under various assumptions about the fakes and (ii) one-class classification under unknown fakes. The obtained results show that modern machine-learning approaches and the technical capacity of modern mobile phones make CDP authentication under unknown fakes feasible with respect to the considered types of fakes and code design.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
258,923
2101.00922
Zombie Account Detection Based on Community Detection and Uneven Assignation PageRank
In social media, there is a large number of potential zombie accounts which may have a negative impact on public opinion. Traditionally, the PageRank algorithm is used to detect zombie accounts. However, problems exist such as the need for large RAM to store the adjacency matrix or adjacency list, and importance values that approach zero for large graphs. To solve the first problem, since the structure of social media makes the graph divisible, we applied the community detection algorithm Louvain to decompose the whole graph into 1,002 subgraphs. The modularity of 0.58 shows the result is effective. To solve the second problem, we performed the uneven assignation PageRank algorithm to calculate the importance of nodes in each community. Then, a threshold is set to distinguish zombie accounts from normal accounts. The result shows that about 20% of accounts in the dataset are zombie accounts and they center in tier-one cities in China such as Beijing, Shanghai, and Guangzhou. In the future, a classification algorithm with semi-supervised learning can be used to detect zombie accounts.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
true
214,236
2404.08923
Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space
Multimodal video sentiment analysis aims to integrate multiple modal information to analyze the opinions and attitudes of speakers. Most previous work focuses on exploring the semantic interactions of intra- and inter-modality. However, these works ignore the reliability of multimodality, i.e., modalities tend to contain noise, semantic ambiguity, missing modalities, etc. In addition, previous multimodal approaches treat different modalities equally, largely ignoring their different contributions. Furthermore, existing multimodal sentiment analysis methods directly regress sentiment scores without considering ordinal relationships within sentiment categories, with limited performance. To address the aforementioned problems, we propose a trustworthy multimodal sentiment ordinal network (TMSON) to improve performance in sentiment analysis. Specifically, we first devise a unimodal feature extractor for each modality to obtain modality-specific features. Then, an uncertainty distribution estimation network is customized, which estimates the unimodal uncertainty distributions. Next, Bayesian fusion is performed on the learned unimodal distributions to obtain multimodal distributions for sentiment prediction. Finally, an ordinal-aware sentiment space is constructed, where ordinal regression is used to constrain the multimodal distributions. Our proposed TMSON outperforms baselines on multimodal sentiment analysis tasks, and empirical results demonstrate that TMSON is capable of reducing uncertainty to obtain more robust predictions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
446,461
1201.2395
Polynomial Regression on Riemannian Manifolds
In this paper we develop the theory of parametric polynomial regression in Riemannian manifolds and Lie groups. We show application of Riemannian polynomial regression to shape analysis in Kendall shape space. Results are presented, showing the power of polynomial regression on the classic rat skull growth data of Bookstein as well as the analysis of the shape changes associated with aging of the corpus callosum from the OASIS Alzheimer's study.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
13,775
2109.13827
Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence
While recent advances in AI-based automated decision-making have shown many benefits for businesses and society, they also come at a cost. It has long been known that a high level of automation of decisions can lead to various drawbacks, such as automation bias and deskilling. In particular, the deskilling of knowledge workers is a major issue, as they are the same people who should also train, challenge and evolve AI. To address this issue, we conceptualize a new class of DSS, namely Intelligent Decision Assistance (IDA) based on a literature review of two different research streams -- DSS and automation. IDA supports knowledge workers without influencing them through automated decision-making. Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations. To test this conceptualization, we develop hypotheses on the impacts of IDA and provide first evidence for their validity based on empirical studies in the literature.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
257,755
1711.05953
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently instigated wide research interest. However, it is still an ill-posed problem and most methods rely on prior models hence undermining the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture to learn accurate 3D facial curves from low resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprises over 11 million EPIs/curves. The estimated facial curves are merged into a single pointcloud to which a surface is fitted to get the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to recent state of the art.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
84,684
2306.01780
Simulation of a first prototypical 3D solution for Indoor Localization based on Directed and Reflected Signals
We introduce a solution for a specific case of Indoor Localization which involves a directed signal, a reflected signal from the wall and the time difference between them. This solution includes robust localization with a given wall, finding the right wall from a group of walls, obtaining the reflecting wall from measurements, using averaging techniques for improving measurements with errors and successfully grouping measurements regarding reflecting walls. It also includes performing self-calibration by computation of wall distance and direction introducing algorithms such as All pairs, Disjoint pairs and Overlapping pairs and clustering walls based on Inversion and Gnomonic Projection. Several of these algorithms are then compared in order to ameliorate the effects of measurement errors.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
370,601
1502.07448
Automatic Optimization of Hardware Accelerators for Image Processing
In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such architectures is a challenging task, with respect to the typical FPGA-specific design criteria: Achievable overall algorithm latency and resource usage of FPGA primitives (BRAM, FF, LUT, and DSP). High-Level Synthesis (HLS) dramatically simplifies this task by enabling the description of algorithms in well-known higher-level languages (C/C++) and their automatic synthesis, which can be accomplished by HLS tools. However, algorithm developers still need expert knowledge about the target architecture, in order to achieve satisfying results. Therefore, in previous work, we have shown that elevating the description of image algorithms to an even higher abstraction level, by using a Domain-Specific Language (DSL), can significantly cut down the complexity for designing such algorithms for FPGAs. To give the developer even more control over the common trade-off, latency vs. resource usage, we will present an automatic optimization process where these criteria are analyzed and fed back to the DSL compiler, in order to generate code that is closer to the desired design specifications. Finally, we generate code for stereo block matching algorithms and compare it with handwritten implementations to quantify the quality of our results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
40,582
2408.12022
Understanding Epistemic Language with a Bayesian Theory of Mind
How do people understand and evaluate claims about others' beliefs, even though these beliefs cannot be directly observed? In this paper, we introduce a cognitive model of epistemic language interpretation, grounded in Bayesian inferences about other agents' goals, beliefs, and intentions: a language-augmented Bayesian theory-of-mind (LaBToM). By translating natural language into an epistemic ``language-of-thought'', then evaluating these translations against the inferences produced by inverting a probabilistic generative model of rational action and perception, LaBToM captures graded plausibility judgments about epistemic claims. We validate our model in an experiment where participants watch an agent navigate a maze to find keys hidden in boxes needed to reach their goal, then rate sentences about the agent's beliefs. In contrast with multimodal LLMs (GPT-4o, Gemini Pro) and ablated models, our model correlates highly with human judgments for a wide range of expressions, including modal language, uncertainty expressions, knowledge claims, likelihood comparisons, and attributions of false belief.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
482,548
cs/0508012
n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case where packet-loss probabilities and side entropies are allowed to be unequal and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side-quantizers are given by that of an $L$-dimensional sphere independent of the choice of lattices. Furthermore, we show that the optimal bit-distribution among the descriptions is not unique. In fact, within certain limits, bits can be arbitrarily distributed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
538,850
2401.06426
UPDP: A Unified Progressive Depth Pruner for CNN and Vision Transformer
Traditional channel-wise pruning methods by reducing network channels struggle to effectively prune efficient CNN models with depth-wise convolutional layers and certain efficient modules, such as popular inverted residual blocks. Prior depth pruning methods by reducing network depths are not suitable for pruning some efficient models due to the existence of some normalization layers. Moreover, finetuning the subnet by directly removing activation layers would corrupt the original model weights, hindering the pruned model from achieving high performance. To address these issues, we propose a novel depth pruning method for efficient models. Our approach proposes a novel block pruning strategy and progressive training method for the subnet. Additionally, we extend our pruning method to vision transformer models. Experimental results demonstrate that our method consistently outperforms existing depth pruning methods across various pruning configurations. Applying our method to ConvNeXtV1, we obtained three pruned ConvNeXtV1 models which surpass most SOTA efficient models with comparable inference performance. Our method also achieves state-of-the-art pruning performance on the vision transformer model.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
421,152
2204.08941
CodexDB: Generating Code for Processing SQL Queries using GPT-3 Codex
CodexDB is an SQL processing engine whose internals can be customized via natural language instructions. CodexDB is based on OpenAI's GPT-3 Codex model which translates text into code. It is a framework on top of GPT-3 Codex that decomposes complex SQL queries into a series of simple processing steps, described in natural language. Processing steps are enriched with user-provided instructions and descriptions of database properties. Codex translates the resulting text into query processing code. An early prototype of CodexDB is able to generate correct code for a majority of queries of the WikiSQL benchmark and can be customized in various ways.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
292,262
2212.09098
Mask-FPAN: Semi-Supervised Face Parsing in the Wild With De-Occlusion and UV GAN
Fine-grained semantic segmentation of a person's face and head, including facial parts and head components, has progressed a great deal in recent years. However, it remains a challenging task, in which ambiguous occlusions and large pose variations are particularly difficult to handle. To overcome these difficulties, we propose a novel framework termed Mask-FPAN. It uses a de-occlusion module that learns to parse occluded faces in a semi-supervised way. In particular, face landmark localization, face occlusion estimations, and detected head poses are taken into account. A 3D morphable face model combined with the UV GAN improves the robustness of 2D face parsing. In addition, we introduce two new datasets named FaceOccMask-HQ and CelebAMaskOcc-HQ for face parsing work. The proposed Mask-FPAN framework addresses the face parsing problem in the wild and shows significant performance improvements with MIOU from 0.7353 to 0.9013 compared to the state-of-the-art on challenging face datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
337,001
1401.8053
Hallucinating optimal high-dimensional subspaces
Linear subspace representations of appearance variation are pervasive in computer vision. This paper addresses the problem of robustly matching such subspaces (computing the similarity between them) when they are used to describe the scope of variations within sets of images of different (possibly greatly so) scales. A naive solution of projecting the low-scale subspace into the high-scale image space is described first and subsequently shown to be inadequate, especially at large scale discrepancies. A successful approach is proposed instead. It consists of (i) an interpolated projection of the low-scale subspace into the high-scale space, which is followed by (ii) a rotation of this initial estimate within the bounds of the imposed ``downsampling constraint''. The optimal rotation is found in the closed-form which best aligns the high-scale reconstruction of the low-scale subspace with the reference it is compared to. The method is evaluated on the problem of matching sets of (i) face appearances under varying illumination and (ii) object appearances under varying viewpoint, using two large data sets. In comparison to the naive matching, the proposed algorithm is shown to greatly increase the separation of between-class and within-class similarities, as well as produce far more meaningful modes of common appearance on which the match score is based.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
30,501
1808.03015
Radon Inversion via Deep Learning
The Radon transform is widely used in the physical and life sciences and one of its major applications is X-ray computed tomography (X-ray CT), which is significant in modern health examination. The Radon inversion or image reconstruction is challenging due to the potentially defective Radon projections. Conventionally, the reconstruction process contains several ad hoc stages to approximate the corresponding Radon inversion. Each of the stages is highly dependent on the results of the previous stage. In this paper, we propose a novel unified framework for Radon inversion via deep learning (DL). The Radon inversion can be approximated by the proposed framework in an end-to-end fashion instead of processing step-by-step with multiple stages. For simplicity, the proposed framework is abbreviated as iRadonMap (inverse Radon transform approximation). Specifically, we implement the iRadonMap as an appropriative neural network, of which the architecture can be divided into two segments. In the first segment, a learnable fully-connected filtering layer is used to filter the Radon projections along the view-angle direction, which is followed by a learnable sinusoidal back-projection layer to transfer the filtered Radon projections into an image. The second segment is a common neural network architecture to further improve the reconstruction performance in the image domain. The iRadonMap is overall optimized by training on a large number of generic images from the ImageNet database. To evaluate the performance of the iRadonMap, clinical patient data is used. Qualitative results show promising reconstruction performance of the iRadonMap.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
104,866
1912.03359
Ultra-Reliable and Low-Latency Vehicular Communication: An Active Learning Approach
In this letter, an age of information (AoI)-aware transmission power and resource block (RB) allocation technique for vehicular communication networks is proposed. Due to the highly dynamic nature of vehicular networks, gaining a prior knowledge about the network dynamics, i.e., wireless channels and interference, in order to allocate resources, is challenging. Therefore, to effectively allocate power and RBs, the proposed approach allows the network to actively learn its dynamics by balancing a tradeoff between minimizing the probability that the vehicles' AoI exceeds a predefined threshold and maximizing the knowledge about the network dynamics. In this regard, using a Gaussian process regression (GPR) approach, an online decentralized strategy is proposed to actively learn the network dynamics, estimate the vehicles' future AoI, and proactively allocate resources. Simulation results show a significant improvement in terms of AoI violation probability, compared to several baselines, with a reduction of at least 50%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
156,567
2406.16502
LOGCAN++: Adaptive Local-global class-aware network for semantic segmentation of remote sensing imagery
Remote sensing images are usually characterized by complex backgrounds, scale and orientation variations, and large intra-class variance. General semantic segmentation methods usually fail to fully investigate the above issues, and thus their performance on remote sensing image segmentation is limited. In this paper, we propose LOGCAN++, a semantic segmentation model customized for remote sensing images, which is made up of a Global Class Awareness (GCA) module and several Local Class Awareness (LCA) modules. The GCA module captures global representations for class-level context modeling to reduce the interference of background noise. The LCA module generates local class representations as intermediate perceptual elements to indirectly associate pixels with the global class representations, targeting the large intra-class variance problem. In particular, we introduce affine transformations in the LCA module for adaptive extraction of local class representations to effectively tolerate scale and orientation variations in remotely sensed images. Extensive experiments on three benchmark datasets show that our LOGCAN++ outperforms current mainstream general and remote sensing semantic segmentation methods and achieves a better trade-off between speed and accuracy. Code is available at https://github.com/xwmaxwma/rssegmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
467,156
1811.07665
FD-GAN: Face-demorphing generative adversarial network for restoring accomplice's facial image
Face morphing attacks have proved to be a serious threat to existing face recognition systems. Although a few face morphing detection methods have been put forward, the restoration of the face morphing accomplice's facial image remains a challenging problem. In this paper, a face de-morphing generative adversarial network (FD-GAN) is proposed to restore the accomplice's facial image. It utilizes a symmetric dual network architecture and two levels of restoration losses to separate the identity feature of the morphing accomplice. By exploiting the captured facial image (containing the criminal's identity) from the face recognition system and the morphed image stored in the e-passport system (containing both the criminal's and accomplice's identities), the FD-GAN can effectively restore the accomplice's facial image. Experimental results and analysis demonstrate the effectiveness of the proposed scheme. It has great potential to be implemented for detecting the face morphing accomplice in a real identity verification scenario.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
113,840
2501.09412
FASP: Fast and Accurate Structured Pruning of Large Language Models
The rapid increase in the size of large language models (LLMs) has significantly escalated their computational and memory demands, posing challenges for efficient deployment, especially on resource-constrained devices. Structured pruning has emerged as an effective model compression method that can reduce these demands while preserving performance. In this paper, we introduce FASP (Fast and Accurate Structured Pruning), a novel structured pruning framework for LLMs that emphasizes both speed and accuracy. FASP employs a distinctive pruning structure that interlinks sequential layers, allowing for the removal of columns in one layer while simultaneously eliminating corresponding rows in the preceding layer without incurring additional performance loss. The pruning metric, inspired by Wanda, is computationally efficient and effectively selects components to prune. Additionally, we propose a restoration mechanism that enhances model fidelity by adjusting the remaining weights post-pruning. We evaluate FASP on the OPT and LLaMA model families, demonstrating superior performance in terms of perplexity and accuracy on downstream tasks compared to state-of-the-art methods. Our approach achieves significant speed-ups, pruning models such as OPT-125M in 17 seconds and LLaMA-30B in 15 minutes on a single NVIDIA RTX 4090 GPU, making it a highly practical solution for optimizing LLMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,135
2212.09637
A Sequential Concept Drift Detection Method for On-Device Learning on Low-End Edge Devices
A practical issue of edge AI systems is that the data distributions of the training dataset and the deployed environment may differ due to noise and environmental changes over time. Such a phenomenon is known as concept drift, and this gap degrades the performance of edge AI systems and may introduce system failures. To address this gap, retraining of neural network models triggered by concept drift detection is a practical approach. However, since available compute resources are strictly limited in edge devices, in this paper we propose a fully sequential concept drift detection method in cooperation with an on-device sequential learning technique for neural networks. In this case, both the neural network retraining and the proposed concept drift detection are done only by sequential computation to reduce computation cost and memory utilization. Evaluation results of the proposed approach show that while the accuracy is decreased by 3.8%-4.3% compared to existing batch-based detection methods, it decreases the memory size by 88.9%-96.4% and the execution time by 1.3%-83.8%. As a result, the combination of the neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico that has 264kB of memory.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
337,174
2007.02343
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at training time. A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data. At test time, the victim model behaves normally on clean test data, yet consistently predicts a specific (likely incorrect) target class whenever the backdoor pattern is present in a test example. While existing backdoor attacks are effective, they are not stealthy. The modifications made on training data or labels are often suspicious and can be easily detected by simple data filtering or human inspection. In this paper, we present a new type of backdoor attack inspired by an important natural phenomenon: reflection. Using mathematical modeling of physical reflection models, we propose reflection backdoor (Refool) to plant reflections as backdoor into a victim model. We demonstrate on 3 computer vision tasks and 5 datasets that, Refool can attack state-of-the-art DNNs with high success rate, and is resistant to state-of-the-art backdoor defenses.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
185,706
1212.0248
The structure of Renyi entropic inequalities
We investigate the universal inequalities relating the alpha-Renyi entropies of the marginals of a multi-partite quantum state. This is in analogy to the same question for the Shannon and von Neumann entropy (alpha=1) which are known to satisfy several non-trivial inequalities such as strong subadditivity. Somewhat surprisingly, we find for 0<alpha<1, that the only inequality is non-negativity: In other words, any collection of non-negative numbers assigned to the nonempty subsets of n parties can be arbitrarily well approximated by the alpha-entropies of the 2^n-1 marginals of a quantum state. For alpha>1 we show analogously that there are no non-trivial homogeneous (in particular no linear) inequalities. On the other hand, it is known that there are further, non-linear and indeed non-homogeneous, inequalities delimiting the alpha-entropies of a general quantum state. Finally, we also treat the case of Renyi entropies restricted to classical states (i.e. probability distributions), which in addition to non-negativity are also subject to monotonicity. For alpha different from 0 and 1 we show that this is the only other homogeneous relation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
20,081
1705.10958
FALKON: An Optimal Large Scale Kernel Method
Kernel methods provide a principled way to perform nonlinear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points. FALKON is derived by combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially $O(n)$ memory and $O(n\sqrt{n})$ time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
74,507
2112.12542
How Much Space Has Been Explored? Measuring the Chemical Space Covered by Databases and Machine-Generated Molecules
Forming a molecular candidate set that contains a wide range of potentially effective compounds is crucial to the success of drug discovery. While most databases and machine-learning-based generation models aim to optimize particular chemical properties, there is limited literature on how to properly measure the coverage of the chemical space by those candidates included or generated. This problem is challenging due to the lack of formal criteria to select good measures of the chemical space. In this paper, we propose a novel evaluation framework for measures of the chemical space based on two analyses: an axiomatic analysis with three intuitive axioms that a good measure should obey, and an empirical analysis on the correlation between a measure and a proxy gold standard. Using this framework, we are able to identify #Circles, a new measure of chemical space coverage, which is superior to existing measures both analytically and empirically. We further evaluate how well the existing databases and generation models cover the chemical space in terms of #Circles. The results suggest that many generation models fail to explore a larger space over existing databases, which leads to new opportunities for improving generation models by encouraging exploration.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
272,990
2002.11143
End-to-End Entity Linking and Disambiguation leveraging Word and Knowledge Graph Embeddings
Entity linking - connecting entity mentions in a natural language utterance to knowledge graph (KG) entities is a crucial step for question answering over KGs. It is often based on measuring the string similarity between the entity label and its mention in the question. The relation referred to in the question can help to disambiguate between entities with the same label. This can be misleading if an incorrect relation has been identified in the relation linking step. However, an incorrect relation may still be semantically similar to the relation in which the correct entity forms a triple within the KG; which could be captured by the similarity of their KG embeddings. Based on this idea, we propose the first end-to-end neural network approach that employs KG as well as word embeddings to perform joint relation and entity classification of simple questions while implicitly performing entity disambiguation with the help of a novel gating mechanism. An empirical evaluation shows that the proposed approach achieves a performance comparable to state-of-the-art entity linking while requiring less post-processing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
165,617
1402.0591
Learning by Observation of Agent Software Images
Learning by observation can be of key importance whenever agents sharing similar features want to learn from each other. This paper presents an agent architecture that enables software agents to learn by direct observation of the actions executed by expert agents while they are performing a task. This is possible because the proposed architecture displays information that is essential for observation, making it possible for software agents to observe each other. The agent architecture supports a learning process that covers all aspects of learning by observation, such as discovering and observing experts, learning from the observed data, applying the acquired knowledge and evaluating the agents' progress. The evaluation provides control over the decision to obtain new knowledge or apply the acquired knowledge to new problems. We combine two methods for learning from the observed information. The first one, the recall method, uses the sequence in which the actions were observed to solve new problems. The second one, the classification method, categorizes the information in the observed data and determines to which set of categories the new problems belong. Results show that agents are able to learn in conditions where common supervised learning algorithms fail, such as when agents do not know the results of their actions a priori or when not all the effects of the actions are visible. The results also show that our approach provides better results than other learning methods since it requires shorter learning periods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
30,605
2008.03982
Social Interactions Clustering MOOC Students: An Exploratory Study
An exploratory study on social interactions of MOOC students in FutureLearn was conducted, to answer "how can we cluster students based on their social interactions?" Comments were categorized based on how students interacted with them, e.g., how a student's comment received replies from peers. Statistical modelling and machine learning were used to analyze comment categorization, resulting in 3 strong and stable clusters.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
191,090
2107.05366
HCGR: Hyperbolic Contrastive Graph Representation Learning for Session-based Recommendation
Session-based recommendation (SBR) learns users' preferences by capturing the short-term and sequential patterns from the evolution of user behaviors. Among the studies in the SBR field, graph-based approaches are a relatively powerful kind of way, which generally extract item information by message aggregation under Euclidean space. However, such methods can't effectively extract the hierarchical information contained among consecutive items in a session, which is critical to represent users' preferences. In this paper, we present a hyperbolic contrastive graph recommender (HCGR), a principled session-based recommendation framework involving Lorentz hyperbolic space to adequately capture the coherence and hierarchical representations of the items. Within this framework, we design a novel adaptive hyperbolic attention computation to aggregate the graph message of each user's preference in a session-based behavior sequence. In addition, contrastive learning is leveraged to optimize the item representation by considering the geodesic distance between positive and negative samples in hyperbolic space. Extensive experiments on four real-world datasets demonstrate that HCGR consistently outperforms state-of-the-art baselines by 0.43$\%$-28.84$\%$ in terms of $HitRate$, $NDCG$ and $MRR$.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
245,756
2406.06331
MedExQA: Medical Question Answering Benchmark with Multiple Explanations
This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models' (LLMs) understanding of medical knowledge through explanations. By constructing datasets across five distinct medical specialties that are underrepresented in current datasets and further incorporating multiple explanations for each question-answer pair, we address a major gap in current medical QA benchmarks which is the absence of comprehensive assessments of LLMs' ability to generate nuanced medical explanations. Our work highlights the importance of explainability in medical LLMs, proposes an effective methodology for evaluating models beyond classification accuracy, and sheds light on one specific domain, speech language pathology, where current LLMs including GPT4 lack good understanding. Our results show generation evaluation with multiple explanations aligns better with human assessment, highlighting an opportunity for a more robust automated comprehension assessment for LLMs. To diversify open-source medical LLMs (currently mostly based on Llama2), this work also proposes a new medical model, MedPhi-2, based on Phi-2 (2.7B). The model outperformed medical LLMs based on Llama2-70B in generating explanations, showing its effectiveness in the resource-constrained medical domain. We will share our benchmark datasets and the trained model.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
462,522
1208.6454
Spread of Influence and Content in Mobile Opportunistic Networks
We consider a setting in which a single item of content (such as a song or a video clip) is disseminated in a population of mobile nodes by opportunistic copying when pairs of nodes come in radio contact. We propose and study models that capture the joint evolution of the population of nodes interested in the content (referred to as destinations), and the population of nodes that possess the content. The evolution of interest in the content is captured using an influence spread model and the content spread occurs via epidemic copying. Nodes not yet interested in the content are called relays; the influence spread process converts relays into destinations. We consider the decentralized setting, where interest in the content and the spread of the content evolve by pairwise interactions between the mobiles. We derive fluid limits for the joint evolution models and obtain optimal policies for copying to relay nodes in order to deliver content to a desired fraction of destinations. We prove that a time-threshold policy is optimal while copying to relays. We then provide insights into the effects of various system parameters on the co-evolution model through simulations.
false
false
false
true
false
false
false
false
false
false
true
false
false
false
true
false
false
false
18,329
2309.01582
Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration
Adversarial face examples possess two critical properties: Visual Quality and Transferability. However, existing approaches rarely address these properties simultaneously, leading to subpar results. To address this issue, we propose a novel adversarial attack technique known as Adversarial Restoration (AdvRestore), which enhances both visual quality and transferability of adversarial face examples by leveraging a face restoration prior. In our approach, we initially train a Restoration Latent Diffusion Model (RLDM) designed for face restoration. Subsequently, we employ the inference process of RLDM to generate adversarial face examples. The adversarial perturbations are applied to the intermediate features of RLDM. Additionally, by treating RLDM face restoration as a sibling task, the transferability of the generated adversarial face examples is further improved. Our experimental results validate the effectiveness of the proposed attack method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,731
2302.07409
Quantum Learning Theory Beyond Batch Binary Classification
Arunachalam and de Wolf (2018) showed that the sample complexity of quantum batch learning of boolean functions, in the realizable and agnostic settings, has the same form and order as the corresponding classical sample complexities. In this paper, we extend this, ostensibly surprising, message to batch multiclass learning, online boolean learning, and online multiclass learning. For our online learning results, we first consider an adaptive adversary variant of the classical model of Dawid and Tewari (2022). Then, we introduce the first (to the best of our knowledge) model of online learning with quantum examples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
345,720
2010.00721
Using Unlabeled Data for Increasing Low-Shot Classification Accuracy of Relevant and Open-Set Irrelevant Images
In search, exploration, and reconnaissance tasks performed with autonomous ground vehicles, an image classification capability is needed for specifically identifying targeted objects (relevant classes) and at the same time recognizing when a candidate image does not belong to any one of the relevant classes (irrelevant images). In this paper, we present an open-set low-shot classifier that uses, during its training, a modest number (less than 40) of labeled images for each relevant class, and unlabeled irrelevant images that are randomly selected at each epoch of the training process. The new classifier is capable of identifying images from the relevant classes, determining when a candidate image is irrelevant, and it can further recognize categories of irrelevant images that were not included in the training (unseen). The proposed low-shot classifier can be attached as a top layer to any pre-trained feature extractor when constructing a Convolutional Neural Network.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
198,368
2007.09102
Breaking Moravec's Paradox: Visual-Based Distribution in Smart Fashion Retail
In this paper, we report an industry-academia collaborative study on the distribution method of fashion products using an artificial intelligence (AI) technique combined with an optimization method. To meet the current fashion trend of short product lifetimes and an increasing variety of styles, the company produces limited volumes of a large variety of styles. However, due to the limited volume of each style, some styles may not be distributed to some off-line stores. As a result, this high-variety, low-volume strategy presents another challenge to distribution managers. We collaborated with KOLON F/C, one of the largest fashion business units in South Korea, to develop models and an algorithm to optimally distribute the products to the stores based on the visual images of the products. The team developed a deep learning model that effectively represents the styles of clothes based on their visual image. Moreover, the team created an optimization model that effectively determines the product mix for each store based on the image representation of clothes. In the past, computers were only considered to be useful for conducting logical calculations, and visual perception and cognition were considered to be difficult computational tasks. The proposed approach is significant in that it uses both AI (perception and cognition) and mathematical optimization (logical calculation) to address a practical supply chain problem, which is why the study was called "Breaking Moravec's Paradox."
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
187,831
2208.02018
Safety Analysis Methods for Complex Systems in Aviation
Each new concept of operation and equipment generation in aviation becomes more automated, integrated and interconnected. In the case of Unmanned Aircraft Systems (UAS), this evolution allows drastically decreasing aircraft weight and operational cost, but these benefits are also realized in highly automated manned aircraft and ground Air Traffic Control (ATC) systems. The downside of these advances is overwhelmingly more complex software and hardware, making it harder to identify potential failure paths. Although there are mandatory certification processes based on broadly accepted standards, such as ARP4754 and its family, ESARR 4 and others, these standards do not allow proof or disproof of safety of disruptive technology changes, such as GBAS Precision Approaches, Autonomous UAS, aircraft self-separation and others. In order to leverage the introduction of such concepts, it is necessary to develop solid knowledge on the foundations of safety in complex systems and use this knowledge to elaborate sound demonstrations of either safety or unsafety of new system designs. These demonstrations at early design stages will help reduce both the cost of developing new technology and the risk of such technology causing accidents when in use. This paper presents some safety analysis methods which are not in the industry standards but which we identify as having benefits for analyzing the safety of advanced technological concepts in aviation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
311,353