Schema (per-row fields, with type and observed range):

  id                   string (length 9 to 16)
  title                string (length 4 to 278)
  abstract             string (length 3 to 4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool (2 classes each; multi-label category flags)
  __index_level_0__    int64 (0 to 541k)
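Each row's category flags can be decoded back into a label list by collecting the boolean columns that are set. A minimal sketch of that decoding (the example row is abbreviated to its True flags; the dict layout is an assumption about how a row would look once loaded):

```python
# Order of the boolean label columns, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the category columns whose boolean flag is set in `row`."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# First record below, with its False flags omitted for brevity.
row = {"id": "2303.14689", "cs.IT": True, "__index_level_0__": 354204}
print(active_labels(row))  # -> ['cs.IT']
```

Rows with several flags set (e.g. the multi-label records further down) decode to multi-element lists in the same column order.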
2303.14689
Weak Recovery Threshold for the Hypergraph Stochastic Block Model
We study the weak recovery problem on the $r$-uniform hypergraph stochastic block model ($r$-HSBM) with two balanced communities. In the HSBM, a random graph is constructed by placing hyperedges with higher density if all vertices of a hyperedge share the same binary label, and weak recovery asks to recover a non-trivial fraction of the labels. We introduce a multi-terminal version of strong data processing inequalities (SDPIs), which we call the multi-terminal SDPI, and use it to prove a variety of impossibility results for weak recovery. In particular, we prove that weak recovery is impossible below the Kesten-Stigum (KS) threshold if $r=3,4$, or if a strength parameter $\lambda$ is at least $\frac 15$. Prior work of Pal and Zhu (2021) established that weak recovery in the HSBM is always possible above the KS threshold. Consequently, there is no information-computation gap for these cases, which (partially) resolves a conjecture of Angelini et al. (2015). To our knowledge this is the first impossibility result for HSBM weak recovery. As usual, we reduce the study of non-recovery of the HSBM to the study of non-reconstruction in a related broadcasting on hypertrees (BOHT) model. While we show that BOHT's reconstruction threshold coincides with KS for $r=3,4$, surprisingly, we demonstrate that for $r\ge 7$ reconstruction is possible also below KS. This shows an interesting phase transition in the parameter $r$, and suggests that for $r\ge 7$, there might be an information-computation gap for the HSBM. For $r=5,6$ and large degree we propose an approach for showing non-reconstruction below KS, suggesting that $r=7$ is the correct threshold for the onset of the new phase.
categories: cs.IT | __index_level_0__: 354,204
2405.12790
A Novel Methodology for Autonomous Planetary Exploration Using Multi-Robot Teams
One of the fundamental limiting factors in planetary exploration is the autonomous capabilities of planetary exploration rovers. This study proposes a novel methodology for trustworthy autonomous multi-robot teams which incorporates data from multiple sources (HiRISE orbiter imaging, probability distribution maps, and on-board rover sensors) to find efficient exploration routes in Jezero crater. A map is generated, consisting of a 3D terrain model, traversability analysis, and probability distribution map of points of scientific interest. A three-stage mission planner generates an efficient route, which maximises the accumulated probability of identifying points of interest. A 4D RRT* algorithm is used to determine smooth, flat paths, and prioritised planning is used to coordinate a safe set of paths. The above methodology is shown to coordinate safe and efficient rover paths, which ensure the rovers remain within their nominal pitch and roll limits throughout operation.
categories: cs.RO | __index_level_0__: 455,656
2110.06133
Hotel Preference Rank based on Online Customer Review
Topline hotels are now shifting to digital methods of understanding their customers in order to maintain and ensure satisfaction. Rather than the conventional approach of written reviews or interviews, hotels are now investing heavily in Artificial Intelligence, particularly Machine Learning solutions. Analysis of online customer reviews lets companies make decisions more effectively than conventional analysis does. The purpose of this research is to measure hotel service quality. The proposed approach examines service quality dimensions in reviews of the top-5 luxury hotels in Indonesia that appear on the online travel site TripAdvisor in its Best of 2018 section. In this research, we use a model based on a simple Bayesian classifier to classify each customer review into one of the service quality dimensions. Our model was able to separate the classes properly, as measured by accuracy, kappa, recall, precision, and F-measure. To uncover latent topics in the customers' opinions we use topic modeling. We found that the most common issue concerns responsiveness, which received the lowest percentage compared to the other dimensions. Our research provides end customers with a faster overview of hotel rank based on service quality, built on a summary of previous online reviews.
categories: cs.SI, cs.IR | __index_level_0__: 260,509
2102.09193
SeaPearl: A Constraint Programming Solver guided by Reinforcement Learning
The design of efficient and generic algorithms for solving combinatorial optimization problems has been an active field of research for many years. Standard exact solving approaches are based on a clever and complete enumeration of the solution set. A critical and non-trivial design choice with such methods is the branching strategy, directing how the search is performed. The last decade has shown an increasing interest in the design of machine learning-based heuristics to solve combinatorial optimization problems. The goal is to leverage knowledge from historical data to solve similar new instances of a problem. Used alone, such heuristics are only able to provide approximate solutions efficiently, but cannot prove optimality nor bounds on their solution. Recent works have shown that reinforcement learning can be successfully used for driving the search phase of constraint programming (CP) solvers. However, it has also been shown that this hybridization is challenging to build, as standard CP frameworks do not natively include machine learning mechanisms, leading to some sources of inefficiencies. This paper presents the proof of concept for SeaPearl, a new CP solver implemented in Julia, that supports machine learning routines in order to learn branching decisions using reinforcement learning. Support for modeling the learning component is also provided. We illustrate the modeling and solution performance of this new solver on two problems. Although not yet competitive with industrial solvers, SeaPearl aims to provide a flexible and open-source framework in order to facilitate future research in the hybridization of constraint programming and machine learning.
categories: cs.AI, cs.LG | __index_level_0__: 220,694
2407.21734
Human-Machine Co-Adaptation for Robot-Assisted Rehabilitation via Dual-Agent Multiple Model Reinforcement Learning (DAMMRL)
This study introduces a novel approach to robot-assisted ankle rehabilitation by proposing a Dual-Agent Multiple Model Reinforcement Learning (DAMMRL) framework, leveraging multiple model adaptive control (MMAC) and co-adaptive control strategies. In robot-assisted rehabilitation, one of the key challenges is modelling human behaviour due to the complexity of human cognition and physiological systems. Traditional single-model approaches often fail to capture the dynamics of human-machine interactions. Our research employs a multiple model strategy, using simple sub-models to approximate complex human responses during rehabilitation tasks, tailored to varying levels of patient incapacity. The proposed system's versatility is demonstrated in real experiments and simulated environments. Feasibility and potential were evaluated with 13 healthy young subjects, yielding promising results that affirm the anticipated benefits of the approach. This study not only introduces a new paradigm for robot-assisted ankle rehabilitation but also opens the way for future research in adaptive, patient-centred therapeutic interventions.
categories: cs.RO | __index_level_0__: 477,651
2203.13309
Weakly-Supervised Online Action Segmentation in Multi-View Instructional Videos
This paper addresses a new problem of weakly-supervised online action segmentation in instructional videos. We present a framework to segment streaming videos online at test time using Dynamic Programming and show its advantages over a greedy sliding-window approach. We improve our framework by introducing the Online-Offline Discrepancy Loss (OODL) to encourage the segmentation results to have a higher temporal consistency. Furthermore, only during training, we exploit frame-wise correspondence between multiple views as supervision for training weakly-labeled instructional videos. In particular, we investigate three different multi-view inference techniques to generate more accurate frame-wise pseudo ground-truth with no additional annotation cost. We present results and ablation studies on two benchmark multi-view datasets, Breakfast and IKEA ASM. Experimental results show the efficacy of the proposed methods both qualitatively and quantitatively in two domains of cooking and assembly.
categories: cs.CV | __index_level_0__: 287,581
2011.07697
Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks
We propose a voting ensemble of models trained by using block-wise transformed images with secret keys for an adversarially robust defense. Key-based adversarial defenses were demonstrated to outperform state-of-the-art defenses against gradient-based (white-box) attacks. However, the key-based defenses are not effective enough against gradient-free (black-box) attacks, which do not require knowledge of any secret keys. Accordingly, we aim to enhance robustness against black-box attacks by using a voting ensemble of models. In the proposed ensemble, a number of models are trained by using images transformed with different keys and block sizes, and then a voting ensemble is applied to the models. In image classification experiments, the proposed defense is demonstrated to defend against state-of-the-art attacks. The proposed defense achieves a clean accuracy of 95.56 % and an attack success rate of less than 9 % under attacks with a noise distance of 8/255 on the CIFAR-10 dataset.
categories: cs.CV | __index_level_0__: 206,643
0902.4098
Coordination in multiagent systems and Laplacian spectra of digraphs
Constructing and studying distributed control systems requires the analysis of the Laplacian spectra and the forest structure of directed graphs. In this paper, we present some basic results of this analysis partially obtained by the present authors. We also discuss the application of these results to decentralized control and touch upon some problems of spectral graph theory.
categories: cs.MA, Other | __index_level_0__: 3,221
2405.04773
Hypergraph-enhanced Dual Semi-supervised Graph Classification
In this paper, we study semi-supervised graph classification, which aims at accurately predicting the categories of graphs in scenarios with limited labeled graphs and abundant unlabeled graphs. Despite the promising capability of graph neural networks (GNNs), they typically require a large number of costly labeled graphs, while a wealth of unlabeled graphs fail to be effectively utilized. Moreover, GNNs are inherently limited to encoding local neighborhood information using message-passing mechanisms, thus lacking the ability to model higher-order dependencies among nodes. To tackle these challenges, we propose a Hypergraph-Enhanced DuAL framework named HEAL for semi-supervised graph classification, which captures graph semantics from the perspective of the hypergraph and the line graph, respectively. Specifically, to better explore the higher-order relationships among nodes, we design a hypergraph structure learning module to adaptively learn complex node dependencies beyond pairwise relations. Meanwhile, based on the learned hypergraph, we introduce a line graph to capture the interaction between hyperedges, thereby better mining the underlying semantic structures. Finally, we develop a relational consistency learning scheme to facilitate knowledge transfer between the two branches and provide better mutual guidance. Extensive experiments on real-world graph datasets verify the effectiveness of the proposed method against existing state-of-the-art methods.
categories: cs.SI, cs.AI, cs.IR, cs.LG | __index_level_0__: 452,674
2311.04199
Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study
Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored. To bridge this gap, we present a preliminary case study investigating the recommendation capabilities of GPT-4V(ision), a recently released LMM by OpenAI. We construct a series of qualitative test samples spanning multiple domains and employ these samples to assess the quality of GPT-4V's responses within recommendation scenarios. Evaluation results on these test samples show that GPT-4V has remarkable zero-shot recommendation abilities across diverse domains, thanks to its robust visual-text comprehension capabilities and extensive general knowledge. However, we have also identified some limitations in using GPT-4V for recommendations, including a tendency to provide similar responses when given similar inputs. This report concludes with an in-depth discussion of the challenges and research opportunities associated with utilizing GPT-4V in recommendation scenarios. Our objective is to explore the potential of extending LMMs from vision and language tasks to recommendation tasks. We hope to inspire further research into next-generation multimodal generative recommendation models, which can enhance user experiences by offering greater diversity and interactivity. All images and prompts used in this report will be accessible at https://github.com/PALIN2018/Evaluate_GPT-4V_Rec.
categories: cs.IR, cs.CL | __index_level_0__: 406,134
2306.13686
Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems
The increased use of AI systems is associated with multi-faceted societal, environmental, and economic consequences. These include non-transparent decision-making processes, discrimination, increasing inequalities, rising energy consumption and greenhouse gas emissions in AI model development and application, and an increasing concentration of economic power. By considering the multi-dimensionality of sustainability, this paper takes steps towards substantiating the call for an overarching perspective on "sustainable AI". It presents the SCAIS Framework (Sustainability Criteria and Indicators for Artificial Intelligence Systems), which contains a set of 19 sustainability criteria for sustainable AI and 67 indicators, based on the results of a critical review and expert workshops. This interdisciplinary approach contributes a unique holistic perspective to facilitate and structure the discourse on sustainable AI. Further, it provides a concrete framework that lays the foundation for developing standards and tools to support the conscious development and application of AI systems.
categories: cs.HC, cs.AI, cs.LG, cs.CY | __index_level_0__: 375,363
2210.00102
MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming. This is attributed to overheads caused by sparse matrix multiplication, which are sidestepped when training multi-layer perceptrons (MLPs) with only node features. MLPs, by ignoring graph context, are simple and faster for graph data; however, they usually sacrifice prediction accuracy, limiting their applications for graph data. We observe that for most message passing-based GNNs, we can trivially derive an analog MLP (we call this a PeerMLP) with an equivalent weight space, by setting the trainable parameters with the same shapes, which makes us curious: \textbf{\emph{how do GNNs using weights from a fully trained PeerMLP perform?}} Surprisingly, we find that GNNs initialized with such weights significantly outperform their PeerMLPs, motivating us to use PeerMLP training as a precursor initialization step for GNN training. To this end, we propose an embarrassingly simple, yet hugely effective initialization method for GNN training acceleration, called MLPInit. Our extensive experiments on multiple large-scale graph datasets with diverse GNN architectures validate that MLPInit can accelerate the training of GNNs (up to 33X speedup on OGB-Products) and often improve prediction performance (e.g., up to $7.97\%$ improvement for GraphSAGE across $7$ datasets for node classification, and up to $17.81\%$ improvement across $4$ datasets for link prediction on metric Hits@10). The code is available at https://github.com/snap-research/MLPInit-for-GNNs.
categories: cs.SI, cs.LG | __index_level_0__: 320,732
1904.01333
Point in, Box out: Beyond Counting Persons in Crowds
Modern crowd counting methods usually employ deep neural networks (DNN) to estimate crowd counts via density regression. Despite their significant improvements, the regression-based methods are incapable of providing the detection of individuals in crowds. The detection-based methods, on the other hand, have not been largely explored in recent trends of crowd counting due to the needs for expensive bounding box annotations. In this work, we instead propose a new deep detection network with only point supervision required. It can simultaneously detect the size and location of human heads and count them in crowds. We first mine useful person size information from point-level annotations and initialize the pseudo ground truth bounding boxes. An online updating scheme is introduced to refine the pseudo ground truth during training; while a locally-constrained regression loss is designed to provide additional constraints on the size of the predicted boxes in a local neighborhood. In the end, we propose a curriculum learning strategy to train the network from images of relatively accurate and easy pseudo ground truth first. Extensive experiments are conducted in both detection and counting tasks on several standard benchmarks, e.g. ShanghaiTech, UCF_CC_50, WiderFace, and TRANCOS datasets, and the results show the superiority of our method over the state-of-the-art.
categories: cs.CV | __index_level_0__: 126,117
2303.17878
Fused Depthwise Tiling for Memory Optimization in TinyML Deep Neural Network Inference
Memory optimization for deep neural network (DNN) inference gains high relevance with the emergence of TinyML, which refers to the deployment of DNN inference tasks on tiny, low-power microcontrollers. Applications such as audio keyword detection or radar-based gesture recognition are heavily constrained by the limited memory on such tiny devices because DNN inference requires large intermediate run-time buffers to store activations and other intermediate data, which leads to high memory usage. In this paper, we propose a new Fused Depthwise Tiling (FDT) method for the memory optimization of DNNs, which, compared to existing tiling methods, reduces memory usage without inducing any run time overhead. FDT applies to a larger variety of network layers than existing tiling methods that focus on convolutions. It improves TinyML memory optimization significantly by reducing memory of models where this was not possible before and additionally providing alternative design points for models that show high run time overhead with existing methods. In order to identify the best tiling configuration, an end-to-end flow with a new path discovery method is proposed, which applies FDT and existing tiling methods in a fully automated way, including the scheduling of the operations and planning of the layout of buffers in memory. Out of seven evaluated models, FDT achieved significant memory reduction for two models by 76.2% and 18.1% where existing tiling methods could not be applied. Two other models showed a significant run time overhead with existing methods and FDT provided alternative design points with no overhead but reduced memory savings.
categories: cs.LG | __index_level_0__: 355,376
2305.14728
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations
Although deep language representations have become the dominant form of language featurization in recent years, in many settings it is important to understand a model's decision-making process. This necessitates not only an interpretable model but also interpretable features. In particular, language must be featurized in a way that is interpretable while still characterizing the original text well. We present SenteCon, a method for introducing human interpretability in deep language representations. Given a passage of text, SenteCon encodes the text as a layer of interpretable categories in which each dimension corresponds to the relevance of a specific category. Our empirical evaluations indicate that encoding language with SenteCon provides high-level interpretability at little to no cost to predictive performance on downstream tasks. Moreover, we find that SenteCon outperforms existing interpretable language representations with respect to both its downstream performance and its agreement with human characterizations of the text.
categories: cs.CL | __index_level_0__: 367,217
2311.00445
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models
A central component of rational behavior is logical inference: the process of determining which conclusions follow from a set of premises. Psychologists have documented several ways in which humans' inferences deviate from the rules of logic. Do language models, which are trained on text generated by humans, replicate such human biases, or are they able to overcome them? Focusing on the case of syllogisms -- inferences from two simple premises -- we show that, within the PaLM2 family of transformer language models, larger models are more logical than smaller ones, and also more logical than humans. At the same time, even the largest models make systematic errors, some of which mirror human reasoning biases: they show sensitivity to the (irrelevant) ordering of the variables in the syllogism, and draw confident but incorrect inferences from particular syllogisms (syllogistic fallacies). Overall, we find that language models often mimic the human biases included in their training data, but are able to overcome them in some cases.
categories: cs.AI, cs.LG, cs.CL | __index_level_0__: 404,656
0708.1558
Construction of a 3-Dimensional MDS code
In this paper, we describe a procedure for constructing $q$--ary $[N,3,N-2]$--MDS codes, of length $N\leq q+1$ (for $q$ odd) or $N\leq q+2$ (for $q$ even), using a set of non--degenerate Hermitian forms in $PG(2,q^2)$.
categories: cs.IT | __index_level_0__: 544
2105.14190
RaspberryPI for mosquito neutralization by power laser
In this article, for the first time, comprehensive studies of mosquito neutralization using machine vision and a 1 W laser are presented. We developed a laser installation based on a Raspberry Pi that changes the direction of the laser with a galvanometer, together with a program for tracking mosquitoes in real time. The possibility of using deep neural networks, Haar cascades, and machine learning for mosquito recognition was considered, and the problems of classifying mosquitoes in images are examined in detail. A recommendation is given for implementing this device on a microcontroller for subsequent use as part of an unmanned aerial vehicle. Any harmful insects in the fields can be used as targets for control.
categories: cs.CV | __index_level_0__: 237,569
cs/0504066
Comparison of the Bayesian and Randomised Decision Tree Ensembles within an Uncertainty Envelope Technique
Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of classification outcomes that is of crucial importance for safety critical applications. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the classifier diversity and the required performance. The interpretability of MCSs can also give useful information for experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models for experts. The required diversity of MCSs exploiting such classification models can be achieved by using two techniques, the Bayesian model averaging and the randomised DT ensemble. Both techniques have revealed promising results when applied to real-world problems. In this paper we experimentally compare the classification uncertainty of the Bayesian model averaging with a restarting strategy and the randomised DT ensemble on a synthetic dataset and some domain problems commonly used in the machine learning community. To make the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo technique. The classification uncertainty is evaluated within an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. Exploring a full posterior distribution, this technique produces realistic estimates which can be easily interpreted in statistical terms. In our experiments we found out that the Bayesian DTs are superior to the randomised DT ensembles within the Uncertainty Envelope technique.
categories: cs.AI | __index_level_0__: 538,676
2404.16304
BezierFormer: A Unified Architecture for 2D and 3D Lane Detection
Lane detection has made significant progress in recent years, but there is not a unified architecture for its two sub-tasks: 2D lane detection and 3D lane detection. To fill this gap, we introduce B\'{e}zierFormer, a unified 2D and 3D lane detection architecture based on B\'{e}zier curve lane representation. B\'{e}zierFormer formulates queries as B\'{e}zier control points and incorporates a novel B\'{e}zier curve attention mechanism. This attention mechanism enables comprehensive and accurate feature extraction for slender lane curves via sampling and fusing multiple reference points on each curve. In addition, we propose a novel Chamfer IoU-based loss which is more suitable for the B\'{e}zier control points regression. The state-of-the-art performance of B\'{e}zierFormer on widely-used 2D and 3D lane detection benchmarks verifies its effectiveness and suggests the worthiness of further exploration.
categories: cs.CV | __index_level_0__: 449,433
1407.4694
Distributed Pricing-Based User Association for Downlink Heterogeneous Cellular Networks
This paper considers the optimization of the user and base-station (BS) association in a wireless downlink heterogeneous cellular network under the proportional fairness criterion. We first consider the case where each BS has a single antenna and transmits at fixed power, and propose a distributed price update strategy for a pricing-based user association scheme, in which the users are assigned to the BS based on the value of a utility function minus a price. The proposed price update algorithm is based on a coordinate descent method for solving the dual of the network utility maximization problem, and it has a rigorous performance guarantee. The main advantage of the proposed algorithm as compared to the existing subgradient method for price update is that the proposed algorithm is independent of parameter choices and can be implemented asynchronously. Further, this paper considers the joint user association and BS power control problem, and proposes an iterative dual coordinate descent and the power optimization algorithm that significantly outperforms existing approaches. Finally, this paper considers the joint user association and BS beamforming problem for the case where the BSs are equipped with multiple antennas and spatially multiplex multiple users. We incorporate dual coordinate descent with the weighted minimum mean-squared error (WMMSE) algorithm, and show that it achieves nearly the same performance as a computationally more complex benchmark algorithm (which applies the WMMSE algorithm on the entire network for BS association), while avoiding excessive BS handover.
categories: cs.IT, Other | __index_level_0__: 34,722
1911.00038
Context-Aware Local Differential Privacy
Local differential privacy (LDP) is a strong notion of privacy for individual users that often comes at the expense of a significant drop in utility. The classical definition of LDP assumes that all elements in the data domain are equally sensitive. However, in many applications, some symbols are more sensitive than others. This work proposes a context-aware framework of local differential privacy that allows a privacy designer to incorporate the application's context into the privacy definition. For binary data domains, we provide a universally optimal privatization scheme and highlight its connections to Warner's randomized response (RR) and Mangat's improved response. Motivated by geolocation and web search applications, for $k$-ary data domains, we consider two special cases of context-aware LDP: block-structured LDP and high-low LDP. We study discrete distribution estimation and provide communication-efficient, sample-optimal schemes and information-theoretic lower bounds for both models. We show that using contextual information can require fewer samples than classical LDP to achieve the same accuracy.
categories: cs.LG, cs.IT, cs.CR, Other | __index_level_0__: 151,710
2412.08316
Rumor Detection on Social Media with Temporal Propagation Structure Optimization
Traditional methods for detecting rumors on social media primarily focus on analyzing textual content, often struggling to capture the complexity of online interactions. Recent research has shifted towards leveraging graph neural networks to model the hierarchical conversation structure that emerges during rumor propagation. However, these methods tend to overlook the temporal aspect of rumor propagation and may disregard potential noise within the propagation structure. In this paper, we propose a novel approach that incorporates temporal information by constructing a weighted propagation tree, where the weight of each edge represents the time interval between connected posts. Drawing upon the theory of structural entropy, we transform this tree into a coding tree. This transformation aims to preserve the essential structure of rumor propagation while reducing noise. Finally, we introduce a recursive neural network to learn from the coding tree for rumor veracity prediction. Experimental results on two common datasets demonstrate the superiority of our approach.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,042
2412.06146
Homogeneous Dynamics Space for Heterogeneous Humans
Analyses of human motion kinematics have achieved tremendous advances. However, the production mechanism, known as human dynamics, remains under-explored. In this paper, we aim to push data-driven human dynamics understanding forward. We identify the heterogeneity of existing human motion understanding efforts as a major obstacle. Specifically, heterogeneity exists not only in the diverse kinematics representations and hierarchical dynamics representations but also in the data from different domains, namely biomechanics and reinforcement learning. With an in-depth analysis of the existing heterogeneity, we propose to emphasize the underlying homogeneity: all of them represent the homogeneous fact of human motion, though from different perspectives. Given this, we propose Homogeneous Dynamics Space (HDyS) as a fundamental space for human dynamics by aggregating heterogeneous data and training a homogeneous latent space with inspiration from the inverse-forward dynamics procedure. Leveraging the heterogeneous representations and datasets, HDyS achieves decent mapping between human kinematics and dynamics. We demonstrate the feasibility of HDyS with extensive experiments and applications. The project page is https://foruck.github.io/HDyS.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
515,112
2310.15887
AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
true
402,487
0802.4112
Hubs in Languages: Scale Free Networks of Synonyms
Natural languages are described in this paper in terms of networks of synonyms: a word is identified with a node, and synonyms are connected by undirected links. Our statistical analysis of the network of synonyms in the Polish language showed it is scale-free, similar to what is known for English. The statistical properties of the networks are also similar. Thus, the statistical aspects of the networks are good candidates for culture-independent elements of human language. We hypothesize that optimization for robustness and efficiency is responsible for this universality. Despite the statistical similarity, there is no one-to-one mapping between networks of these two languages. Although many hubs in Polish are translated into similarly highly connected hubs in English, there are also hubs specific to one of these languages only: a single word in one language is equivalent to many different and disconnected words in the other, in accordance with the Whorf hypothesis about language relativity. Identifying language-specific hubs is vitally important for automatic translation, and for understanding contextual, culturally related messages that are frequently missed or twisted in a naive, literal translation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
1,359
2010.03249
Exploring and Evaluating Attributes, Values, and Structures for Entity Alignment
Entity alignment (EA) aims at building a unified Knowledge Graph (KG) of rich content by linking the equivalent entities from various KGs. GNN-based EA methods present promising performances by modeling the KG structure defined by relation triples. However, attribute triples can also provide crucial alignment signal but have not been well explored yet. In this paper, we propose to utilize an attributed value encoder and partition the KG into subgraphs to model the various types of attribute triples efficiently. Besides, the performances of current EA methods are overestimated because of the name-bias of existing EA datasets. To make an objective evaluation, we propose a hard experimental setting where we select equivalent entity pairs with very different names as the test set. Under both the regular and hard settings, our method achieves significant improvements ($5.10\%$ on average Hits@$1$ in DBP$15$k) over $12$ baselines in cross-lingual and monolingual datasets. Ablation studies on different subgraphs and a case study about attribute types further demonstrate the effectiveness of our method. Source code and data can be found at https://github.com/thunlp/explore-and-evaluate.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
199,341
2310.09196
A 4-approximation algorithm for min max correlation clustering
We introduce a lower bounding technique for the min max correlation clustering problem and, based on this technique, a combinatorial 4-approximation algorithm for complete graphs. This improves upon the previous best known approximation guarantees of 5, using a linear program formulation (Kalhan et al., 2019), and 40, for a combinatorial algorithm (Davies et al., 2023a). We extend this algorithm by a greedy joining heuristic and show empirically that it improves the state of the art in solution quality and runtime on several benchmark datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
399,686
2412.07587
A Hype-Adjusted Probability Measure for NLP Stock Return Forecasting
This article introduces a Hype-Adjusted Probability Measure in the context of a new Natural Language Processing (NLP) approach for stock return and volatility forecasting. A novel sentiment score equation is proposed to represent the impact of intraday news on forecasting next-period stock return and volatility for selected U.S. semiconductor tickers, a very vibrant industry sector. This work improves the forecast accuracy by addressing news bias, memory, and weight, and incorporating shifts in sentiment direction. More importantly, it extends the use of the remarkable tool of change of Probability Measure developed in the finance of Asset Pricing to NLP forecasting by constructing a Hype-Adjusted Probability Measure, obtained from a redistribution of the weights in the probability space, meant to correct for excessive or insufficient news.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
515,724
2404.13879
Explicit Lipschitz Value Estimation Enhances Policy Robustness Against Perturbation
In robotic control tasks, policies trained by reinforcement learning (RL) in simulation often experience a performance drop when deployed on physical hardware, due to modeling error, measurement error, and unpredictable perturbations in the real world. Robust RL methods account for this issue by approximating a worst-case value function during training, but they can be sensitive to approximation errors in the value function and its gradient before training is complete. In this paper, we hypothesize that Lipschitz regularization can help condition the approximated value function gradients, leading to improved robustness after training. We test this hypothesis by combining Lipschitz regularization with an application of Fast Gradient Sign Method to reduce approximation errors when evaluating the value function under adversarial perturbations. Our empirical results demonstrate the benefits of this approach over prior work on a number of continuous control benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
448,483
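The Fast Gradient Sign Method step that the abstract above applies to the value function can be sketched on a toy one-dimensional example. The quadratic value function and the step size here are illustrative assumptions, not the paper's robust-RL training loop.

```python
def value(s: float) -> float:
    # Toy value function with its maximum at s = 1 (stand-in for a learned critic).
    return -(s - 1.0) ** 2

def value_grad(s: float) -> float:
    return -2.0 * (s - 1.0)

def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def fgsm_worst_case_value(s: float, eps: float) -> float:
    """One FGSM step: move the state against the sign of the value gradient
    to approximate the worst-case value inside an eps-ball around s."""
    s_adv = s - eps * sign(value_grad(s))
    return value(s_adv)

v_nom = value(0.5)                        # nominal value: -0.25
v_adv = fgsm_worst_case_value(0.5, 0.1)   # adversarially perturbed value
```

The single signed step gives a cheap lower bound on the value in the perturbation ball, which is what makes FGSM attractive as an inner-loop approximation.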
2402.07229
Successive Refinement in Large-Scale Computation: Advancing Model Inference Applications
Modern computationally-intensive applications often operate under time constraints, necessitating acceleration methods and distribution of computational workloads across multiple entities. However, the outcome is either achieved within the desired timeline or not, and in the latter case, valuable resources are wasted. In this paper, we introduce solutions for layered-resolution computation. These solutions allow lower-resolution results to be obtained at an earlier stage than the final result. This innovation notably enhances the deadline-based systems, as if a computational job is terminated due to time constraints, an approximate version of the final result can still be generated. Moreover, in certain operational regimes, a high-resolution result might be unnecessary, because the low-resolution result may already deviate significantly from the decision threshold, for example in AI-based decision-making systems. Therefore, operators can decide whether higher resolution is needed or not based on intermediate results, enabling computations with adaptive resolution. We present our framework for two critical and computationally demanding jobs: distributed matrix multiplication (linear) and model inference in machine learning (nonlinear). Our theoretical and empirical results demonstrate that the execution delay for the first resolution is significantly shorter than that for the final resolution, while maintaining overall complexity comparable to the conventional one-shot approach. Our experiments further illustrate how the layering feature increases the likelihood of meeting deadlines and enables adaptability and transparency in massive, large-scale computations.
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
428,620
2108.08454
Improving Human Sequential Decision-Making with Reinforcement Learning
Workers spend a significant amount of time learning how to make good decisions. Evaluating the efficacy of a given decision, however, can be complicated -- e.g., decision outcomes are often long-term and relate to the original decision in complex ways. Surprisingly, even though learning good decision-making strategies is difficult, they can often be expressed in simple and concise forms. Focusing on sequential decision-making, we design a novel machine learning algorithm that is capable of extracting "best practices" from trace data and conveying its insights to humans in the form of interpretable "tips". Our algorithm selects the tip that best bridges the gap between the actions taken by human workers and those taken by the optimal policy in a way that accounts for which actions are consequential for achieving higher performance. We evaluate our approach through a series of randomized controlled experiments where participants manage a virtual kitchen. Our experiments show that the tips generated by our algorithm can significantly improve human performance relative to intuitive baselines. In addition, we discuss a number of empirical insights that can help inform the design of algorithms intended for human-AI interfaces. For instance, we find evidence that participants do not simply blindly follow our tips; instead, they combine them with their own experience to discover additional strategies for improving performance.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
251,254
2005.13439
HPC compact quasi-Newton algorithm for interface problems
In this work we present a robust interface coupling algorithm called Compact Interface quasi-Newton (CIQN). It is designed for computationally intensive applications using an MPI multi-code partitioned scheme. The algorithm allows information from previous time steps to be reused, a feature that has previously been proposed to accelerate convergence. Through algebraic manipulation, an efficient usage of the computational resources is achieved by avoiding the construction of dense matrices, reducing every multiplication to a matrix-vector product, and reusing the computationally expensive loops. This leads to a compact version of the original quasi-Newton algorithm. Together with efficient communication, in this paper we show efficient scalability up to 4800 cores. Three examples with qualitatively different dynamics are shown to prove that the algorithm can efficiently deal with added-mass instability and two-field coupled problems. We also show that reusing histories and filtering does not necessarily make the scheme more robust and, finally, we prove the necessity of this HPC version of the algorithm. The novelty of this article lies in the HPC-focused implementation of the algorithm, detailing how to fuse and combine the composing blocks to obtain a scalable MPI implementation. Such an implementation is mandatory in large-scale cases, for which the contact surface cannot be stored in a single computational node, or the number of contact nodes is not negligible compared with the size of the domain. © 2020 Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
179,010
2111.06334
Identification of Fine-Grained Location Mentions in Crisis Tweets
Identification of fine-grained location mentions in crisis tweets is central in transforming situational awareness information extracted from social media into actionable information. Most prior works have focused on identifying generic locations, without considering their specific types. To facilitate progress on the fine-grained location identification task, we assemble two tweet crisis datasets and manually annotate them with specific location types. The first dataset contains tweets from a mixed set of crisis events, while the second dataset contains tweets from the global COVID-19 pandemic. We investigate the performance of state-of-the-art deep learning models for sequence tagging on these datasets, in both in-domain and cross-domain settings.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
266,042
1409.7476
Short-term solar irradiance and irradiation forecasts via different time series techniques: A preliminary study
This communication is devoted to solar irradiance and irradiation short-term forecasts, which are useful for electricity production. Several different time series approaches are employed. Our results and the corresponding numerical simulations show that techniques which do not need a large amount of historical data behave better than those which need them, especially when those data are quite noisy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
36,329
2407.06099
Physics-Informed Machine Learning Towards A Real-Time Spacecraft Thermal Simulator
Modeling thermal states for complex space missions, such as the surface exploration of airless bodies, requires high computation, whether used in ground-based analysis for spacecraft design or during onboard reasoning for autonomous operations. For example, a finite-element thermal model with hundreds of elements can take significant time to simulate, which makes it unsuitable for onboard reasoning during time-sensitive scenarios such as descent and landing, proximity operations, or in-space assembly. Further, the lack of fast and accurate thermal modeling drives thermal designs to be more conservative and leads to spacecraft with larger mass and higher power budgets. The emerging paradigm of physics-informed machine learning (PIML) presents a class of hybrid modeling architectures that address this challenge by combining simplified physics models with machine learning (ML) models, resulting in models which maintain both interpretability and robustness. Such techniques enable designs with reduced mass and power through onboard thermal-state estimation and control and may lead to improved onboard handling of off-nominal states, including unplanned down-time. The PIML model, or hybrid model, presented here consists of a neural network which predicts reduced nodalizations (distribution and size of coarse mesh) given on-orbit thermal load conditions, and subsequently a (relatively coarse) finite-difference model operates on this mesh to predict thermal states. We compare the computational performance and accuracy of the hybrid model to a data-driven neural net model and a high-fidelity finite-difference model of a prototype Earth-orbiting small spacecraft. The PIML-based active nodalization approach provides significantly better generalization than the neural net model and coarse mesh model, while reducing computing cost by up to 1.7x compared to the high-fidelity model.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
471,252
0806.4391
Prediction with Expert Advice in Games with Unbounded One-Step Gains
The games of prediction with expert advice are considered in this paper. We present a modification of the Kalai and Vempala algorithm of following the perturbed leader for the case of unrestrictedly large one-step gains. We show that in the general case the cumulative gain of any probabilistic prediction algorithm can be much worse than the gain of some expert of the pool. Nevertheless, we give a lower bound for this cumulative gain in the general case and construct a universal algorithm which has the optimal performance; we also prove that in the case when the one-step gains of the experts of the pool have "limited deviations" the performance of our algorithm is close to the performance of the best expert.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
1,989
1705.08218
XOR-Sampling for Network Design with Correlated Stochastic Events
Many network optimization problems can be formulated as stochastic network design problems in which edges are present or absent stochastically. Furthermore, protective actions can guarantee that edges will remain present. We consider the problem of finding the optimal protection strategy under a budget limit in order to maximize some connectivity measurements of the network. Previous approaches rely on the assumption that edges are independent. In this paper, we consider a more realistic setting where multiple edges are not independent due to natural disasters or regional events that make the states of multiple edges stochastically correlated. We use Markov Random Fields to model the correlation and define a new stochastic network design framework. We provide a novel algorithm based on Sample Average Approximation (SAA) coupled with a Gibbs or XOR sampler. The experimental results on real road network data show that the policies produced by SAA with the XOR sampler have higher quality and lower variance compared to SAA with Gibbs sampler.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
73,979
2305.02819
Algorithmic Computability of the Capacity of Gaussian Channels with Colored Noise
Designing capacity-achieving coding schemes for the band-limited additive colored Gaussian noise (ACGN) channel has been and still is a challenge. In this paper, the capacity of the band-limited ACGN channel is studied from a fundamental algorithmic point of view by addressing the question of whether or not the capacity can be algorithmically computed. To this aim, the concept of Turing machines is used, which provides fundamental performance limits of digital computers. It is shown that there are band-limited ACGN channels having computable continuous spectral densities whose capacities are non-computable numbers. Moreover, it is demonstrated that for those channels, it is impossible to find computable sequences of asymptotically sharp upper bounds for their capacities.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
362,187
1303.5003
Convolutional Codes: Techniques of Construction
In this paper we show how to construct new convolutional codes from old ones by applying the well-known techniques: puncturing, extending, expanding, direct sum, the (u|u + v) construction and the product code construction. By applying these methods, several new families of convolutional codes can be constructed. As an example of code expansion, families of convolutional codes derived from classical Bose-Chaudhuri-Hocquenghem (BCH), character codes and Melas codes are constructed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
23,049
1609.08685
Understanding and Exploiting Object Interaction Landscapes
Interactions play a key role in understanding objects and scenes, for both virtual and real world agents. We introduce a new general representation for proximal interactions among physical objects that is agnostic to the type of objects or interaction involved. The representation is based on tracking particles on one of the participating objects and then observing them with sensors appropriately placed in the interaction volume or on the interaction surfaces. We show how to factorize these interaction descriptors and project them into a particular participating object so as to obtain a new functional descriptor for that object, its interaction landscape, capturing its observed use in a spatio-temporal framework. Interaction landscapes are independent of the particular interaction and capture subtle dynamic effects in how objects move and behave when in functional use. Our method relates objects based on their function, establishes correspondences between shapes based on functional key points and regions, and retrieves peer and partner objects with respect to an interaction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
61,623
2312.04180
AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform
The emergence of Large Language Models (LLMs) has renewed the debate on the important issue of "technology displacement". While prior research has investigated the effect of information technology in general on human labor from a macro perspective, this paper complements the literature by examining the impact of LLMs on freelancers from a micro perspective. Specifically, we leverage the release of ChatGPT to investigate how AI influences freelancers across different online labor markets (OLMs). Employing the Difference-in-Differences method, we discovered two distinct scenarios following ChatGPT's release: 1) the displacement effect of LLMs, featuring reduced work volume and earnings, as is exemplified by the translation & localization OLM; 2) the productivity effect of LLMs, featuring increased work volume and earnings, as is exemplified by the web development OLM. To shed light on the underlying mechanisms, we developed a Cournot-type competition model to highlight the existence of an inflection point for each occupation which separates the timeline of AI progress into a honeymoon phase and a substitution phase. Before AI performance crosses the inflection point, human labor benefits each time AI improves, resulting in the honeymoon phase. However, after AI performance crosses the inflection point, additional AI enhancement hurts human labor. Further analyzing the progression from ChatGPT 3.5 to 4.0, we found three effect scenarios (i.e., productivity to productivity, displacement to displacement, and productivity to displacement), consistent with the inflection point conjecture. Heterogeneous analyses reveal that U.S. web developers tend to benefit more from the release of ChatGPT compared to their counterparts in other regions, and somewhat surprisingly, experienced translators seem more likely to exit the market than less experienced translators after the release of ChatGPT.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
413,584
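The Difference-in-Differences estimator used in the study above compares the pre/post change in the treated group with the change in a control group, netting out the common time trend. A minimal sketch with hypothetical earnings numbers (not the paper's data):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-Differences: (change in treated) - (change in control)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical weekly earnings before/after a release date for an
# affected occupation (treated) and an unaffected one (control).
effect = did_estimate(
    treated_pre=[100, 110, 90],
    treated_post=[80, 85, 75],
    control_pre=[100, 105, 95],
    control_post=[102, 108, 98],
)
```

A negative `effect` corresponds to the displacement scenario in the abstract, a positive one to the productivity scenario.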
2308.02396
HOOD: Real-Time Human Presence and Out-of-Distribution Detection Using FMCW Radar
Detecting human presence indoors with millimeter-wave frequency-modulated continuous-wave (FMCW) radar faces challenges from both moving and stationary clutter. This work proposes a robust and real-time capable human presence and out-of-distribution (OOD) detection method using 60 GHz short-range FMCW radar. HOOD solves the human presence and OOD detection problems simultaneously in a single pipeline. Our solution relies on a reconstruction-based architecture and works with radar macro and micro range-Doppler images (RDIs). HOOD aims to accurately detect the presence of humans in the presence or absence of moving and stationary disturbers. Since HOOD is also an OOD detector, it aims to detect moving or stationary clutters as OOD in humans' absence and predicts the current scene's output as "no presence." HOOD performs well in diverse scenarios, demonstrating its effectiveness across different human activities and situations. On our dataset collected with a 60 GHz short-range FMCW radar, we achieve an average AUROC of 94.36%. Additionally, our extensive evaluations and experiments demonstrate that HOOD outperforms state-of-the-art (SOTA) OOD detection methods in terms of common OOD detection metrics. Importantly, HOOD also perfectly fits on Raspberry Pi 3B+ with an ARM Cortex-A53 CPU, which showcases its versatility across different hardware environments. Videos of our human presence detection experiments are available at: https://muskahya.github.io/HOOD
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
383,603
1711.06127
SUPRA: Open Source Software Defined Ultrasound Processing for Real-Time Applications
Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. Methods: With SUPRA we propose an open-source pipeline for fully Software Defined Ultrasound Processing for Real-time Applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run-time. Results: The pipeline shows image quality comparable to a clinical system and, backed by point-spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real-time. Conclusions: Our software ultrasound pipeline opens up the research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies the development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
84,715
2402.17780
Constraint Latent Space Matters: An Anti-anomalous Waveform Transformation Solution from Photoplethysmography to Arterial Blood Pressure
Arterial blood pressure (ABP) holds substantial promise for proactive cardiovascular health management. Notwithstanding its potential, the invasive nature of ABP measurements confines their utility primarily to clinical environments, limiting their applicability for continuous monitoring beyond medical facilities. The conversion of photoplethysmography (PPG) signals into ABP equivalents has garnered significant attention due to its potential in revolutionizing cardiovascular disease management. Recent strides in PPG-to-ABP prediction encompass the integration of generative and discriminative models. Despite these advances, the efficacy of these models is curtailed by the latent space shift predicament, stemming from alterations in PPG data distribution across disparate hardware and individuals, potentially leading to distorted ABP waveforms. To tackle this problem, we present an innovative solution named the Latent Space Constraint Transformer (LSCT), leveraging a quantized codebook to yield robust latent spaces by employing multiple discretizing bases. To facilitate improved reconstruction, the Correlation-boosted Attention Module (CAM) is introduced to systematically query pertinent bases on a global scale. Furthermore, to enhance expressive capacity, we propose the Multi-Spectrum Enhancement Knowledge (MSEK), which fosters local information flow within the channels of latent code and provides additional embedding for reconstruction. Through comprehensive experimentation on both publicly available datasets and a private downstream task dataset, the proposed approach demonstrates noteworthy performance enhancements compared to existing methods. Extensive ablation studies further substantiate the effectiveness of each introduced module.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
433,146
2010.09158
Extended Abstract: Motion Planners Learned from Geometric Hallucination
Learning motion planners to move a robot from one point to another within an obstacle-occupied space in a collision-free manner requires either an extensive amount of data or high-quality demonstrations. This requirement is caused by the fact that, among the variety of maneuvers the robot can perform, it is difficult to find the single optimal plan without many trials and errors or an expert who is already capable of doing so. However, given a plan performed in obstacle-free space, it is relatively easy to find an obstacle geometry where this plan is optimal. We consider this "dual" problem of classical motion planning and name this process of finding appropriate obstacle geometry hallucination. In this work, we present two different approaches to hallucinate (1) the most constrained and (2) a minimal obstacle space where a given plan executed during an exploration phase in a completely safe obstacle-free environment remains optimal. We then train an end-to-end motion planner that can produce motions to move through realistic obstacles during deployment. Both methods are tested on a physical mobile robot in real-world cluttered environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
201,427
1910.13294
Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Selective rationalization has become a common mechanism to ensure that predictive models reveal how they use any available features. The selection may be soft or hard, and identifies a subset of input features relevant for prediction. The setup can be viewed as a co-operative game between the selector (aka rationale generator) and the predictor making use of only the selected features. The co-operative setting may, however, be compromised for two reasons. First, the generator typically has no direct access to the outcome it aims to justify, resulting in poor performance. Second, there is typically no control exerted on the information left outside the selection. We revise the overall co-operative framework to address these challenges. We introduce an introspective model which explicitly predicts and incorporates the outcome into the selection process. Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection. We show that the two complementary mechanisms maintain both high predictive accuracy and lead to comprehensive rationales.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
151,353
2304.14647
An Adaptive Policy to Employ Sharpness-Aware Minimization
Sharpness-aware minimization (SAM), which searches for flat minima by min-max optimization, has been shown to be useful in improving model generalization. However, since each SAM update requires computing two gradients, its computational cost and training time are both doubled compared to standard empirical risk minimization (ERM). Recent state-of-the-arts reduce the fraction of SAM updates and thus accelerate SAM by switching between SAM and ERM updates randomly or periodically. In this paper, we design an adaptive policy to employ SAM based on the loss landscape geometry. Two efficient algorithms, AE-SAM and AE-LookSAM, are proposed. We theoretically show that AE-SAM has the same convergence rate as SAM. Experimental results on various datasets and architectures demonstrate the efficiency and effectiveness of the adaptive policy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
361,050
1611.06322
Spotting Rumors via Novelty Detection
Rumour detection is hard because the most accurate systems operate retrospectively, only recognizing rumours once they have collected repeated signals. By then the rumours might have already spread and caused harm. We introduce a new category of features based on novelty, tailored to detect rumours early on. To compensate for the absence of repeated signals, we make use of news wire as an additional data source. Unconfirmed (novel) information with respect to the news articles is considered as an indication of rumours. Additionally we introduce pseudo feedback, which assumes that documents that are similar to previous rumours, are more likely to also be a rumour. Comparison with other real-time approaches shows that novelty based features in conjunction with pseudo feedback perform significantly better, when detecting rumours instantly after their publication.
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
false
64,165
2309.08551
Augmenting conformers with structured state-space sequence models for online speech recognition
Online speech recognition, where the model only accesses context to the left, is an important and challenging use case for ASR systems. In this work, we investigate augmenting neural encoders for online ASR by incorporating structured state-space sequence models (S4), a family of models that provide a parameter-efficient way of accessing arbitrarily long left context. We performed systematic ablation studies to compare variants of S4 models and propose two novel approaches that combine them with convolutions. We found that the most effective design is to stack a small S4 using real-valued recurrent weights with a local convolution, allowing them to work complementarily. Our best model achieves WERs of 4.01%/8.53% on test sets from Librispeech, outperforming Conformers with extensively tuned convolution.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
392,214
2108.02571
Learning Linearized Assignment Flows for Image Labeling
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling. An exact formula is derived for the parameter gradient of any loss function that is constrained by the linear system of ODEs determining the linearized assignment flow. We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation. This enables us to perform parameter learning by Riemannian gradient descent in the parameter space, without the need to backpropagate errors or to solve an adjoint equation. Experiments demonstrate that our method performs as well as highly-tuned machine learning software using automatic differentiation. Unlike methods employing automatic differentiation, our approach yields a low-dimensional representation of internal parameters and their dynamics which helps to understand how assignment flows, and more generally neural networks, work and perform.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
249,375
1909.08267
An NMPC Approach using Convex Inner Approximations for Online Motion Planning with Guaranteed Collision Avoidance
Even though mobile robots have been around for decades, trajectory optimization and continuous time collision avoidance remain subject of active research. Existing methods trade off between path quality, computational complexity, and kinodynamic feasibility. This work approaches the problem using a nonlinear model predictive control (NMPC) framework, that is based on a novel convex inner approximation of the collision avoidance constraint. The proposed Convex Inner ApprOximation (CIAO) method finds kinodynamically feasible and continuous time collision free trajectories, in few iterations, typically one. For a feasible initialization, the approach is guaranteed to find a feasible solution, i.e. it preserves feasibility. Our experimental evaluation shows that CIAO outperforms state of the art baselines in terms of planning efficiency and path quality. Experiments on a robot with 12 states show that it also scales to high-dimensional systems. Furthermore real-world experiments demonstrate its capability of unifying trajectory optimization and tracking for safe motion planning in dynamic environments.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
145,940
2102.06626
Do-calculus enables estimation of causal effects in partially observed biomolecular pathways
Estimating causal queries, such as changes in protein abundance in response to a perturbation, is a fundamental task in the analysis of biomolecular pathways. The estimation requires experimental measurements on the pathway components. However, in practice many pathway components are left unobserved (latent) because they are either unknown, or difficult to measure. Latent variable models (LVMs) are well-suited for such estimation. Unfortunately, LVM-based estimation of causal queries can be inaccurate when parameters of the latent variables are not uniquely identified, or when the number of latent variables is misspecified. This has limited the use of LVMs for causal inference in biomolecular pathways. In this manuscript, we propose a general and practical approach for LVM-based estimation of causal queries. We prove that, despite the challenges above, LVM-based estimators of causal queries are accurate if the queries are identifiable according to Pearl's do-calculus, and describe an algorithm for its estimation. We illustrate the breadth and the practical utility of this approach for estimating causal queries in four synthetic and two experimental case studies, where structures of biomolecular pathways challenge the existing methods for causal query estimation. The code and the data documenting all the case studies are available at \url{https://github.com/srtaheri/LVMwithDoCalculus}
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,821
1807.03555
An Empirical Approach For Probing the Definiteness of Kernels
Models like support vector machines or Gaussian process regression often require positive semi-definite kernels. These kernels may be based on distance functions. While definiteness is proven for common distances and kernels, a proof for a new kernel may require too much time and effort for users who simply aim at practical usage. Furthermore, designing definite distances or kernels may be equally intricate. Finally, models can be enabled to use indefinite kernels. This may deteriorate the accuracy or computational cost of the model. Hence, an efficient method to determine definiteness is required. We propose an empirical approach. We show that sampling as well as optimization with an evolutionary algorithm may be employed to determine definiteness. We provide a proof-of-concept with 16 different distance measures for permutations. Our approach allows us to disprove definiteness if a respective counter-example is found. It can also provide an estimate of how likely it is to obtain indefinite kernel matrices. This provides a simple, efficient tool to decide whether additional effort should be spent on designing/selecting a more suitable kernel or algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
102,556
2412.17408
Just What You Desire: Constrained Timeline Summarization with Self-Reflection for Enhanced Relevance
Given news articles about an entity, such as a public figure or organization, timeline summarization (TLS) involves generating a timeline that summarizes the key events about the entity. However, the TLS task is too underspecified, since what is of interest to each reader may vary, and hence there is not a single ideal or optimal timeline. In this paper, we introduce a novel task, called Constrained Timeline Summarization (CTLS), where a timeline is generated in which all events in the timeline meet some constraint. An example of a constrained timeline concerns the legal battles of Tiger Woods, where only events related to his legal problems are selected to appear in the timeline. We collected a new human-verified dataset of constrained timelines involving 47 entities and 5 constraints per entity. We propose an approach that employs a large language model (LLM) to summarize news articles according to a specified constraint and cluster them to identify key events to include in a constrained timeline. In addition, we propose a novel self-reflection method during summary generation, demonstrating that this approach successfully leads to improved performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
519,969
1108.4801
Supervised Rank Aggregation for Predicting Influence in Networks
Much work in Social Network Analysis has focused on the identification of the most important actors in a social network. This has resulted in several measures of influence and authority. While most such sociometrics (e.g., PageRank) are driven by intuitions based on an actor's location in a network, asking for the "most influential" actors is in itself an ill-posed question, unless it is put in context with a specific measurable task. Constructing a predictive task of interest in a given domain provides a mechanism to quantitatively compare different measures of influence. Furthermore, when we know what type of actionable insight to gather, we need not rely on a single network centrality measure. A combination of measures is more likely to capture various aspects of the social network that are predictive and beneficial for the task. Towards this end, we propose an approach to supervised rank aggregation, driven by techniques from Social Choice Theory. We illustrate the effectiveness of this method through experiments on Twitter and citation networks.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
true
11,795
2404.10710
Autoregressive Pre-Training on Pixels and Texts
The integration of visual and textual information represents a promising direction in the advancement of language models. In this paper, we explore the dual modality of language--both visual and textual--within an autoregressive framework, pre-trained on both document images and texts. Our method employs a multimodal training strategy, utilizing visual data through next patch prediction with a regression head and/or textual data through next token prediction with a classification head. We focus on understanding the interaction between these two modalities and their combined impact on model performance. Our extensive evaluation across a wide range of benchmarks shows that incorporating both visual and textual data significantly improves the performance of pixel-based language models. Remarkably, we find that a unidirectional pixel-based model trained solely on visual data can achieve comparable results to state-of-the-art bidirectional models on several language understanding tasks. This work uncovers the untapped potential of integrating visual and textual modalities for more effective language modeling. We release our code, data, and model checkpoints at \url{https://github.com/ernie-research/pixelgpt}.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
447,215
2110.12033
A Simple Baseline for Low-Budget Active Learning
Active learning focuses on choosing a subset of unlabeled data to be labeled. However, most such methods assume that a large subset of the data can be annotated. We are interested in low-budget active learning where only a small subset (e.g., 0.2% of ImageNet) can be annotated. Instead of proposing a new query strategy to iteratively sample batches of unlabeled data given an initial pool, we learn rich features by an off-the-shelf self-supervised learning method only once, and then study the effectiveness of different sampling strategies given a low labeling budget on a variety of datasets including ImageNet. We show that although the state-of-the-art active learning methods work well given a large labeling budget, a simple K-means clustering algorithm can outperform them on low budgets. We believe this method can be used as a simple baseline for low-budget active learning on image classification. Code is available at: https://github.com/UCDvision/low-budget-al
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
262,686
2010.06218
Self-Supervised Multi-View Synchronization Learning for 3D Pose Estimation
Current state-of-the-art methods cast monocular 3D human pose estimation as a learning problem by training neural networks on large data sets of images and corresponding skeleton poses. In contrast, we propose an approach that can exploit small annotated data sets by fine-tuning networks pre-trained via self-supervised learning on (large) unlabeled data sets. To drive such networks towards supporting 3D pose estimation during the pre-training step, we introduce a novel self-supervised feature learning task designed to focus on the 3D structure in an image. We exploit images extracted from videos captured with a multi-view camera system. The task is to classify whether two images depict two views of the same scene up to a rigid transformation. In a multi-view data set, where objects deform in a non-rigid manner, a rigid transformation occurs only between two views taken at the exact same time, i.e., when they are synchronized. We demonstrate the effectiveness of the synchronization task on the Human3.6M data set and achieve state-of-the-art results in 3D human pose estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
200,411
2206.07499
Mitigating Intra-Cell Pilot Contamination in Massive MIMO: A Rate Splitting Approach
Massive multiple-input multiple-output (MaMIMO) has become an integral part of the fifth-generation (5G) standard, and is envisioned to be further developed in beyond 5G (B5G) networks. With a massive number of antennas at the base station (BS), MaMIMO is best equipped to cater to prominent use cases of B5G networks such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC), or combinations thereof. However, one of the critical challenges to this pursuit is the sporadic access behaviour of a massive number of devices in practical networks that inevitably leads to the conspicuous pilot contamination problem. Conventional linearly precoded physical layer strategies employed for downlink transmission in time division duplex (TDD) MaMIMO would incur a noticeable spectral efficiency (SE) loss in the presence of this pilot contamination. In this paper, we aim to integrate a robust multiple access and interference management strategy named rate-splitting multiple access (RSMA) with TDD MaMIMO for downlink transmission and investigate its SE performance. We propose a novel downlink transmission framework of RSMA in TDD MaMIMO, and devise a precoder design strategy and power allocation schemes to maximize different network utility functions. Numerical results reveal that RSMA is significantly more robust to pilot contamination and always achieves an SE performance that is equal to or better than the conventional linearly precoded MaMIMO transmission strategy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
302,764
1710.04584
Towards Scalable Spectral Clustering via Spectrum-Preserving Sparsification
The eigendecomposition of nearest-neighbor (NN) graph Laplacian matrices is the main computational bottleneck in spectral clustering. In this work, we introduce a highly-scalable, spectrum-preserving graph sparsification algorithm that enables building ultra-sparse NN (u-NN) graphs with guaranteed preservation of the original graph spectra, such as the first few eigenvectors of the original graph Laplacian. Our approach can immediately lead to scalable spectral clustering of large data networks without sacrificing solution quality. The proposed method starts from constructing low-stretch spanning trees (LSSTs) from the original graphs, which is followed by iteratively recovering small portions of "spectrally critical" off-tree edges to the LSSTs by leveraging a spectral off-tree embedding scheme. To determine the suitable amount of off-tree edges to be recovered to the LSSTs, an eigenvalue stability checking scheme is proposed, which enables robustly preserving the first few Laplacian eigenvectors within the sparsified graph. Additionally, an incremental graph densification scheme is proposed for identifying extra edges that are missing in the original NN graphs but can still play important roles in spectral clustering tasks. Our experimental results for a variety of well-known data sets show that the proposed method can dramatically reduce the complexity of NN graphs, leading to significant speedups in spectral clustering.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
82,500
2204.04216
Learning Trajectory-Aware Transformer for Video Super-Resolution
Video super-resolution (VSR) aims to restore a sequence of high-resolution (HR) frames from their low-resolution (LR) counterparts. Although some progress has been made, there are grand challenges in effectively utilizing temporal dependency in entire video sequences. Existing approaches usually align and aggregate video frames from a limited number of adjacent frames (e.g., 5 or 7 frames), which prevents these approaches from achieving satisfactory results. In this paper, we take one step further to enable effective spatio-temporal learning in videos. We propose a novel Trajectory-aware Transformer for Video Super-Resolution (TTVSR). In particular, we formulate video frames into several pre-aligned trajectories which consist of continuous visual tokens. For a query token, self-attention is only learned on relevant visual tokens along spatio-temporal trajectories. Compared with vanilla vision Transformers, such a design significantly reduces the computational cost and enables Transformers to model long-range features. We further propose a cross-scale feature tokenization module to overcome scale-changing problems that often occur in long-range videos. Experimental results demonstrate the superiority of the proposed TTVSR over state-of-the-art models, by extensive quantitative and qualitative evaluations in four widely-used video super-resolution benchmarks. Both code and pre-trained models can be downloaded at https://github.com/researchmm/TTVSR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,576
1302.6957
Ensemble Sparse Models for Image Analysis
Sparse representations with learned dictionaries have been successful in several image analysis applications. In this paper, we propose and analyze the framework of ensemble sparse models, and demonstrate their utility in image restoration and unsupervised clustering. The proposed ensemble model approximates the data as a linear combination of approximations from multiple \textit{weak} sparse models. Theoretical analysis of the ensemble model reveals that even in the worst-case, the ensemble can perform better than any of its constituent individual models. The dictionaries corresponding to the individual sparse models are obtained using either random example selection or boosted approaches. Boosted approaches learn one dictionary per round such that the dictionary learned in a particular round is optimized for the training examples having high reconstruction error in the previous round. Results with compressed recovery show that the ensemble representations lead to a better performance compared to using a single dictionary obtained with the conventional alternating minimization approach. The proposed ensemble models are also used for single image superresolution, and we show that they perform comparably to the recent approaches. In unsupervised clustering, experiments show that the proposed model performs better than baseline approaches in several standard datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
22,492
2012.11936
Knowledge Graphs Evolution and Preservation -- A Technical Report from ISWS 2019
One of the grand challenges discussed during the Dagstuhl Seminar "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web" and described in its report is that of a: "Public FAIR Knowledge Graph of Everything: We increasingly see the creation of knowledge graphs that capture information about the entirety of a class of entities. [...] This grand challenge extends this further by asking if we can create a knowledge graph of "everything" ranging from common sense concepts to location based entities. This knowledge graph should be "open to the public" in a FAIR manner democratizing this mass amount of knowledge." Although linked open data (LOD) is one knowledge graph, it is the closest realisation (and probably the only one) to a public FAIR Knowledge Graph (KG) of everything. Surely, LOD provides a unique testbed for experimenting and evaluating research hypotheses on open and FAIR KG. One of the most neglected FAIR issues about KGs is their ongoing evolution and long term preservation. We want to investigate this problem, that is to understand what preserving and supporting the evolution of KGs means and how these problems can be addressed. Clearly, the problem can be approached from different perspectives and may require the development of different approaches, including new theories, ontologies, metrics, strategies, procedures, etc. This document reports a collaborative effort performed by 9 teams of students, each guided by a senior researcher as their mentor, attending the International Semantic Web Research School (ISWS 2019). Each team provides a different perspective to the problem of knowledge graph evolution substantiated by a set of research questions as the main subject of their investigation. In addition, they provide their working definition for KG preservation and evolution.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
212,782
2004.12762
Fitness Landscape Analysis of Dimensionally-Aware Genetic Programming Featuring Feynman Equations
Genetic programming is an often-used technique for symbolic regression: finding symbolic expressions that match data from an unknown function. To make the symbolic regression more efficient, one can also use dimensionally-aware genetic programming that constrains the physical units of the equation. Nevertheless, there is no formal analysis of how much dimensionality awareness helps in the regression process. In this paper, we conduct a fitness landscape analysis of dimensionally-aware genetic programming search spaces on a subset of equations from Richard Feynman's well-known lectures. We define an initialisation procedure and an accompanying set of neighbourhood operators for conducting the local search within the physical unit constraints. Our experiments show that the added information about the variable dimensionality can efficiently guide the search algorithm. Still, further analysis of the differences between the dimensionally-aware and standard genetic programming landscapes is needed to help in the design of efficient evolutionary operators to be used in a dimensionally-aware regression.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
174,343
2111.05203
Footstep Adjustment for Biped Push Recovery on Slippery Surfaces
Despite extensive studies on motion stabilization of bipeds, they still suffer from the lack of disturbance coping capability on slippery surfaces. In this paper, a novel controller for stabilizing a bipedal motion in its sagittal plane is developed with regard to the surface friction limitations. By taking into account the physical limitation of the surface in the stabilization trend, a more advanced level of reliability is achieved that provides higher functionalities such as push recovery on low-friction surfaces and prevents the stabilizer from overreacting. The discrete event-based strategy consists of modifying the step length and time period at the beginning of each footstep in order to reestablish the necessary conditions for stability while taking into account the surface friction limitation as a constraint to prevent slippage. Adjusting footsteps to prevent slippage in confronting external disturbances is perceived as a novel strategy for keeping stability, quite similar to human reaction. The developed methodology consists of rough closed-form solutions utilizing elementary math operations for obtaining the control inputs, allowing to reach a balance between convergence and computational cost, which is quite suitable for real-time operations even with modest computational hardware. Several numerical simulations, including push recovery and switching between different gaits on low-friction surfaces, are performed to demonstrate the effectiveness of the proposed controller. In correlation with human-gait experience, the results also reveal some physical aspects favoring stability and the fact of switching between gaits to reduce the risk of falling in confronting different conditions.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
265,723
2111.10912
Johnson Coverage Hypothesis: Inapproximability of k-means and k-median in L_p metrics
k-median and k-means are the two most popular objectives for clustering algorithms. Despite intensive effort, a good understanding of the approximability of these objectives, particularly in $\ell_p$-metrics, remains a major open problem. In this paper, we significantly improve upon the hardness of approximation factors known in the literature for these objectives in $\ell_p$-metrics. We introduce a new hypothesis called the Johnson Coverage Hypothesis (JCH), which roughly asserts that the well-studied max k-coverage problem on set systems is hard to approximate to a factor greater than 1-1/e, even when the membership graph of the set system is a subgraph of the Johnson graph. We then show that together with generalizations of the embedding techniques introduced by Cohen-Addad and Karthik (FOCS '19), JCH implies hardness of approximation results for k-median and k-means in $\ell_p$-metrics for factors which are close to the ones obtained for general metrics. In particular, assuming JCH we show that it is hard to approximate the k-means objective: $\bullet$ Discrete case: To a factor of 3.94 in the $\ell_1$-metric and to a factor of 1.73 in the $\ell_2$-metric; this improves upon the previous factors of 1.56 and 1.17 respectively, obtained under UGC. $\bullet$ Continuous case: To a factor of 2.10 in the $\ell_1$-metric and to a factor of 1.36 in the $\ell_2$-metric; this improves upon the previous factor of 1.07 in the $\ell_2$-metric obtained under UGC. We also obtain similar improvements under JCH for the k-median objective. Additionally, we prove a weak version of JCH using the work of Dinur et al. (SICOMP '05) on Hypergraph Vertex Cover, and recover all the results stated above of Cohen-Addad and Karthik (FOCS '19) to (nearly) the same inapproximability factors but now under the standard NP$\neq$P assumption (instead of UGC).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
267,484
2104.05160
Feature Decomposition and Reconstruction Learning for Effective Facial Expression Recognition
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules including an intra-feature relation modeling module and an inter-feature relation modeling module are developed in FRN. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
229,617
1807.00454
Multi-Stage Complex Contagions in Random Multiplex Networks
Complex contagion models have been developed to understand a wide range of social phenomena such as adoption of cultural fads, the diffusion of belief, norms, and innovations in social networks, and the rise of collective action to join a riot. Most existing works focus on contagions where individuals' states are represented by {\em binary} variables, and propagation takes place over a single isolated network. However, characterization of an individual's standing on a given matter as a binary state might be overly simplistic as most of our opinions, feelings, and perceptions vary over more than two states. Also, most real-world contagions take place over multiple networks (e.g., Twitter and Facebook) or involve {\em multiplex} networks where individuals engage in different {\em types} of relationships (e.g., acquaintance, co-worker, family, etc.). To this end, this paper studies {\em multi-stage} complex contagions that take place over multi-layer or multiplex networks. Under a linear threshold based contagion model, we give analytic results for the probability and expected size of \textit{global} cascades, i.e., cases where a randomly chosen node can initiate a propagation that eventually reaches a {\em positive} fraction of the whole population. Analytic results are also confirmed and supported by an extensive numerical study. In particular, we demonstrate how the dynamics of complex contagions is affected by the extra weight exerted by \textit{hyper-active} nodes and by the structural properties of the networks involved. Among other things, we reveal an interesting connection between the assortativity of a network and the impact of \textit{hyper-active} nodes on the cascade size.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
101,839
2004.03627
TypeNet: Scaling up Keystroke Biometrics
We study the suitability of keystroke dynamics to authenticate 100K users typing free-text. For this, we first analyze to what extent our method based on a Siamese Recurrent Neural Network (RNN) is able to authenticate users when the amount of data per user is scarce, a common scenario in free-text keystroke authentication. With 1K users for testing the network, a population size comparable to previous works, TypeNet obtains an equal error rate of 4.8% using only 5 enrollment sequences and 1 test sequence per user with 50 keystrokes per sequence. Using the same amount of data per user, as the number of test users is scaled up to 100K, the performance in comparison to 1K decays relatively by less than 5%, demonstrating the potential of TypeNet to scale well to a large number of users. Our experiments are conducted with the Aalto University keystroke database. To the best of our knowledge, this is the largest free-text keystroke database captured, with more than 136M keystrokes from 168K users.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
171,623
2412.00034
Data-Driven Prescriptive Analytics Applications: A Comprehensive Survey
Prescriptive Analytics (PSA), an emerging business analytics field suggesting concrete options for solving business problems, has seen an increasing amount of interest after more than a decade of multidisciplinary research. This paper is a comprehensive survey of existing applications within PSA in terms of their use cases, methodologies, and possible future research directions. To ensure a manageable scope, we focus on PSA applications that develop data-driven, automatic workflows, i.e. Data-Driven PSA (DPSA). Following a systematic methodology, we identify and include 104 papers in our survey. As our key contributions, we derive a number of novel conceptual models: In terms of use cases, we derive 10 application domains for DPSA, from Healthcare to Manufacturing, and subsumed problem types within each. In terms of individual method usage, we derive 5 method types and map them to a comprehensive taxonomy of method usage within DPSA applications, covering mathematical optimization, data mining and machine learning, probabilistic modelling, domain expertise, as well as simulations. As for combined method usage, we provide a statistical overview of how different method usage combinations are distributed and derive 2 generic workflow patterns along with subsumed workflow patterns, combining methods by either sequential or simultaneous relationships. Finally, we derive 4 possible research directions based on frequently recurring issues among surveyed papers, suggesting new frontiers in terms of methods, tools, and use cases.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
512,451
2111.06426
A Robust Mean-field Game of Boltzmann-Vlasov-like Traffic Flow
Historically, traffic modelling approaches have taken either a particle-like (microscopic) approach, or a gas-like (meso- or macroscopic) approach. Until recently with the introduction of mean-field games to the controls community, there has not been a rigorous framework to facilitate passage between controls for the microscopic models and the macroscopic models. We begin this work with a particle-based model of autonomous vehicles subject to drag and unknown disturbances, noise, and a speed limit in addition to the control. We formulate a robust stochastic differential game on the particles. We pass formally to the infinite-particle limit to obtain a robust mean-field game PDE system. We solve the mean-field game PDE system numerically and discuss the results. In particular, we obtain an optimal control which increases the bulk velocity of the traffic flow while reducing congestion.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
266,072
2211.10643
Downscaled Representation Matters: Improving Image Rescaling with Collaborative Downscaled Images
Deep networks have achieved great success in image rescaling (IR) task that seeks to learn the optimal downscaled representations, i.e., low-resolution (LR) images, to reconstruct the original high-resolution (HR) images. Compared with super-resolution methods that consider a fixed downscaling scheme, e.g., bicubic, IR often achieves significantly better reconstruction performance thanks to the learned downscaled representations. This highlights the importance of a good downscaled representation in image reconstruction tasks. Existing IR methods mainly learn the downscaled representation by jointly optimizing the downscaling and upscaling models. Unlike them, we seek to improve the downscaled representation through a different and more direct way: optimizing the downscaled image itself instead of the down-/upscaling models. Specifically, we propose a collaborative downscaling scheme that directly generates the collaborative LR examples by descending the gradient w.r.t. the reconstruction loss on them to benefit the IR process. Furthermore, since LR images are downscaled from the corresponding HR images, one can also improve the downscaled representation if we have a better representation in the HR domain. Inspired by this, we propose a Hierarchical Collaborative Downscaling (HCD) method that performs gradient descent in both HR and LR domains to improve the downscaled representations. Extensive experiments show that our HCD significantly improves the reconstruction performance both quantitatively and qualitatively. Moreover, we also highlight the flexibility of our HCD since it can generalize well across diverse IR models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,380
2409.12632
Counterfactual Explanations for Clustering Models
Clustering algorithms rely on complex optimisation processes that may be difficult to comprehend, especially for individuals who lack technical expertise. While many explainable artificial intelligence techniques exist for supervised machine learning, unsupervised learning -- and clustering in particular -- has been largely neglected. To complicate matters further, the notion of a ``true'' cluster is inherently challenging to define. These facets of unsupervised learning and its explainability make it difficult to foster trust in such methods and curtail their adoption. To address these challenges, we propose a new, model-agnostic technique for explaining clustering algorithms with counterfactual statements. Our approach relies on a novel soft-scoring method that captures the spatial information utilised by clustering models. It builds upon a state-of-the-art Bayesian counterfactual generator for supervised learning to deliver high-quality explanations. We evaluate its performance on five datasets and two clustering algorithms, and demonstrate that introducing soft scores to guide counterfactual search significantly improves the results.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
489,663
2306.04806
Autonomous Capability Assessment of Sequential Decision-Making Systems in Stochastic Settings (Extended Version)
It is essential for users to understand what their AI systems can and can't do in order to use them safely. However, the problem of enabling users to assess AI systems with sequential decision-making (SDM) capabilities is relatively understudied. This paper presents a new approach for modeling the capabilities of black-box AI systems that can plan and act, along with the possible effects and requirements for executing those capabilities in stochastic settings. We present an active-learning approach that can effectively interact with a black-box SDM system and learn an interpretable probabilistic model describing its capabilities. Theoretical analysis of the approach identifies the conditions under which the learning process is guaranteed to converge to the correct model of the agent; empirical evaluations on different agents and simulated scenarios show that this approach is few-shot generalizable and can effectively describe the capabilities of arbitrary black-box SDM agents in a sample-efficient manner.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
371,904
2210.15559
Robust Monocular Localization of Drones by Adapting Domain Maps to Depth Prediction Inaccuracies
We present a novel monocular localization framework by jointly training deep learning-based depth prediction and Bayesian filtering-based pose reasoning. The proposed cross-modal framework significantly outperforms deep learning-only predictions with respect to model scalability and tolerance to environmental variations. Specifically, we show little-to-no degradation of pose accuracy even with extremely poor depth estimates from a lightweight depth predictor. Our framework also maintains high pose accuracy in extreme lighting variations compared to standard deep learning, even without explicit domain adaptation. By openly representing the map and intermediate feature maps (such as depth estimates), our framework also allows for faster updates and reusing intermediate predictions for other tasks, such as obstacle avoidance, resulting in much higher resource efficiency.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
327,005
2402.13270
Global Tropical Cyclone Intensity Forecasting with Multi-modal Multi-scale Causal Autoregressive Model
Accurate forecasting of Tropical cyclone (TC) intensity is crucial for formulating disaster risk reduction strategies. Current methods predominantly rely on limited spatiotemporal information from ERA5 data and neglect the causal relationships between these physical variables, failing to fully capture the spatial and temporal patterns required for intensity forecasting. To address this issue, we propose a Multi-modal multi-Scale Causal AutoRegressive model (MSCAR), which is the first model that combines causal relationships with large-scale multi-modal data for global TC intensity autoregressive forecasting. Furthermore, given the current absence of a TC dataset that offers a wide range of spatial variables, we present the Satellite and ERA5-based Tropical Cyclone Dataset (SETCD), which stands as the longest and most comprehensive global dataset related to TCs. Experiments on the dataset show that MSCAR outperforms the state-of-the-art methods, achieving maximum reductions in global and regional forecast errors of 9.52% and 6.74%, respectively. The code and dataset are publicly available at https://anonymous.4open.science/r/MSCAR.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
431,180
2112.01984
Fixed-Gain AF Relaying for RF-THz Wireless System over $\alpha$-$\kappa$-$\mu$ Shadowed and $\alpha$-$\mu$ Channels
Recent research investigates the decode-and-forward (DF) relaying for mixed radio frequency (RF) and terahertz (THz) wireless links with zero-boresight pointing errors. In this letter, we analyze the performance of a fixed-gain amplify-and-forward (AF) relaying for the RF-THz link to interface the access network on the RF technology with wireless THz transmissions. We develop probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end SNR for the relay-assisted system in terms of bivariate Fox's H function considering $\alpha$-$\mu$ fading for the THz system with non-zero boresight pointing errors and $\alpha$-$\kappa$-$\mu$ shadowed ($\alpha$-KMS) fading model for the RF link. Using the derived PDF and CDF, we present exact analytical expressions of the outage probability, average bit-error-rate (BER), and ergodic capacity of the considered system. We also analyze the outage probability and average BER asymptotically for a better insight into the system behavior at high SNR. We use simulations to compare the performance of the AF relaying having a semi-blind gain factor with the recently proposed DF relaying for THz-RF transmissions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
269,694
2005.04354
Exact Asymptotics for Learning Tree-Structured Graphical Models with Side Information: Noiseless and Noisy Samples
Given side information that an Ising tree-structured graphical model is homogeneous and has no external field, we derive the exact asymptotics of learning its structure from independently drawn samples. Our results, which leverage the use of probabilistic tools from the theory of strong large deviations, refine the large deviation (error exponents) results of Tan, Anandkumar, Tong, and Willsky [IEEE Trans. on Inform. Th., 57(3):1714--1735, 2011] and strictly improve those of Bresler and Karzand [Ann. Statist., 2020]. In addition, we extend our results to the scenario in which the samples are observed in random noise. In this case, we show that they strictly improve on the recent results of Nikolakakis, Kalogerias, and Sarwate [Proc. AISTATS, 1771--1782, 2019]. Our theoretical results demonstrate keen agreement with experimental results for sample sizes as small as that in the hundreds.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
176,436
2404.11357
Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving
Object detection tasks, crucial in safety-critical systems like autonomous driving, focus on pinpointing object locations. These detectors are known to be susceptible to backdoor attacks. However, existing backdoor techniques have primarily been adapted from classification tasks, overlooking deeper vulnerabilities specific to object detection. This paper is dedicated to bridging this gap by introducing Detector Collapse (DC), a brand-new backdoor attack paradigm tailored for object detection. DC is designed to instantly incapacitate detectors (i.e., severely impairing the detector's performance and culminating in a denial-of-service). To this end, we develop two innovative attack schemes: Sponge for triggering widespread misidentifications and Blinding for rendering objects invisible. Remarkably, we introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments. Our experiments on different detectors across several benchmarks show a significant improvement ($\sim$10\%-60\% absolute and $\sim$2-7$\times$ relative) in attack efficacy over state-of-the-art attacks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
447,479
2007.15342
The optimality of syntactic dependency distances
It is often stated that human languages, as other biological systems, are shaped by cost-cutting pressures but, to what extent? Attempts to quantify the degree of optimality of languages by means of an optimality score have been scarce and focused mostly on English. Here we recast the problem of the optimality of the word order of a sentence as an optimization problem on a spatial network where the vertices are words, arcs indicate syntactic dependencies and the space is defined by the linear order of the words in the sentence. We introduce a new score to quantify the cognitive pressure to reduce the distance between linked words in a sentence. The analysis of sentences from 93 languages representing 19 linguistic families reveals that half of languages are optimized to a 70% or more. The score indicates that distances are not significantly reduced in a few languages and confirms two theoretical predictions, i.e. that longer sentences are more optimized and that distances are more likely to be longer than expected by chance in short sentences. We present a new hierarchical ranking of languages by their degree of optimization. The new score has implications for various fields of language research (dependency linguistics, typology, historical linguistics, clinical linguistics and cognitive science). Finally, the principles behind the design of the score have implications for network science.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
189,647
2409.11884
Recent Advances in OOD Detection: Problems and Approaches
Out-of-distribution (OOD) detection aims to detect test samples outside the training category space, which is an essential component in building reliable machine learning systems. Existing reviews on OOD detection primarily focus on method taxonomy, surveying the field by categorizing various approaches. However, many recent works concentrate on non-traditional OOD detection scenarios, such as test-time adaptation, multi-modal data sources and other novel contexts. In this survey, we uniquely review recent advances in OOD detection from the problem scenario perspective for the first time. According to whether the training process is completely controlled, we divide OOD detection methods into training-driven and training-agnostic. Besides, considering the rapid development of pre-trained models, large pre-trained model-based OOD detection is also regarded as an important category and discussed separately. Furthermore, we provide a discussion of the evaluation scenarios, a variety of applications, and several future research directions. We believe this survey with new taxonomy will benefit the proposal of new methods and the expansion of more practical scenarios. A curated list of related papers is provided in the Github repository: https://github.com/shuolucs/Awesome-Out-Of-Distribution-Detection
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
489,353
2401.01054
Elastic Multi-Gradient Descent for Parallel Continual Learning
The goal of Continual Learning (CL) is to continuously learn from new data streams and accomplish the corresponding tasks. Previously studied CL assumes that data are given in sequence nose-to-tail for different tasks, thus indeed belonging to Serial Continual Learning (SCL). This paper studies the novel paradigm of Parallel Continual Learning (PCL) in dynamic multi-task scenarios, where a diverse set of tasks is encountered at different time points. PCL presents challenges due to the training of an unspecified number of tasks with varying learning progress, leading to the difficulty of guaranteeing effective model updates for all encountered tasks. In our previous conference work, we focused on measuring and reducing the discrepancy among gradients in a multi-objective optimization problem, which, however, may still contain negative transfers in every model update. To address this issue, in the dynamic multi-objective optimization problem, we introduce task-specific elastic factors to adjust the descent direction towards the Pareto front. The proposed method, called Elastic Multi-Gradient Descent (EMGD), ensures that each update follows an appropriate Pareto descent direction, minimizing any negative impact on previously learned tasks. To balance the training between old and new tasks, we also propose a memory editing mechanism guided by the gradient computed using EMGD. This editing process updates the stored data points, reducing interference in the Pareto descent direction from previous tasks. Experiments on public datasets validate the effectiveness of our EMGD in the PCL setting.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
419,205
2008.07730
Parallel Extraction of Long-term Trends and Short-term Fluctuation Framework for Multivariate Time Series Forecasting
Multivariate time series forecasting is widely used in various fields. Reasonable prediction results can assist people in planning and decision-making, generate benefits and avoid risks. Normally, there are two characteristics of time series, that is, long-term trend and short-term fluctuation. For example, stock prices will have a long-term upward trend with the market, but there may be a small decline in the short term. These two characteristics are often relatively independent of each other. However, the existing prediction methods often do not distinguish between them, which reduces the accuracy of the prediction model. In this paper, a MTS forecasting framework that can capture the long-term trends and short-term fluctuations of time series in parallel is proposed. This method uses the original time series and its first difference to characterize long-term trends and short-term fluctuations. Three prediction sub-networks are constructed to predict long-term trends, short-term fluctuations and the final value to be predicted. In the overall optimization goal, the idea of multi-task learning is used for reference, which is to make the prediction results of long-term trends and short-term fluctuations as close to the real values as possible while requiring to approximate the values to be predicted. In this way, the proposed method uses more supervision information and can more accurately capture the changing trend of the time series, thereby improving the forecasting performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
192,205
2202.08703
Robust Frequency Constrained UC Using Data Driven Logistic Regression for Island Power Systems
In the current practice of short-term power scheduling, online power reserves are used to address generation mismatches and contingencies. Neither online inertia nor the speed of the committed units is considered in the scheduling process. With the increasing injection of uncertain renewable energy sources, this practice is starting to fall short especially in island power systems, where the primary frequency response is already scarce, and any contingency leads to potentially poor frequency response. This paper introduces a data driven linear constraint to improve the post-fault frequency quality in island power systems. A coherent initial data-set is obtained by simulating the system frequency response of single outages. Then logistic regression is employed as a predictive analytic procedure to differentiate the acceptable and unacceptable incidents. To compare the conventional methods with the proposed approach and also to handle the uncertain nature of renewable energy generation, an adaptive robust unit commitment formulation is utilized. Results for the island power system of La Palma show that depending on the chosen cut-point on the logistic regression estimation the proposed method can improve the frequency response quality of the system while reducing the operation costs.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
280,965
2104.03058
Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms
Graph neural networks (GNN) have achieved state-of-the-art performance on various industrial tasks. However, the poor efficiency of GNN inference and frequent Out-Of-Memory (OOM) problem limit the successful application of GNN on edge computing platforms. To tackle these problems, a feature decomposition approach is proposed for memory efficiency optimization of GNN inference. The proposed approach could achieve outstanding optimization on various GNN models, covering a wide range of datasets, which speeds up the inference by up to 3x. Furthermore, the proposed feature decomposition could significantly reduce the peak memory usage (up to 5x in memory efficiency improvement) and mitigate OOM problems during GNN inference.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
228,954
cs/0407047
Channel-Independent and Sensor-Independent Stimulus Representations
This paper shows how a machine, which observes stimuli through an uncharacterized, uncalibrated channel and sensor, can glean machine-independent information (i.e., channel- and sensor-independent information) about the stimuli. First, we demonstrate that a machine defines a specific coordinate system on the stimulus state space, with the nature of that coordinate system depending on the device's channel and sensor. Thus, machines with different channels and sensors "see" the same stimulus trajectory through state space, but in different machine-specific coordinate systems. For a large variety of physical stimuli, statistical properties of that trajectory endow the stimulus configuration space with differential geometric structure (a metric and parallel transfer procedure), which can then be used to represent relative stimulus configurations in a coordinate-system-independent manner (and, therefore, in a channel- and sensor-independent manner). The resulting description is an "inner" property of the stimulus time series in the sense that it does not depend on extrinsic factors like the observer's choice of a coordinate system in which the stimulus is viewed (i.e., the observer's choice of channel and sensor). This methodology is illustrated with analytic examples and with a numerically simulated experiment. In an intelligent sensory device, this kind of representation "engine" could function as a "front-end" that passes channel/sensor-independent stimulus representations to a pattern recognition module. After a pattern recognizer has been trained in one of these devices, it could be used without change in other devices having different channels and sensors.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
538,276
2409.14083
SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information
Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs' Retrieval-Augmented Generation (RAG) capabilities remains underutilized. Existing works either focus solely on the text modality or are limited to specific tasks. Moreover, most LVLMs struggle to selectively utilize retrieved information and are sensitive to irrelevant or misleading references. To address these challenges, we propose a self-refinement framework designed to teach LVLMs to Selectively Utilize Retrieved Information (SURf). Specifically, when given questions that are incorrectly answered by the LVLM backbone, we obtain references that help correct the answers (positive references) and those that do not (negative references). We then fine-tune the LVLM backbone using a combination of these positive and negative references. Our experiments across three tasks and seven datasets demonstrate that our framework significantly enhances LVLMs' ability to effectively utilize retrieved multimodal references and improves their robustness against irrelevant or misleading information. The source code is available at https://github.com/GasolSun36/SURf.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
490,315
2311.06554
PGODE: Towards High-quality System Dynamics Modeling
This paper studies the problem of modeling multi-agent dynamical systems, where agents could interact mutually to influence their behaviors. Recent research predominantly uses geometric graphs to depict these mutual interactions, which are then captured by powerful graph neural networks (GNNs). However, predicting interacting dynamics in challenging scenarios such as out-of-distribution shift and complicated underlying rules remains unsolved. In this paper, we propose a new approach named Prototypical Graph ODE (PGODE) to address the problem. The core of PGODE is to incorporate prototype decomposition from contextual knowledge into a continuous graph ODE framework. Specifically, PGODE employs representation disentanglement and system parameters to extract both object-level and system-level contexts from historical trajectories, which allows us to explicitly model their independent influence and thus enhances the generalization capability under system changes. Then, we integrate these disentangled latent representations into a graph ODE model, which determines a combination of various interacting prototypes for enhanced model expressivity. The entire model is optimized using an end-to-end variational inference framework to maximize the likelihood. Extensive experiments in both in-distribution and out-of-distribution settings validate the superiority of PGODE compared to various baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
406,987
1512.05868
On Voting and Facility Location
We study mechanisms for candidate selection that seek to minimize the social cost, where voters and candidates are associated with points in some underlying metric space. The social cost of a candidate is the sum of its distances to each voter. Some of our work assumes that these points can be modeled on a real line, but other results of ours are more general. A question closely related to candidate selection is that of minimizing the sum of distances for facility location. The difference is that in our setting there is a fixed set of candidates, whereas the large body of work on facility location seems to consider every point in the metric space to be a possible candidate. This gives rise to three types of mechanisms which differ in the granularity of their input space (voting, ranking and location mechanisms). We study the relationships between these three classes of mechanisms. While it may seem that Black's 1948 median algorithm is optimal for candidate selection on the line, this is not the case. We give matching upper and lower bounds for a variety of settings. In particular, when candidates and voters are on the line, our universally truthful spike mechanism gives a [tight] approximation of two. When assessing candidate selection mechanisms, we seek several desirable properties: (a) efficiency (minimizing the social cost) (b) truthfulness (dominant strategy incentive compatibility) and (c) simplicity (a smaller input space). We quantify the effect that truthfulness and simplicity impose on the efficiency.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
50,265
1901.11074
A Convolutional Neural Network for the Automatic Diagnosis of Collagen VI related Muscular Dystrophies
The development of machine learning systems for the diagnosis of rare diseases is challenging, mainly due to the lack of data to study them. Despite this challenge, this paper proposes a system for the Computer Aided Diagnosis (CAD) of low-prevalence, congenital muscular dystrophies from confocal microscopy images. The proposed CAD system relies on a Convolutional Neural Network (CNN) which performs an independent classification for non-overlapping patches tiling the input image, and generates an overall decision summarizing the individual decisions for the patches on the query image. This decision scheme points to possibly problematic areas in the input images and provides a global quantitative evaluation of the state of the patients, which is fundamental for diagnosis and for monitoring the efficacy of therapies.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
120,159
2004.08582
BiFNet: Bidirectional Fusion Network for Road Segmentation
Multi-sensor fusion-based road segmentation plays an important role in intelligent driving systems since it provides a drivable area. The existing mainstream fusion methods mainly perform feature fusion in the image space domain, which causes perspective compression of the road and degrades performance on distant road regions. Considering that the bird's eye view (BEV) of the LiDAR preserves the spatial structure in the horizontal plane, this paper proposes a bidirectional fusion network (BiFNet) to fuse the image and the BEV of the point cloud. The network consists of two modules: 1) a dense space transformation module, which handles the mutual conversion between the camera image space and the BEV space; 2) a context-based feature fusion module, which fuses the information from the different sensors based on the scenes from the corresponding features. This method has achieved competitive results on the KITTI dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,107
2309.13130
Insights from an OTTR-centric Ontology Engineering Methodology
OTTR is a language for representing ontology modeling patterns, which enables building ontologies or knowledge bases by instantiating templates. Thereby, particularities of the ontological representation language are hidden from the domain experts, and it enables ontology engineers to, to some extent, separate the process of deciding what information to model from deciding how to model the information, e.g., which design patterns to use. Certain decisions can thus be postponed for the benefit of focusing on one of these processes. To date, only a few works on ontology engineering in which ontology templates are applied are described in the literature. In this paper, we outline our methodology and report findings from our ontology engineering activities in the domain of Material Science. In these activities, OTTR templates play a key role. Our ontology engineering process is bottom-up, as we begin modeling activities from existing data that is then, via templates, fed into a knowledge graph, and it is top-down, as we first focus on which data to model and postpone the decision of how to model the data. We find, among other things, that OTTR templates are especially useful as a means of communication with domain experts. Furthermore, we find that because OTTR templates encapsulate modeling decisions, the engineering process becomes flexible, meaning that design decisions can be changed at little cost.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
394,067
2205.03513
Digital Twin Framework for Time to Failure Forecasting of Wind Turbine Gearbox: A Concept
A wind turbine is a complex machine whose rotating and non-rotating equipment are both sensitive to faults. Due to increased wear and tear, the maintenance aspect of a wind turbine is of critical importance. Unexpected failure of wind turbine components can lead to increased O&M costs, which ultimately reduces the effective power capture of a wind farm. Fault detection in wind turbines is often supplemented with SCADA data available from wind farm operators in a time-series format with a 10-minute sample interval. Moreover, time-series analysis and data representation have become powerful tools for gaining a deeper understanding of the dynamic processes in complex machinery like wind turbines. Wind turbine SCADA data is usually available in the form of a multivariate time series with variables like gearbox oil temperature, gearbox bearing temperature, nacelle temperature, rotor speed, and active power produced. In this preprint, we discuss the concept of a digital twin for time-to-failure forecasting of the wind turbine gearbox, where a predictive module is continuously updated with real-time SCADA data and generates meaningful insights for the wind farm operator.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
295,304
2012.02994
Attention-Driven Dynamic Graph Convolutional Network for Multi-Label Image Recognition
Recent studies often exploit Graph Convolutional Networks (GCNs) to model label dependencies in order to improve recognition accuracy for multi-label image recognition. However, constructing a graph by counting the label co-occurrence possibilities of the training data may degrade model generalizability, especially when occasional co-occurring objects appear in test images. Our goal is to eliminate such bias and enhance the robustness of the learnt features. To this end, we propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) to dynamically generate a specific graph for each image. ADD-GCN adopts a Dynamic Graph Convolutional Network (D-GCN) to model the relations among content-aware category representations that are generated by a Semantic Attention Module (SAM). Extensive experiments on public multi-label benchmarks demonstrate the effectiveness of our method, which achieves mAPs of 85.2%, 96.0%, and 95.5% on MS-COCO, VOC2007, and VOC2012, respectively, and outperforms current state-of-the-art methods by a clear margin. All code can be found at https://github.com/Yejin0111/ADD-GCN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
209,943
2412.19876
WiSER-X: Wireless Signals-based Efficient Decentralized Multi-Robot Exploration without Explicit Information Exchange
We introduce a Wireless Signals-based Efficient multi-Robot eXploration (WiSER-X) algorithm applicable to a decentralized team of robots exploring an unknown environment under communication bandwidth constraints. WiSER-X relies only on local inter-robot relative position estimates, which can be obtained by exchanging signal pings from onboard sensors such as WiFi and Ultra-Wide Band, among others, to inform the exploration decisions of individual robots and minimize redundant coverage overlap. Furthermore, WiSER-X enables asynchronous termination without requiring a shared map between the robots. It also adapts to heterogeneous robot behaviors, and even complete failures, in an unknown environment while ensuring complete coverage. Simulations show that WiSER-X leads to 58% lower overlap than a zero-information-sharing baseline (algorithm-1) and only 23% more overlap than a full-information-sharing baseline (algorithm-2).
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
521,010
1105.2864
The Rate-Distortion Function for Product of Two Sources with Side-Information at Decoders
This paper investigates a lossy source coding problem in which two decoders can each access their own side-information. The correlated sources are a product of two component correlated sources, and we exclusively investigate the case in which each component is degraded. We derive the rate-distortion function for that case and make the following observations. When the components are degraded in matched order, the rate-distortion function of the product sources is equal to the sum of the component-wise rate-distortion functions. On the other hand, the former is strictly smaller than the latter when the component sources are degraded in mismatched order. The converse proof for the mismatched case is motivated by the enhancement technique used for broadcast channels. For binary Hamming and Gaussian examples, we evaluate the rate-distortion functions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,368
2301.12995
FedFA: Federated Feature Augmentation
Federated learning is a distributed paradigm that allows multiple parties to collaboratively train deep models without exchanging the raw data. However, the data distribution among clients is naturally non-i.i.d., which leads to severe degradation of the learnt model. The primary goal of this paper is to develop a robust federated learning algorithm to address feature shift in clients' samples, which can be caused by various factors, e.g., acquisition differences in medical imaging. To reach this goal, we propose FedFA to tackle federated learning from the distinct perspective of federated feature augmentation. FedFA is based on a major insight: each client's data distribution can be characterized by statistics (i.e., mean and standard deviation) of latent features, and these local statistics can be manipulated globally, i.e., based on information from the entire federation, to give clients a better sense of the underlying distribution and thereby alleviate local data bias. Based on this insight, we propose to augment each local feature statistic probabilistically based on a normal distribution, whose mean is the original statistic and whose variance quantifies the augmentation scope. Key to our approach is the determination of a meaningful Gaussian variance, which is accomplished by taking into account not only the biased data of each individual client, but also the underlying feature statistics characterized by all participating clients. We offer both theoretical and empirical justifications to verify the effectiveness of FedFA. Our code is available at https://github.com/tfzhou/FedFA.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
342,749
2402.16749
MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model
With the evolution of storage and communication protocols, ultra-low bitrate image compression has become a highly demanding topic. However, existing compression algorithms must sacrifice either consistency with the ground truth or perceptual quality at ultra-low bitrates. In recent years, the rapid development of Large Multimodal Models (LMMs) has made it possible to balance these two goals. To solve this problem, this paper proposes a method called Multimodal Image Semantic Compression (MISC), which consists of an LMM encoder for extracting the semantic information of the image, a map encoder to locate the region corresponding to the semantics, an image encoder that generates an extremely compressed bitstream, and a decoder that reconstructs the image based on the above information. Experimental results show that our proposed MISC is suitable for compressing both traditional Natural Sense Images (NSIs) and emerging AI-Generated Images (AIGIs). It can achieve optimal consistency and perception results while saving 50% of the bitrate, which gives it strong potential applications in the next generation of storage and communication. The code will be released on https://github.com/lcysyzxdxc/MISC.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
432,678
0812.1780
On the Energy Efficiency of Orthogonal Signaling
In this paper, transmission over the additive white Gaussian noise (AWGN) channel, and over coherent and noncoherent fading channels, using M-ary orthogonal frequency-shift keying (FSK) or on-off frequency-shift keying (OOFSK) is considered. The receiver is assumed to perform hard-decision detection. In this setting, the energy required to reliably send one bit of information is investigated. It is shown that for fixed M and duty cycle, bit energy requirements grow without bound as the signal-to-noise ratio (SNR) vanishes. The minimum bit energy values are numerically obtained for different values of M and the duty cycle. The impact of fading on the energy efficiency is identified. Requirements to approach the minimum bit energy of -1.59 dB are determined.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,771