Dataset schema (arXiv multi-label subject classification):

id: string, 9 to 16 characters (arXiv identifier)
title: string, 4 to 278 characters
abstract: string, 3 to 4.08k characters
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (one flag per subject category)
__index_level_0__: int64, 0 to 541k

Each record below lists an id, a title, an abstract, the record's active label flags, and its row index.
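To make the schema concrete, here is a minimal sketch of decoding one row of this dataset into its active subject labels. The column names come from the schema above; the `row` dict is a hypothetical example shaped like the records in this dump, and the helper name `active_labels` is an assumption, not part of the dataset.

```python
# Boolean label columns, in the order they appear in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(record: dict) -> list[str]:
    """Return the subject labels whose boolean flag is set in a row."""
    return [col for col in LABEL_COLUMNS if record.get(col)]

# Hypothetical row: flags not listed default to False via dict.get().
row = {
    "id": "2011.08508",
    "title": "Generalized Continual Zero-Shot Learning",
    "cs.CV": True,
}
print(active_labels(row))  # ['cs.CV']
```

The same helper works unchanged on rows loaded as dicts from any source (CSV, JSON, or a dataset library), since it only relies on per-column boolean lookups.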
2011.08508
Generalized Continual Zero-Shot Learning
Zero-shot learning (ZSL) has recently emerged as an exciting topic and attracted considerable attention. ZSL aims to classify unseen classes by transferring knowledge from seen classes to unseen classes based on class descriptions. Despite showing promising performance, ZSL approaches assume that training samples from all seen classes are available during training, which is often infeasible in practice. To address this issue, we propose a more generalized and practical setup for ZSL, continual ZSL (CZSL), in which classes arrive sequentially in the form of tasks and the model actively learns from a changing environment by leveraging past experience. Further, to enhance reliability, we develop CZSL for the single-head continual learning setting, where task identity is revealed during training but not during testing. To avoid catastrophic forgetting and intransigence, we use knowledge distillation and store and replay a few samples from previous tasks in a small episodic memory. We develop baselines and evaluate generalized CZSL on five ZSL benchmark datasets under two continual learning settings: with and without class-incremental learning. Moreover, CZSL is developed for two types of variational autoencoders, which generate two types of features for classification: (i) features generated in the output space and (ii) discriminative features generated in the latent space. The experimental results clearly indicate that single-head CZSL is more generalizable and suitable for practical applications.
labels: cs.CV
__index_level_0__: 206,898
1806.03798
Hyperviscosity-Based Stabilization for Radial Basis Function-Finite Difference (RBF-FD) Discretizations of Advection-Diffusion Equations
We present a novel hyperviscosity formulation for stabilizing RBF-FD discretizations of the advection-diffusion equation. The amount of hyperviscosity is determined quasi-analytically for commonly-used explicit, implicit, and implicit-explicit (IMEX) time integrators by using a simple 1D semi-discrete Von Neumann analysis. The analysis is applied to an analytical model of spurious growth in RBF-FD solutions that uses auxiliary differential operators mimicking the undesirable properties of RBF-FD differentiation matrices. The resulting hyperviscosity formulation is a generalization of existing ones in the literature, but is free of any tuning parameters and can be computed efficiently. To further improve robustness, we introduce a simple new scaling law for polynomial-augmented RBF-FD that relates the degree of polyharmonic spline (PHS) RBFs to the degree of the appended polynomial. When used in a novel ghost node formulation in conjunction with the recently-developed overlapped RBF-FD method, the resulting method is robust and free of stagnation errors. We validate the high-order convergence rates of our method on 2D and 3D test cases over a wide range of Peclet numbers (1-1000). We then use our method to solve a 3D coupled problem motivated by models of platelet aggregation and coagulation, again demonstrating high-order convergence rates.
labels: cs.CE, Other
__index_level_0__: 100,088
2307.03854
inTformer: A Time-Embedded Attention-Based Transformer for Crash Likelihood Prediction at Intersections Using Connected Vehicle Data
The real-time crash likelihood prediction model is an essential component of a proactive traffic safety management system. Over the years, numerous studies have attempted to construct crash likelihood prediction models to enhance traffic safety, but mostly on freeways. In the majority of existing studies, researchers have primarily employed deep learning-based frameworks to identify crash potential. Lately, the Transformer has emerged as a promising deep neural network that fundamentally operates through attention-based mechanisms. The Transformer has several functional benefits over existing deep learning models such as LSTMs and CNNs. Firstly, it can readily handle long-term dependencies in a data sequence. Secondly, it can process all elements in a data sequence in parallel during training. Finally, it does not suffer from the vanishing gradient issue. Realizing the immense potential of Transformers, this paper proposes inTersection-Transformer (inTformer), a time-embedded attention-based Transformer model that can effectively predict intersection crash likelihood in real time. The proposed model was evaluated using connected vehicle data extracted from the Signal Analytics Platform. Acknowledging the complex traffic operation mechanisms at intersections, this study developed zone-specific models by dividing the intersection region into two distinct zones: the within-intersection zone and the approach zone. The best inTformer models in the 'within-intersection' and 'approach' zones achieved sensitivities of 73% and 70%, respectively. The zone-level models were also compared to earlier studies on crash likelihood prediction at intersections and to several established deep learning models trained on the same connected vehicle dataset.
labels: cs.LG
__index_level_0__: 378,173
2305.01808
Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection
Researchers typically investigate neural network representations by examining activation outputs for one or more layers of a network. Here, we investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks. We utilize Representational Dissimilarity Matrices (RDMs) to investigate the coherence of data within the embedding spaces of a deep neural network. From each layer of a network, we extract and utilize bit vectors to construct similarity scores between images. From these similarity scores, we build a similarity matrix for a collection of images drawn from two classes. We then apply Fiedler partitioning to the associated Laplacian matrix to separate the classes. Our results indicate, through bit vector representations, that the network continues to refine class detectability, with the last ReLU layer achieving better than 95% separation accuracy. Additionally, we demonstrate that bit vectors aid in adversarial image detection, again achieving over 95% accuracy in separating adversarial and non-adversarial images using a simple classifier.
labels: cs.CV
__index_level_0__: 361,806
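The Fiedler-partitioning step described in the abstract above (build a similarity matrix, form its graph Laplacian, split by the sign of the second eigenvector) can be sketched with NumPy. The similarity matrix below is a toy block-structured example, not the paper's bit-vector scores:

```python
import numpy as np

def fiedler_partition(S: np.ndarray) -> np.ndarray:
    """Split items into two groups by the sign of the Fiedler vector
    (eigenvector of the second-smallest eigenvalue) of L = D - S,
    where S is a symmetric similarity matrix."""
    D = np.diag(S.sum(axis=1))            # degree matrix
    L = D - S                             # graph Laplacian
    _, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # Fiedler vector
    return fiedler >= 0                   # boolean group assignment

# Toy similarity matrix: two well-separated clusters of three items each.
S = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 1.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 1.0, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 1.0],
])
groups = fiedler_partition(S)
# The first three items land in one group, the last three in the other.
```

Because an eigenvector's sign is arbitrary, the two groups are consistent but which one is labeled True may vary between runs or platforms.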
2209.02021
When Robotics Meets Wireless Communications: An Introductory Tutorial
The importance of ground Mobile Robots (MRs) and Unmanned Aerial Vehicles (UAVs) within the research community, industry, and society is growing fast. Many of these agents are nowadays equipped with communication systems that are, in some cases, essential to successfully achieve certain tasks. In this context, we have begun to witness the development of a new interdisciplinary research field at the intersection of robotics and communications. This research field has been boosted by the intention of integrating UAVs within the 5G and 6G communication networks. This research will undoubtedly lead to many important applications in the near future. Nevertheless, one of the main obstacles to the development of this research area is that most researchers address these problems by oversimplifying either the robotics or the communications aspect. This impedes the ability of reaching the full potential of this new interdisciplinary research area. In this tutorial, we present some of the modelling tools necessary to address problems involving both robotics and communication from an interdisciplinary perspective. As an illustrative example of such problems, we focus in this tutorial on the issue of communication-aware trajectory planning.
labels: cs.RO, cs.SY
__index_level_0__: 316,078
cs/0702138
On the Maximal Diversity Order of Spatial Multiplexing with Transmit Antenna Selection
Zhang et al. recently derived upper and lower bounds on the achievable diversity of an N_R x N_T i.i.d. Rayleigh fading multiple antenna system using transmit antenna selection, spatial multiplexing, and a linear receiver structure. For the case of L = 2 transmitting (out of N_T available) antennas, the bounds are tight and therefore specify the maximal diversity order. For the general case with L <= min(N_R,N_T) transmitting antennas, it was conjectured that the maximal diversity is (N_T-L+1)(N_R-L+1), which coincides with the lower bound. Herein, we prove this conjecture for the zero forcing and zero forcing decision feedback (with optimal detection ordering) receiver structures.
labels: cs.IT
__index_level_0__: 540,188
1709.01894
Convolutional Gaussian Processes
We present a practical way of introducing convolutional structure into Gaussian processes, making them more suited to high-dimensional inputs like images. The main contribution of our work is the construction of an inter-domain inducing point approximation that is well-tailored to the convolutional kernel. This allows us to gain the generalisation benefit of a convolutional kernel, together with fast but accurate posterior inference. We investigate several variations of the convolutional kernel, and apply it to MNIST and CIFAR-10, which have both been known to be challenging for Gaussian processes. We also show how the marginal likelihood can be used to find an optimal weighting between convolutional and RBF kernels to further improve performance. We hope that this illustration of the usefulness of a marginal likelihood will help automate discovering architectures in larger models.
labels: cs.LG
__index_level_0__: 80,169
2302.01193
Imitating careful experts to avoid catastrophic events
Reinforcement learning (RL) is increasingly being used to control robotic systems that interact closely with humans. This interaction raises the problem of safe RL: how to ensure that an RL-controlled robotic system never, for instance, injures a human. This problem is especially challenging in rich, realistic settings where it is not even possible to clearly write down a reward function that incorporates these outcomes. In these circumstances, perhaps the only viable approach is inverse reinforcement learning (IRL), which infers rewards from human demonstrations. However, IRL is massively underdetermined, as many different rewards can lead to the same optimal policies; we show that this makes it difficult to distinguish catastrophic outcomes (such as injuring a human) from merely undesirable outcomes. Our key insight is that humans do display different behaviour when catastrophic outcomes are possible: they become much more careful. We incorporate carefulness signals into IRL, and find that they do indeed allow IRL to disambiguate undesirable from catastrophic outcomes, which is critical to ensuring safety in future real-world human-robot interactions.
labels: cs.LG, cs.RO
__index_level_0__: 343,517
2306.14649
CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory Circuit Macros with Low Bit-Width and Real Memory Materials
This paper presents a simulation platform, CIMulator, for quantifying the efficacy of various synaptic devices in neuromorphic accelerators for different neural network architectures. Nonvolatile memory devices, such as resistive random-access memory and ferroelectric field-effect transistors, as well as volatile static random-access memory devices, can be selected as synaptic devices. A multilayer perceptron and convolutional neural networks (CNNs), such as LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the effects of these synaptic devices on training and inference outcomes. The datasets used in the simulations are MNIST, CIFAR-10, and a white blood cell dataset. By applying batch normalization and appropriate optimizers in the training phase, neuromorphic systems with very low-bit-width or binary weights can achieve high pattern recognition rates that approach software-based CNN accuracy. We also introduce spiking neural networks with RRAM-based synaptic devices for the recognition of MNIST handwritten digits.
labels: cs.NE
__index_level_0__: 375,763
2106.02023
Slepian Scale-Discretised Wavelets on the Sphere
This work presents the construction of a novel spherical wavelet basis designed for incomplete spherical datasets, i.e. datasets with missing data in a particular region of the sphere. The eigenfunctions of the Slepian spatial-spectral concentration problem (the Slepian functions) are a set of orthogonal basis functions that are maximally concentrated within a defined region. Slepian functions allow one to compute a convolution on the incomplete sphere by leveraging the recently proposed sifting convolution and extending it to any set of basis functions. Through a tiling of the Slepian harmonic line, one may construct scale-discretised wavelets. An illustration is presented based on an example region on the sphere defined by the topographic map of the Earth. The Slepian wavelets and corresponding wavelet coefficients are constructed from this region and are used in a straightforward denoising example.
labels: cs.IT, Other
__index_level_0__: 238,702
0710.2268
Complexity of some Path Problems in DAGs and Linear Orders
We investigate the computational complexity of three natural path problems in directed acyclic graphs. We prove their NP-completeness and consider their restrictions to linear orders.
labels: cs.IT
__index_level_0__: 776
2104.01727
DSRC-Enabled Train Safety Communication System at Unmanned Crossings
Although wireless technology is available for safety-critical applications, few applications have been used to improve train crossing safety. To prevent potential collisions between trains and vehicles, we present a Dedicated Short-Range Communication (DSRC)-enabled train safety communication system intended for deployment at unmanned crossings. Since our application's purpose is preventing collisions between trains and vehicles, we present a method to calculate the minimum required warning time for a head-to-head collision at a train crossing. Furthermore, we define the best- and worst-case scenarios and provide practical measurements at six operating crossings in the U.S. with numerous system configurations, varying modulation scheme, transmission power, antenna type, train speed, and vehicle braking distance. From our measurements, we find that the warning application coverage range is independent of the train speed, that an omnidirectional antenna with high transmission power is the best configuration for our system, and that latency values are mostly less than 5 ms. We use the radio communication coverage to evaluate the time to avoid collision and introduce the safeness level metric. From the measured data, we observe that the DSRC-enabled train safety communication system is feasible for train speeds up to 35 mph, providing more than 25-30 s to avoid a collision for vehicle speeds of 25-65 mph. Higher train speeds are expected to be safe, but more measurements beyond the 200 m mark with respect to a crossing are needed for a definite conclusion.
labels: cs.SY
__index_level_0__: 228,452
1303.2184
Complex Support Vector Machines for Regression and Quaternary Classification
The paper presents a new framework for complex Support Vector Regression as well as Support Vector Machines for quaternary classification. The method exploits the notion of widely linear estimation to model the input-output relation for complex-valued data and considers two cases: a) the complex data are split into their real and imaginary parts and a typical real kernel is employed to map the complex data to a complexified feature space, and b) a pure complex kernel is used to directly map the data to the induced complex feature space. The recently developed Wirtinger's calculus on complex reproducing kernel Hilbert spaces (RKHS) is employed to compute the Lagrangian and derive the dual optimization problem. As one of our major results, we prove that any complex SVM/SVR task is equivalent to solving two real SVM/SVR tasks exploiting a specific real kernel generated by the chosen complex kernel. In particular, the case of pure complex kernels leads to the generation of new kernels, which have not been considered before. In the classification case, the proposed framework inherently splits the complex space into four parts. This leads naturally to solving the four-class task (quaternary classification), instead of the typical two classes of the real SVM. In turn, this rationale can be used in a multiclass problem as a split-class scenario based on four classes, as opposed to the one-versus-all method; this can lead to significant computational savings. Experiments demonstrate the effectiveness of the proposed framework for regression and classification tasks that involve complex data.
labels: cs.LG
__index_level_0__: 22,801
1911.11634
GBCNs: Genetic Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs
Training 1-bit deep convolutional neural networks (DCNNs) is one of the most challenging problems in computer vision, because 1-bit DCNNs are much more easily trapped in local minima than conventional DCNNs. The reason is that the binarized kernels and activations of 1-bit DCNNs cause a significant accuracy loss and training inefficiency. To address this problem, we propose Genetic Binary Convolutional Networks (GBCNs) to optimize 1-bit DCNNs, introducing a new Balanced Genetic Algorithm (BGA) to improve representational ability in an end-to-end framework. The BGA method modifies the binarization process of GBCNs to alleviate the local minima problem, which can significantly improve the performance of 1-bit DCNNs. We develop a new BGA module that is generic and flexible, and can be easily incorporated into existing DCNNs, such as WideResNets and ResNets. Extensive experiments on object classification tasks (CIFAR, ImageNet) validate the effectiveness of the proposed method. Notably, our method shows strong generalization on object recognition tasks, i.e., face recognition and person re-identification.
labels: cs.CV
__index_level_0__: 155,184
2410.12885
Exploiting Longitudinal Speech Sessions via Voice Assistant Systems for Early Detection of Cognitive Decline
Mild Cognitive Impairment (MCI) is an early stage of Alzheimer's disease (AD), a form of neurodegenerative disorder. Early identification of MCI is crucial for delaying its progression through timely interventions. Existing research has demonstrated the feasibility of detecting MCI using speech collected from clinical interviews or digital devices. However, these approaches typically analyze data collected at limited time points, limiting their ability to identify cognitive changes over time. This paper presents a longitudinal study using voice assistant systems (VAS) to remotely collect seven-session speech data at three-month intervals across 18 months. We propose two methods to improve MCI detection and the prediction of cognitive changes. The first method incorporates historical data, while the second predicts cognitive changes at two time points. Our results indicate improvements when incorporating historical data: the average F1-score for MCI detection improves from 58.6% to 71.2% (by 12.6%) in the case of acoustic features and from 62.1% to 75.1% (by 13.0%) in the case of linguistic features. Additionally, the prediction of cognitive changes achieves an F1-score of 73.7% in the case of acoustic features. These results confirm the potential of VAS-based speech sessions for early detection of cognitive decline.
labels: cs.CL
__index_level_0__: 499,268
2307.03135
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability
Large vision-language models have achieved outstanding performance, but their size and computational requirements make their deployment on resource-constrained devices and time-sensitive tasks impractical. Model distillation, the process of creating smaller, faster models that maintain the performance of larger models, is a promising direction towards a solution. This paper investigates the distillation of visual representations in large teacher vision-language models into lightweight student models using a small- or mid-scale dataset. Notably, this study focuses on open-vocabulary out-of-distribution (OOD) generalization, a challenging problem that has been overlooked in previous model distillation literature. We propose two principles from vision and language modality perspectives to enhance the student's OOD generalization: (1) better imitating the teacher's visual representation space and carefully promoting coherence in vision-language alignment with the teacher; (2) enriching the teacher's language representations with informative and fine-grained semantic attributes to effectively distinguish between different labels. We propose several metrics and conduct extensive experiments to investigate these techniques. The results demonstrate significant improvements in zero-shot and few-shot student performance on open-vocabulary out-of-distribution classification, highlighting the effectiveness of our proposed approaches. Poster: https://xuanlinli17.github.io/pdfs/iccv23_large_vlm_distillation_poster.pdf Code: https://github.com/xuanlinli17/large_vlm_distillation_ood
labels: cs.AI, cs.LG, cs.CL, cs.CV
__index_level_0__: 377,938
1811.10756
Learning with Stochastic Guidance for Navigation
Due to sparse rewards and a high degree of environment variation, reinforcement learning approaches such as Deep Deterministic Policy Gradient (DDPG) are plagued by high variance when applied in complex real-world environments. We present a new framework for overcoming these issues by incorporating a stochastic switch, allowing an agent to choose between high- and low-variance policies. The stochastic switch can be jointly trained with the original DDPG in the same framework. In this paper, we demonstrate the power of the framework in a navigation task, where the robot can dynamically choose to learn through exploration or to use the output of a heuristic controller as guidance. Instead of starting from completely random moves, the navigation capability of a robot can be quickly bootstrapped by several simple independent controllers. The experimental results show that with the aid of stochastic guidance we are able to effectively and efficiently train DDPG navigation policies and achieve significantly better performance than state-of-the-art baseline models.
labels: cs.RO
__index_level_0__: 114,571
1811.10052
An overview of deep learning in medical imaging focusing on MRI
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has received a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
labels: cs.LG, cs.CV
__index_level_0__: 114,396
2206.14530
Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use
Sufficiently perceiving the environment is a critical factor in robot motion generation. Although the introduction of deep visual processing models has contributed to extending this ability, existing methods lack the ability to actively modify what to perceive, something humans perform internally during visual cognitive processes. This paper addresses the issue by proposing a novel robot motion generation model inspired by a human cognitive structure. The model incorporates a state-driven active top-down visual attention module, which acquires attentions that can actively change targets based on task states. We term these role-based attentions, since the acquired attentions are directed to targets that share a coherent role throughout the motion. The model was trained on a robot tool-use task, in which the role-based attentions perceived the robot's grippers and the tool as identical end-effectors during object-picking and object-dragging motions, respectively. This is analogous to a biological phenomenon called tool-body assimilation, in which one regards a handled tool as an extension of one's body. The results suggest an improvement in the flexibility of the model's visual perception, which sustained stable attention and motion even when provided with untrained tools or exposed to the experimenter's distractions.
labels: cs.AI, cs.RO
__index_level_0__: 305,315
2112.05786
Guided Generative Models using Weak Supervision for Detecting Object Spatial Arrangement in Overhead Images
The increasing availability and accessibility of numerous overhead images allows us to estimate and assess the spatial arrangement of groups of geospatial target objects, which can benefit many applications, such as traffic monitoring and agricultural monitoring. Spatial arrangement estimation is the process of identifying the areas which contain the desired objects in overhead images. Traditional supervised object detection approaches can estimate accurate spatial arrangement but require large amounts of bounding box annotations. Recent semi-supervised clustering approaches can reduce manual labeling but still require annotations for all object categories in the image. This paper presents the target-guided generative model (TGGM), under the Variational Auto-encoder (VAE) framework, which uses Gaussian Mixture Models (GMM) to estimate the distributions of both hidden and decoder variables in the VAE. Modeling both hidden and decoder variables with GMMs significantly reduces the manual annotations required for spatial arrangement estimation. Unlike existing approaches, in which the training process can only update the GMM as a whole per optimization iteration (e.g., a "minibatch"), TGGM allows individual GMM components to be updated separately within the same optimization iteration. Optimizing GMM components separately allows TGGM to exploit the semantic relationships in spatial data and requires only a few labels to initiate and guide the generative process. Our experiments show that TGGM achieves results comparable to the state-of-the-art semi-supervised methods and outperforms unsupervised methods by 10% in $F_{1}$ score, while requiring significantly fewer labeled data.
labels: cs.AI, cs.CV
__index_level_0__: 270,954
2403.05534
Bayesian Preference Elicitation with Language Models
Aligning AI systems to users' interests requires understanding and incorporating humans' complex values and preferences. Recently, language models (LMs) have been used to gather information about the preferences of human users. This preference data can be used to fine-tune or guide other LMs and/or AI systems. However, LMs have been shown to struggle with crucial aspects of preference learning: quantifying uncertainty, modeling human mental states, and asking informative questions. These challenges have been addressed in other areas of machine learning, such as Bayesian Optimal Experimental Design (BOED), which focuses on designing informative queries within a well-defined feature space. But these methods, in turn, are difficult to scale and apply to real-world problems where simply identifying the relevant features can be difficult. We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses BOED to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions. By combining the flexibility of LMs with the rigor of BOED, OPEN can optimize the informativity of queries while remaining adaptable to real-world domains. In user studies, we find that OPEN outperforms existing LM- and BOED-based methods for preference elicitation.
labels: cs.CL
__index_level_0__: 436,041
1912.06809
Operator splitting schemes for American options under the two-asset Merton jump-diffusion model
This paper deals with the efficient numerical solution of the two-dimensional partial integro-differential complementarity problem (PIDCP) that holds for the value of American-style options under the two-asset Merton jump-diffusion model. We consider the adaptation of various operator splitting schemes of both the implicit-explicit (IMEX) and the alternating direction implicit (ADI) kind that have recently been studied for partial integro-differential equations (PIDEs) in [3]. Each of these schemes conveniently treats the nonlocal integral part in an explicit manner. Their adaptation to PIDCPs is achieved through a combination with the Ikonen-Toivanen splitting technique [14] as well as with the penalty method [32]. The convergence behaviour and relative performance of the acquired eight operator splitting methods is investigated in extensive numerical experiments for American put-on-the-min and put-on-the-average options.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
157,434
1109.0418
Tropical Algebraic approach to Consensus over Networks
In this paper we study the convergence of the max-consensus protocol. Tropical algebra is used to formulate the problem. Necessary and sufficient conditions for convergence of the max-consensus protocol over fixed as well as switching topology networks are given.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
11,936
2407.13777
DIR-BHRNet: A Lightweight Network for Real-time Vision-based Multi-person Pose Estimation on Smartphones
Human pose estimation (HPE), particularly multi-person pose estimation (MPPE), has been applied in many domains such as human-machine systems. However, current MPPE methods generally run on powerful GPU systems and incur high computational costs. Real-time MPPE on mobile devices with low-performance computing is a challenging task. In this paper, we propose a lightweight neural network, DIR-BHRNet, for real-time MPPE on smartphones. In DIR-BHRNet, we design a novel lightweight convolutional module, Dense Inverted Residual (DIR), to improve accuracy by adding a depthwise convolution and a shortcut connection into the well-known Inverted Residual, and a novel efficient neural network structure, Balanced HRNet (BHRNet), to reduce computational costs by reconfiguring the proper number of convolutional blocks on each branch. We evaluate DIR-BHRNet on the well-known COCO and CrowdPose datasets. The results show that DIR-BHRNet outperforms the state-of-the-art methods in terms of accuracy at a real-time computational cost. Finally, we implement DIR-BHRNet on current mainstream Android smartphones, on which it runs at more than 10 FPS. The freely usable executable file (Android 10), source code, and a video description of this work are publicly available to facilitate the development of real-time MPPE on smartphones.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,520
1405.6173
An Effective Evolutionary Clustering Algorithm: Hepatitis C Case Study
Clustering analysis plays an important role in scientific research and commercial applications. The K-means algorithm is a widely used partitioning method in clustering. However, it is known that the K-means algorithm may get stuck at suboptimal solutions, depending on the choice of the initial cluster centers. In this article, we propose a technique to handle large-scale data that selects initial cluster centers purposefully using Genetic Algorithms (GAs), reduces the sensitivity to isolated points, avoids dissevering big clusters, and to some degree overcomes the skew caused by the disproportion in data partitioning owing to the adoption of multi-sampling. We applied our method to some public datasets, which show the advantages of the proposed approach; for example, the Hepatitis C dataset taken from the machine learning repository of the University of California. Our aim is to evaluate the hepatitis dataset. In order to evaluate this dataset we performed some preprocessing; the reason for preprocessing is to summarize the data in the way best suited to our algorithm. Missing values of the instances are adjusted using a local mean method.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
33,352
2402.07506
NeuralSentinel: Safeguarding Neural Network Reliability and Trustworthiness
The usage of Artificial Intelligence (AI) systems has increased exponentially, thanks to their ability to reduce the amount of data to be analyzed and the user effort while preserving a high rate of accuracy. However, introducing this new element in the loop has turned these systems into attack points that can compromise their reliability. This new scenario has raised crucial challenges regarding the reliability and trustworthiness of AI models, as well as the uncertainty in their response decisions, which become even more crucial when applied in critical domains such as healthcare, chemical plants, electrical plants, etc. To contain these issues, in this paper we present NeuralSentinel (NS), a tool able to validate the reliability and trustworthiness of AI models. This tool combines attack and defence strategies and explainability concepts to stress an AI model and help non-expert staff increase their confidence in this new system by understanding the model's decisions. NS provides a simple and easy-to-use interface that gives humans in the loop all the needed information. This tool was deployed and used in a hackathon event to evaluate the reliability of a skin cancer image detector. During the event, experts and non-experts attacked and defended the detector, learning which factors were the most important for model misclassification and which techniques were the most efficient. The event was also used to detect NS's limitations and gather feedback for further improvements.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
428,749
1706.06702
Using Convolutional Neural Networks in Robots with Limited Computational Resources: Detecting NAO Robots while Playing Soccer
The main goal of this paper is to analyze the general problem of using Convolutional Neural Networks (CNNs) in robots with limited computational capabilities, and to propose general design guidelines for their use. In addition, two different CNN based NAO robot detectors that are able to run in real-time while playing soccer are proposed. One of the detectors is based on the XNOR-Net and the other on the SqueezeNet. Each detector is able to process a robot object-proposal in ~1ms, with an average number of 1.5 proposals per frame obtained by the upper camera of the NAO. The obtained detection rate is ~97%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
75,726
2305.06426
Planning a Community Approach to Diabetes Care in Low- and Middle-Income Countries Using Optimization
Diabetes is a global health priority, especially in low- and-middle-income countries, where over 50% of premature deaths are attributed to high blood glucose. Several studies have demonstrated the feasibility of using Community Health Worker (CHW) programs to provide affordable and culturally tailored solutions for early detection and management of diabetes. Yet, scalable models to design and implement CHW programs while accounting for screening, management, and patient enrollment decisions have not been proposed. We introduce an optimization framework to determine personalized CHW visits that maximize glycemic control at a community-level. Our framework explicitly models the trade-off between screening new patients and providing management visits to individuals who are already enrolled in treatment. We account for patients' motivational states, which affect their decisions to enroll or drop out of treatment and, therefore, the effectiveness of the intervention. We incorporate these decisions by modeling patients as utility-maximizing agents within a bi-level provider problem that we solve using approximate dynamic programming. By estimating patients' health and motivational states, our model builds visit plans that account for patients' tradeoffs when deciding to enroll in treatment, leading to reduced dropout rates and improved resource allocation. We apply our approach to generate CHW visit plans using operational data from a social enterprise serving low-income neighborhoods in urban areas of India. Through extensive simulation experiments, we find that our framework requires up to 73.4% less capacity than the best naive policy to achieve the same performance in terms of glycemic control. Our experiments also show that our solution algorithm can improve upon naive policies by up to 124.5% using the same CHW capacity.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
363,528
1904.05160
Large-Scale Long-Tailed Recognition in an Open World
Real world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon a never-seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set which includes head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and open-set recognition in one integrated algorithm, whereas existing classification approaches focus only on one aspect and deliver poorly over the entire class spectrum. The key challenges are how to share visual knowledge between head and tail classes and how to reduce confusion between tail and open classes. We develop an integrated OLTR algorithm that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a direct image feature and an associated memory feature, with the feature norm indicating the familiarity to known classes. On three large-scale OLTR datasets we curate from object-centric ImageNet, scene-centric Places, and face-centric MS1M data, our method consistently outperforms the state-of-the-art. Our code, datasets, and models enable future OLTR research and are publicly available at https://liuziwei7.github.io/projects/LongTail.html.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
127,226
2106.07451
Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions
Teaching Graph Neural Networks (GNNs) to accurately classify nodes under severely noisy labels is an important problem in real-world graph learning applications, but is currently underexplored. Although pairwise training methods have demonstrated promise in supervised metric learning and unsupervised contrastive learning, they remain less studied on noisy graphs, where the structural pairwise interactions (PI) between nodes are abundant and thus might benefit label noise learning rather than the pointwise methods. This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels. Our proposed framework PI-GNN contributes two novel components: (1) a confidence-aware PI estimation model that adaptively estimates the PI labels, which are defined as whether the two nodes share the same node labels, and (2) a decoupled training approach that leverages the estimated PI labels to regularize a node classification model for robust node classification. Extensive experiments on different datasets and GNN architectures demonstrate the effectiveness of PI-GNN, yielding a promising improvement over the state-of-the-art methods. Code is publicly available at https://github.com/TianBian95/pi-gnn.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
240,927
2312.10742
Exploring Sound vs Vibration for Robust Fault Detection on Rotating Machinery
Robust and real-time detection of faults on rotating machinery has become an ultimate objective for predictive maintenance in various industries. Vibration-based Deep Learning (DL) methodologies have become the de facto standard for bearing fault detection as they can produce state-of-the-art detection performances under certain conditions. Despite such particular focus on the vibration signal, the utilization of sound, on the other hand, has been neglected, whilst only a few studies have been proposed during the last two decades, all of which were based on a conventional ML approach. One major reason is the lack of a benchmark dataset providing a large volume of both vibration and sound data over several working conditions for different machines and sensor locations. In this study, we address this need by presenting the new benchmark Qatar University Dual-Machine Bearing Fault Benchmark dataset (QU-DMBF), which encapsulates sound and vibration data from two different motors operating under 1080 working conditions overall. Then we draw the focus on the major limitations and drawbacks of vibration-based fault detection due to numerous installation and operational conditions. Finally, we propose the first DL approach for sound-based fault detection and perform comparative evaluations between sound and vibration over the QU-DMBF dataset. A wide range of experimental results shows that the sound-based fault detection method is significantly more robust than its vibration-based counterpart, as it is entirely independent of the sensor location, cost-effective (requiring neither a sensor nor sensor maintenance), and can achieve the same best detection performance as its vibration-based counterpart. With this study, the QU-DMBF dataset, the optimized source codes in PyTorch, and comparative evaluations are now publicly shared.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
416,301
1809.05397
Energy Efficient Multi-User MISO Communication using Low Resolution Large Intelligent Surfaces
We consider a multi-user Multiple-Input Single-Output (MISO) communication system comprising of a multi-antenna base station communicating in the downlink simultaneously with multiple single-antenna mobile users. This communication is assumed to be assisted by a Large Intelligent Surface (LIS) that consists of many nearly passive antenna elements, whose parameters can be tuned according to desired objectives. The latest design advances on these surfaces suggest cheap elements effectively acting as low resolution (even $1$-bit resolution) phase shifters, whose joint configuration affects the electromagnetic behavior of the wireless propagation channel. In this paper, we investigate the suitability of LIS for green communications in terms of Energy Efficiency (EE), which is expressed as the number of bits per Joule. In particular, for the considered multi-user MISO system, we design the transmit powers per user and the values for the surface elements that jointly maximize the system's EE performance. Our representative simulation results show that LIS-assisted communication, even with nearly passive $1$-bit resolution antenna elements, provides significant EE gains compared to conventional relay-assisted communication.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
107,786
2101.11177
LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction
Open Information Extraction (OIE) systems seek to compress the factual propositions of a sentence into a series of n-ary tuples. These tuples are useful for downstream tasks in natural language processing like knowledge base creation, textual entailment, and natural language understanding. However, current OIE datasets are limited in both size and diversity. We introduce a new dataset by converting the QA-SRL 2.0 dataset to a large-scale OIE dataset (LSOIE). Our LSOIE dataset is 20 times larger than the next largest human-annotated OIE dataset. We construct and evaluate several benchmark OIE models on LSOIE, providing baselines for future improvements on the task. Our LSOIE data, models, and code are made publicly available.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
217,181
1002.4062
Modelling and Analysis of Biochemical Signalling Pathway Cross-talk
Signalling pathways are abstractions that help life scientists structure the coordination of cellular activity. Cross-talk between pathways accounts for many of the complex behaviours exhibited by signalling pathways and is often critical in producing the correct signal-response relationship. Formal models of signalling pathways and cross-talk in particular can aid understanding and drive experimentation. We define an approach to modelling based on the concept that a pathway is the (synchronising) parallel composition of instances of generic modules (with internal and external labels). Pathways are then composed by (synchronising) parallel composition and renaming; different types of cross-talk result from different combinations of synchronisation and renaming. We define a number of generic modules in PRISM and five types of cross-talk: signal flow, substrate availability, receptor function, gene expression and intracellular communication. We show that Continuous Stochastic Logic properties can both detect and distinguish the types of cross-talk. The approach is illustrated with small examples and an analysis of the cross-talk between the TGF-β/BMP, WNT and MAPK pathways.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
5,756
2105.13020
On the Globalization of the QAnon Conspiracy Theory Through Telegram
QAnon is a far-right conspiracy theory that became popular and mainstream over the past few years. Worryingly, the QAnon conspiracy theory has implications in the real world, with supporters of the theory participating in real-world violent acts like the US capitol attack in 2021. At the same time, the QAnon theory started evolving into a global phenomenon by attracting followers across the globe and, in particular, in Europe. Therefore, it is imperative to understand how the QAnon theory became a worldwide phenomenon and how this dissemination has been happening in the online space. This paper performs a large-scale data analysis of QAnon through Telegram by collecting 4.5M messages posted in 161 QAnon groups/channels. Using Google's Perspective API, we analyze the toxicity of QAnon content across languages and over time. Also, using a BERT-based topic modeling approach, we analyze the QAnon discourse across multiple languages. Among other things, we find that the German language is prevalent in QAnon groups/channels on Telegram, even overshadowing English after 2020. Also, we find that content posted in German and Portuguese tends to be more toxic compared to English. Our topic modeling indicates that QAnon supporters discuss various topics of interest within far-right movements, including world politics, conspiracy theories, COVID-19, and the anti-vaccination movement. Taken all together, we perform the first multilingual study on QAnon through Telegram and paint a nuanced overview of the globalization of the QAnon theory.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
237,180
2412.17165
Survey on Abstractive Text Summarization: Dataset, Models, and Metrics
The advancements in deep learning, particularly the introduction of transformers, have been pivotal in enhancing various natural language processing (NLP) tasks. These include text-to-text applications such as machine translation, text classification, and text summarization, as well as data-to-text tasks like response generation and image-to-text tasks such as captioning. Transformer models are distinguished by their attention mechanisms, pretraining on general knowledge, and fine-tuning for downstream tasks. This has led to significant improvements, particularly in abstractive summarization, where sections of a source document are paraphrased to produce summaries that closely resemble human expression. The effectiveness of these models is assessed using diverse metrics, encompassing techniques like semantic overlap and factual correctness. This survey examines the state of the art in text summarization models, with a specific focus on the abstractive summarization approach. It reviews various datasets and evaluation metrics used to measure model performance. Additionally, it includes the results of test cases using abstractive summarization models to underscore the advantages and limitations of contemporary transformer-based models. The source codes and the data are available at https://github.com/gospelnnadi/Text-Summarization-SOTA-Experiment.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
519,859
1605.04056
Causal Discovery for Manufacturing Domains
Yield and quality improvement is of paramount importance to any manufacturing company. One of the ways of improving yield is through discovery of the root causal factors affecting yield. We propose the use of data-driven interpretable causal models to identify key factors affecting yield. We focus on factors that are measured in different stages of production and testing in the manufacturing cycle of a product. We apply causal structure learning techniques on real data collected from this line. Specifically, the goal of this work is to learn interpretable causal models from observational data produced by manufacturing lines. Emphasis has been given to the interpretability of the models to make them actionable in the field of manufacturing. We highlight the challenges presented by assembly line data and propose ways to alleviate them. We also identify unique characteristics of data originating from assembly lines and how to leverage them in order to improve causal discovery. Standard evaluation techniques for causal structure learning show that the learned causal models seem to closely represent the underlying latent causal relationship between different factors in the production process. These results were also validated by manufacturing domain experts who found them promising. This work demonstrates how data mining and knowledge discovery can be used for root cause analysis in the domain of manufacturing and connected industry.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
55,822
2406.02919
MultifacetEval: Multifaceted Evaluation to Probe LLMs in Mastering Medical Knowledge
Large language models (LLMs) have excelled across domains, also delivering notable performance on the medical evaluation benchmarks, such as MedQA. However, there still exists a significant gap between the reported performance and the practical effectiveness in real-world medical scenarios. In this paper, we aim to explore the causes of this gap by employing a multifaceted examination schema to systematically probe the actual mastery of medical knowledge by current LLMs. Specifically, we develop a novel evaluation framework MultifacetEval to examine the degree and coverage of LLMs in encoding and mastering medical knowledge at multiple facets (comparison, rectification, discrimination, and verification) concurrently. Based on the MultifacetEval framework, we construct two multifaceted evaluation datasets: MultiDiseK (by producing questions from a clinical disease knowledge base) and MultiMedQA (by rephrasing each question from a medical benchmark MedQA into multifaceted questions). The experimental results on these multifaceted datasets demonstrate that the extent of current LLMs in mastering medical knowledge is far below their performance on existing medical benchmarks, suggesting that they lack depth, precision, and comprehensiveness in mastering medical knowledge. Consequently, current LLMs are not yet ready for application in real-world medical tasks. The codes and datasets are available at https://github.com/THUMLP/MultifacetEval.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
461,005
2109.02038
NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization
Recent advances on Out-of-Distribution (OoD) generalization reveal the robustness of deep learning models against distribution shifts. However, existing works focus on OoD algorithms, such as invariant risk minimization, domain generalization, or stable learning, without considering the influence of deep model architectures on OoD generalization, which may lead to sub-optimal performance. Neural Architecture Search (NAS) methods search for architecture based on its performance on the training data, which may result in poor generalization for OoD tasks. In this work, we propose robust Neural Architecture Search for OoD generalization (NAS-OoD), which optimizes the architecture with respect to its performance on generated OoD data by gradient descent. Specifically, a data generator is learned to synthesize OoD data by maximizing losses computed by different neural architectures, while the goal for architecture search is to find the optimal architecture parameters that minimize the synthetic OoD data losses. The data generator and the neural architecture are jointly optimized in an end-to-end manner, and the minimax training process effectively discovers robust architectures that generalize well for different distribution shifts. Extensive experimental results show that NAS-OoD achieves superior performance on various OoD generalization benchmarks with deep models having far fewer parameters. In addition, on a real industry dataset, the proposed NAS-OoD method reduces the error rate by more than 70% compared with the state-of-the-art method, demonstrating the proposed method's practicality for real applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
253,612
1804.00492
Regional Priority Based Anomaly Detection using Autoencoders
In recent times, autoencoders, besides being used for compression, have proven quite useful for regenerating similar images and helping with image denoising. They have also been explored for anomaly detection in a few cases. However, due to the location-invariance property of convolutional neural networks, autoencoders tend to learn from, or search for learned features in, the complete image. This creates issues when all the items in the image are not equally important and their location matters. For such cases, a semi-supervised solution, the regional priority based autoencoder (RPAE), has been proposed. In this model, similar to object detection models, a region proposal network identifies the relevant areas in the images as belonging to one of the predefined categories, and then those bounding boxes are fed into the appropriate decoder based on the category they belong to. Finally, the error scores from all the decoders are combined based on their importance to provide the total reconstruction error.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
94,043
1603.08984
SMASH: Physics-guided Reconstruction of Collisions from Videos
Collision sequences are commonly used in games and entertainment to add drama and excitement. Authoring even two body collisions in the real world can be difficult, as one has to get timing and the object trajectories to be correctly synchronized. After tedious trial-and-error iterations, when objects can actually be made to collide, then they are difficult to capture in 3D. In contrast, synthetically generating plausible collisions is difficult as it requires adjusting different collision parameters (e.g., object mass ratio, coefficient of restitution, etc.) and appropriate initial parameters. We present SMASH to read off appropriate collision parameters directly from raw input video recordings. Technically we enable this by utilizing laws of rigid body collision to regularize the problem of lifting 2D trajectories to a physically valid 3D reconstruction of the collision. The reconstructed sequences can then be modified and combined to easily author novel and plausible collisions. We evaluate our system on a range of synthetic scenes and demonstrate the effectiveness of our method by accurately reconstructing several complex real world collision events.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
53,854
1510.05555
Shape Expressions Schemas
We present Shape Expressions (ShEx), an expressive schema language for RDF designed to provide a high-level, user-friendly syntax with intuitive semantics. ShEx allows one to describe the vocabulary and the structure of an RDF graph, and to constrain the allowed values for the properties of a node. It includes an algebraic grouping operator, a choice operator, cardinality constraints for the number of allowed occurrences of a property, and negation. We define the semantics of the language and illustrate it with examples. We then present a validation algorithm that, given a node in an RDF graph and a constraint defined by the ShEx schema, allows one to check whether the node satisfies that constraint. The algorithm outputs a proof that contains trivially verifiable associations of nodes and the constraints that they satisfy. The structure can be used for complex post-processing tasks, such as transforming the RDF graph to other graph or tree structures, verifying more complex constraints, or debugging (w.r.t. the schema). We also show the inherent difficulty of error identification in ShEx.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
48,033
1912.05864
Totally Deep Support Vector Machines
Support vector machines (SVMs) have been successful in solving many computer vision tasks including image and video category recognition, especially for small and mid-scale training problems. The principle of these non-parametric models is to learn hyperplanes that separate data belonging to different classes while maximizing their margins. However, SVMs constrain the learned hyperplanes to lie in the span of support vectors, fixed/taken from training data, and this reduces their representational power and may lead to limited generalization performances. In this paper, we relax this constraint and allow the support vectors to be learned (instead of being fixed/taken from training data) in order to better fit a given classification task. Our approach, referred to as deep total variation support vector machines, is parametric and relies on a novel deep architecture that learns not only the SVM and the kernel parameters but also the support vectors, resulting in highly effective classifiers. We also show (under a particular setting of the activation functions in this deep architecture) that a large class of kernels and their combinations can be learned. Experiments conducted on the challenging task of skeleton-based action recognition show the outperformance of our deep total variation SVMs w.r.t. different baselines as well as the related work.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,207
2008.02775
Forecasting Photovoltaic Power Production using a Deep Learning Sequence to Sequence Model with Attention
Rising penetration levels of (residential) photovoltaic (PV) power as distributed energy resource pose a number of challenges to the electricity infrastructure. High quality, general tools to provide accurate forecasts of power production are urgently needed. In this article, we propose a supervised deep learning model for end-to-end forecasting of PV power production. The proposed model is based on two seminal concepts that led to significant performance improvements of deep learning approaches in other sequence-related fields, but not yet in the area of time series prediction: the sequence to sequence architecture and attention mechanism as a context generator. The proposed model leverages numerical weather predictions and high-resolution historical measurements to forecast a binned probability distribution over the prognostic time intervals, rather than the expected values of the prognostic variable. This design offers significant performance improvements compared to common baseline approaches, such as fully connected neural networks and one-block long short-term memory architectures. Using normalized root mean square error based forecast skill score as a performance indicator, the proposed approach is compared to other models. The results show that the new design performs at or above the current state of the art of PV power forecasting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
190,708
2208.05802
Stability Analysis of a Class of Discontinuous Discrete-Time Systems
The stability analysis of a class of discontinuous discrete-time systems is studied in this paper. The system under study is modeled as a feedback interconnection of a linear system and a set-valued nonlinearity. An equivalent representation, based on a constrained optimization problem, is proposed to represent the set-valued nonlinearity via a collection of linear and quadratic constraints. Relying on this description and on the use of generalized quadratic set-valued Lyapunov functions, sufficient conditions in the form of linear matrix inequalities for global exponential stability are obtained. Numerical examples corroborate the theoretical findings.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
312,513
2212.10747
Performance Analysis of LOS THz Systems under Misalignment and Deterministic Fading
Line-of-sight (LOS) wireless communication at terahertz (THz) frequency bands is envisioned to play a major role in defining next-generation wireless technologies. This work analyzes the performance of a potential LOS THz system experiencing propagation loss and misaligned antenna beams. The THz channel particularities are discussed in terms of deterministic path loss, molecular absorption effect and stochastic fading due to antenna pointing errors. Assuming phase shift keying (PSK) modulation schemes, simplified analytical expressions are approximated for computing symbol error rate (SER) of the proposed THz system. Monte Carlo simulations are applied to verify theoretical model accuracy over various transmission distances and misalignment scenarios. The derived SER formulas match simulation results for signal-to-noise ratio (SNR) above 35 dB at transmission distance up to 100 m and antenna displacement jitter variance of 0.05 $m^2$. In general, the theoretical model mismatch does not exceed 2 dB for lower SNR levels.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
337,587
2111.14724
Encoding Causal Macrovariables
In many scientific disciplines, coarse-grained causal models are used to explain and predict the dynamics of more fine-grained systems. Naturally, such models require appropriate macrovariables. Automated procedures to detect suitable variables would be useful to leverage increasingly available high-dimensional observational datasets. This work introduces a novel algorithmic approach that is inspired by a new characterisation of causal macrovariables as information bottlenecks between microstates. Its general form can be adapted to address individual needs of different scientific goals. After a further transformation step, the causal relationships between learned variables can be investigated through additive noise models. Experiments on both simulated data and on a real climate dataset are reported. In a synthetic dataset, the algorithm robustly detects the ground-truth variables and correctly infers the causal relationships between them. In a real climate dataset, the algorithm robustly detects two variables that correspond to the two known variations of the El Niño phenomenon.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
268,681
1806.05272
Benchmarks for Image Classification and Other High-dimensional Pattern Recognition Problems
A good classification method should yield more accurate results than simple heuristics. But there are classification problems, especially high-dimensional ones like the ones based on image/video data, for which simple heuristics can work quite accurately; the structure of the data in such problems is easy to uncover without any sophisticated or computationally expensive method. On the other hand, some problems have a structure that can only be found with sophisticated pattern recognition methods. We are interested in quantifying the difficulty of a given high-dimensional pattern recognition problem. We consider the case where the patterns come from two pre-determined classes and where the objects are represented by points in a high-dimensional vector space. However, the framework we propose is extendable to an arbitrarily large number of classes. We propose classification benchmarks based on simple random projection heuristics. Our benchmarks are 2D curves parameterized by the classification error and computational cost of these simple heuristics. Each curve divides the plane into a "positive-gain" and a "negative-gain" region. The latter contains methods that are ill-suited for the given classification problem. The former is divided into two by the curve asymptote; methods that lie in the small region under the curve but right of the asymptote merely provide a computational gain but no structural advantage over the random heuristics. We prove that the curve asymptotes are optimal (i.e. at Bayes error) in some cases, and thus no sophisticated method can provide a structural advantage over the random heuristics. Such classification problems, an example of which we present in our numerical experiments, provide poor ground for testing new pattern classification methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
100,432
2311.18344
DSeg: Direct Line Segments Detection
This paper presents a model-driven approach to detect image line segments. The approach incrementally detects segments on the gradient image using a linear Kalman filter that estimates the supporting line parameters and their associated variances. The algorithm is fast and robust with respect to image noise and illumination variations, it allows the detection of longer line segments than data-driven approaches, and does not require any tedious parameter tuning. An extension of the algorithm that exploits a pyramidal approach to enhance the quality of results is proposed. Results with varying scene illumination and comparisons to classic existing approaches are presented.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
411,651
2403.02998
Towards Calibrated Deep Clustering Network
Deep clustering has exhibited remarkable performance; however, the over-confidence problem, i.e., the estimated confidence for a sample belonging to a particular cluster greatly exceeds its actual prediction accuracy, has been overlooked in prior research. To tackle this critical issue, we pioneer the development of a calibrated deep clustering framework. Specifically, we propose a novel dual-head (calibration head and clustering head) deep clustering model that can effectively calibrate the estimated confidence and the actual accuracy. The calibration head adjusts the overconfident predictions of the clustering head, generating prediction confidence that matches the model learning status. Then, the clustering head dynamically selects reliable high-confidence samples estimated by the calibration head for pseudo-label self-training. Additionally, we introduce an effective network initialization strategy that enhances both training speed and network robustness. The effectiveness of the proposed calibration approach and initialization strategy are both endorsed with solid theoretical guarantees. Extensive experiments demonstrate that the proposed calibrated deep clustering model not only surpasses state-of-the-art deep clustering methods by 10 times in terms of expected calibration error but also significantly outperforms them in terms of clustering accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
435,032
1003.4994
Weak Decoupling Duality and Quantum Identification
If a quantum system is subject to noise, it is possible to perform quantum error correction reversing the action of the noise if and only if no information about the system's quantum state leaks to the environment. In this article, we develop an analogous duality in the case that the environment approximately forgets the identity of the quantum state, a weaker condition satisfied by epsilon-randomizing maps and approximate unitary designs. Specifically, we show that the environment approximately forgets quantum states if and only if the original channel approximately preserves pairwise fidelities of pure inputs, an observation we call weak decoupling duality. Using this tool, we then go on to study the task of using the output of a channel to simulate restricted classes of measurements on a space of input states. The case of simulating measurements that test whether the input state is an arbitrary pure state is known as equality testing or quantum identification. An immediate consequence of weak decoupling duality is that the ability to perform quantum identification cannot be cloned. We furthermore establish that the optimal amortized rate at which quantum states can be identified through a noisy quantum channel is equal to the entanglement-assisted classical capacity of the channel, despite the fact that the task is quantum, not classical, and entanglement-assistance is not allowed. In particular, this rate is strictly positive for every non-constant quantum channel, including classical channels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,009
1908.09800
Learning Action Models from Disordered and Noisy Plan Traces
There is increasing awareness in the planning community that the burden of specifying complete domain models is too high, which impedes the applicability of planning technology in many real-world domains. Although there are many learning systems that help automatically learn domain models, most existing work assumes that the input traces are completely correct. A more realistic situation is that the plan traces are disordered and noisy, such as plan traces described by natural language. In this paper we propose and evaluate an approach for learning action models from such traces. Our approach takes as input a set of plan traces with disordered actions and noise and outputs action models that can best explain the plan traces. We use a MAX-SAT framework for learning, where the constraints are derived from the given plan traces. Unlike traditional action model learners, the states in plan traces can be partially observable and noisy, and the actions in plan traces can be disordered and parallel. We demonstrate the effectiveness of our approach through a systematic empirical evaluation with both IPC domains and the real-world dataset extracted from natural language documents.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
142,936
1910.00107
Q-learning for POMDP: An application to learning locomotion gaits
This paper presents a Q-learning framework for learning optimal locomotion gaits in robotic systems modeled as coupled rigid bodies. Inspired by prevalence of periodic gaits in bio-locomotion, an open loop periodic input is assumed to (say) affect a nominal gait. The learning problem is to learn a new (modified) gait by using only partial noisy measurements of the state. The objective of learning is to maximize a given reward modeled as an objective function in optimal control settings. The proposed control architecture has three main components: (i) Phase modeling of dynamics by a single phase variable; (ii) A coupled oscillator feedback particle filter to represent the posterior distribution of the phase conditioned on the sensory measurements; and (iii) A Q-learning algorithm to learn the approximate optimal control law. The architecture is illustrated with the aid of a planar two-body system. The performance of the learning is demonstrated in a simulation environment.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
147,576
2410.11063
Towards the methodology for solving the minimum enclosing ball and related problems
Methodology is provided towards the solution of the minimum enclosing ball problem. This problem concerns the determination of the unique spherical surface of smallest radius enclosing a given bounded set in the d-dimensional Euclidean space. Mathematical formulation and typical methods for solving this problem are presented. Also, the paper is focused on areas that are related to this problem, namely: (a) promise problems and property testing, (b) theorems for partitioning and enclosing (covering) a set, and (c) computation of the diameter of a set.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
498,363
1410.7787
Correcting Errors in Digital Lexicographic Resources Using a Dictionary Manipulation Language
We describe a paradigm for combining manual and automatic error correction of noisy structured lexicographic data. Modifications to the structure and underlying text of the lexicographic data are expressed in a simple, interpreted programming language. Dictionary Manipulation Language (DML) commands identify nodes by unique identifiers, and manipulations are performed using simple commands such as create, move, set text, etc. Corrected lexicons are produced by applying sequences of DML commands to the source version of the lexicon. DML commands can be written manually to repair one-off errors or generated automatically to correct recurring problems. We discuss advantages of the paradigm for the task of editing digital bilingual dictionaries.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
37,100
1612.02742
Joint Hand Detection and Rotation Estimation by Using CNN
Hand detection is essential for many hand related tasks, e.g. parsing hand pose, understanding gesture, which are extremely useful for robotics and human-computer interaction. However, hand detection in uncontrolled environments is challenging due to the flexibility of the wrist joint and cluttered backgrounds. We propose a deep learning based approach which detects hands and calibrates in-plane rotation under supervision at the same time. To guarantee the recall, we propose a context aware proposal generation algorithm which significantly outperforms the selective search. We then design a convolutional neural network (CNN) which handles object rotation explicitly to jointly solve the object detection and rotation estimation tasks. Experiments show that our method achieves better results than state-of-the-art detection models on widely-used benchmarks such as Oxford and Egohands database. We further show that rotation estimation and classification can mutually benefit each other.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
65,279
1906.08839
Autonomous Navigation of MAVs in Unknown Cluttered Environments
This paper presents an autonomous navigation framework for reaching a goal in unknown 3D cluttered environments. The framework consists of three main components. First, a computationally efficient method for mapping the environment from the disparity measurements obtained from a depth sensor. Second, a stochastic method to generate a path to a given goal, taking into account field of view constraints on the space that is assumed to be safe for navigation. Third, a fast method for the online generation of motion plans, taking into account the robot's dynamic constraints, model, and environmental uncertainty and disturbances. To highlight the contribution with respect to the available literature, we provide a qualitative and quantitative comparison with the state of the art methods for reaching a goal and for exploration in unknown environments, showing the superior performance of our approach. To illustrate the effectiveness of the proposed framework, we present experiments in multiple indoors and outdoors environments running the algorithm fully on board and in real-time, using a robotic platform based on the Intel Ready to Fly drone kit, which represents the implementation in the most frugal platform for navigation in unknown cluttered environments demonstrated to date. Open source code is available at:~\url{https://github.com/IntelLabs/autonomousmavs}. The video of the experimental results can be found at~\url{https://youtu.be/Wq0e7vF6nZM}.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
135,975
2409.18228
Analysis of Spatial augmentation in Self-supervised models in the purview of training and test distributions
In this paper, we present an empirical study of typical spatial augmentation techniques used in self-supervised representation learning methods (both contrastive and non-contrastive), namely random crop and cutout. Our contributions are: (a) we dissociate random cropping into two separate augmentations, overlap and patch, and provide a detailed analysis on the effect of area of overlap and patch size on the accuracy on downstream tasks. (b) We offer an insight into why cutout augmentation does not learn good representation, as reported in earlier literature. Finally, based on these analyses, (c) we propose a distance-based margin to the invariance loss for learning scene-centric representations for the downstream task on object-centric distribution, showing that a margin as simple as one proportional to the pixel distance between the two spatial views in the scene-centric images can improve the learned representation. Our study furthers the understanding of the spatial augmentations, and the effect of the domain-gap between the training augmentations and the test distribution.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
492,150
2010.04928
Contrastive Rendering for Ultrasound Image Segmentation
Ultrasound (US) image segmentation has seen significant improvement in the deep learning era. However, the lack of sharp boundaries in US images still remains an inherent challenge for segmentation. Previous methods often resort to global context, multi-scale cues or auxiliary guidance to estimate the boundaries. It is hard for these methods to approach pixel-level learning for fine-grained boundary generation. In this paper, we propose a novel and effective framework to improve boundary estimation in US images. Our work has three highlights. First, we propose to formulate the boundary estimation as a rendering task, which can recognize ambiguous points (pixels/voxels) and calibrate the boundary prediction via enriched feature representation learning. Second, we introduce point-wise contrastive learning to enhance the similarity of points from the same class and contrastively decrease the similarity of points from different classes. Boundary ambiguities are therefore further addressed. Third, both rendering and contrastive learning tasks contribute to consistent improvement while reducing network parameters. As a proof-of-concept, we performed validation experiments on a challenging dataset of 86 ovarian US volumes. Results show that our proposed method outperforms state-of-the-art methods and has the potential to be used in clinical practice.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
199,927
2411.14465
Testing Uncertainty of Large Language Models for Physics Knowledge and Reasoning
Large Language Models (LLMs) have gained significant popularity in recent years for their ability to answer questions in various fields. However, these models have a tendency to "hallucinate" their responses, making it challenging to evaluate their performance. A major challenge is determining how to assess the certainty of a model's predictions and how it correlates with accuracy. In this work, we introduce an analysis for evaluating the performance of popular open-source LLMs, as well as gpt-3.5 Turbo, on multiple choice physics questionnaires. We focus on the relationship between answer accuracy and variability in topics related to physics. Our findings suggest that most models provide accurate replies in cases where they are certain, but this is by far not a general behavior. The relationship between accuracy and uncertainty exposes a broad horizontal bell-shaped distribution. We report how the asymmetry between accuracy and uncertainty intensifies as the questions demand more logical reasoning of the LLM agent, while the same relationship remains sharp for knowledge retrieval tasks.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
510,177
1804.06002
Joint Quantizer Optimization based on Neural Quantizer for Sum-Product Decoder
A low-precision analog-to-digital converter (ADC) is required to implement a frontend device of wideband digital communication systems in order to reduce its power consumption. The goal of this paper is to present a novel joint quantizer optimization method for minimizing lower-precision quantizers matched to the sum-product algorithms. The principal idea is to introduce a quantizer that includes a feed-forward neural network and the soft staircase function. Since the soft staircase function is differentiable and has non-zero gradient values everywhere, we can exploit backpropagation and a stochastic gradient descent method to train the feed-forward neural network in the quantizer. The expected loss regarding the channel input and the decoder output is minimized in a supervised training phase. The experimental results indicate that the joint quantizer optimization method successfully provides an 8-level quantizer for a low-density parity-check (LDPC) code that achieves only a 0.1-dB performance loss compared to the unquantized system.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
95,200
2105.14274
Korean-English Machine Translation with Multiple Tokenization Strategy
This work was conducted to find out how tokenization methods affect the training results of machine translation models. In this work, alphabet tokenization, morpheme tokenization, and BPE tokenization were applied to Korean as the source language and English as the target language respectively, and the comparison experiment was conducted by training each of the 9 models for 50,000 epochs using the Transformer neural network. As a result of measuring the BLEU scores of the experimental models, the model that applied BPE tokenization to Korean and morpheme tokenization to English recorded 35.73, showing the best performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
237,608
2404.13056
Variational Bayesian Optimal Experimental Design with Normalizing Flows
Bayesian optimal experimental design (OED) seeks experiments that maximize the expected information gain (EIG) in model parameters. Directly estimating the EIG using nested Monte Carlo is computationally expensive and requires an explicit likelihood. Variational OED (vOED), in contrast, estimates a lower bound of the EIG without likelihood evaluations by approximating the posterior distributions with variational forms, and then tightens the bound by optimizing its variational parameters. We introduce the use of normalizing flows (NFs) for representing variational distributions in vOED; we call this approach vOED-NFs. Specifically, we adopt NFs with a conditional invertible neural network architecture built from compositions of coupling layers, and enhanced with a summary network for data dimension reduction. We present Monte Carlo estimators to the lower bound along with gradient expressions to enable a gradient-based simultaneous optimization of the variational parameters and the design variables. The vOED-NFs algorithm is then validated in two benchmark problems, and demonstrated on a partial differential equation-governed application of cathodic electrophoretic deposition and an implicit likelihood case with stochastic modeling of aphid population. The findings suggest that a composition of 4--5 coupling layers is able to achieve lower EIG estimation bias, under a fixed budget of forward model runs, compared to previous approaches. The resulting NFs produce approximate posteriors that agree well with the true posteriors, able to capture non-Gaussian and multi-modal features effectively.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
448,135
1501.00329
Multi-Access Communications with Energy Harvesting: A Multi-Armed Bandit Model and the Optimality of the Myopic Policy
A multi-access wireless network with N transmitting nodes, each equipped with an energy harvesting (EH) device and a rechargeable battery of finite capacity, is studied. At each time slot (TS) a node is operative with a certain probability, which may depend on the availability of data, or the state of its channel. The energy arrival process at each node is modelled as an independent two-state Markov process, such that, at each TS, a node either harvests one unit of energy, or none. At each TS a subset of the nodes is scheduled by the access point (AP). The scheduling policy that maximises the total throughput is studied assuming that the AP does not know the states of either the EH processes or the batteries. The problem is identified as a restless multiarmed bandit (RMAB) problem, and an upper bound on the optimal scheduling policy is found. Under certain assumptions regarding the EH processes and the battery sizes, the optimality of the myopic policy (MP) is proven. For the general case, the performance of MP is compared numerically to the upper bound.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
38,982
2402.16045
Harnessing the Synergy between Pushing, Grasping, and Throwing to Enhance Object Manipulation in Cluttered Scenarios
In this work, we delve into the intricate synergy among non-prehensile actions like pushing, and prehensile actions such as grasping and throwing, within the domain of robotic manipulation. We introduce an innovative approach to learning these synergies by leveraging model-free deep reinforcement learning. The robot's workflow involves detecting the pose of the target object and the basket at each time step, predicting the optimal push configuration to isolate the target object, determining the appropriate grasp configuration, and inferring the necessary parameters for an accurate throw into the basket. This empowers robots to skillfully reconfigure cluttered scenarios through pushing, creating space for collision-free grasping actions. Simultaneously, we integrate throwing behavior, showcasing how this action significantly extends the robot's operational reach. Ensuring safety, we developed a simulation environment in Gazebo for robot training, applying the learned policy directly to our real robot. Notably, this work represents a pioneering effort to learn the synergy between pushing, grasping, and throwing actions. Extensive experimentation in both simulated and real-robot scenarios substantiates the effectiveness of our approach across diverse settings. Our approach achieves a success rate exceeding 80\% in both simulated and real-world scenarios. A video showcasing our experiments is available online at: https://youtu.be/q1l4BJVDbRw
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
432,402
2202.11760
Googling for Abortion: Search Engine Mediation of Abortion Accessibility in the United States
Among the myriad barriers to abortion access, crisis pregnancy centers (CPCs) pose an additional difficulty by targeting women with unexpected or "crisis" pregnancies in order to dissuade them from the procedure. Web search engines may prove to be another barrier, being in a powerful position to direct their users to health information, and above all, health services. In this study we ask, to what degree does Google Search provide quality responses to users searching for an abortion provider, specifically in terms of directing them to abortion clinics (ACs) or CPCs. To answer this question, we considered the scenario of a woman searching for abortion services online, and conducted 10 abortion-related queries from 467 locations across the United States once a week for 14 weeks. Overall, among Google's location results that feature businesses alongside a map, 79.4% were ACs, and 6.9% were CPCs. When an AC was returned, it was the closest known AC location 86.9% of the time. However, when a CPC appeared in a result set, it was the closest one to the search location 75.9% of the time. Examining correlates of AC results, we found that fewer AC results were returned for searches from poorer and rural areas, and those with TRAP laws governing AC facility and clinician requirements. We also observed that Google's performance on our queries significantly improved following a major algorithm update. These results have important implications concerning health access quality and equity, both for individual users and public health policy.
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
281,984
2404.00386
Jetsons at FinNLP 2024: Towards Understanding the ESG Impact of a News Article using Transformer-based Models
In this paper, we describe the different approaches explored by the Jetsons team for the Multi-Lingual ESG Impact Duration Inference (ML-ESG-3) shared task. The shared task focuses on predicting the duration and type of the ESG impact of a news article. The shared task dataset consists of 2,059 news titles and articles in English, French, Korean, and Japanese languages. For the impact duration classification task, we fine-tuned XLM-RoBERTa with a custom fine-tuning strategy and using self-training and DeBERTa-v3 using only English translations. These models individually ranked first on the leaderboard for Korean and Japanese and in an ensemble for the English language, respectively. For the impact type classification task, our XLM-RoBERTa model fine-tuned using a custom fine-tuning strategy ranked first for the English language.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
442,898
2411.19886
PDDLFuse: A Tool for Generating Diverse Planning Domains
Various real-world challenges require planning algorithms that can adapt to a broad range of domains. Traditionally, the creation of planning domains has relied heavily on human implementation, which limits the scale and diversity of available domains. While recent advancements have leveraged generative AI technologies such as large language models (LLMs) for domain creation, these efforts have predominantly focused on translating existing domains from natural language descriptions rather than generating novel ones. In contrast, the concept of domain randomization, which has been highly effective in reinforcement learning, enhances performance and generalizability by training on a diverse array of randomized new domains. Inspired by this success, our tool, PDDLFuse, aims to bridge this gap in Planning Domain Definition Language (PDDL). PDDLFuse is designed to generate new, diverse planning domains that can be used to validate new planners or test foundational planning models. We have developed methods to adjust the domain generator's parameters to modulate the difficulty of the domains it generates. This adaptability is crucial as existing domain-independent planners often struggle with more complex problems. Initial tests indicate that PDDLFuse efficiently creates intricate and varied domains, representing a significant advancement over traditional domain generation methods and making a contribution towards planning research.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
512,418
1703.07608
Deep Exploration via Randomized Value Functions
We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
70,426
2206.14759
How Train-Test Leakage Affects Zero-shot Retrieval
Neural retrieval models are often trained on (subsets of) the millions of queries of the MS MARCO / ORCAS datasets and then tested on the 250 Robust04 queries or other TREC benchmarks with often only 50 queries. In such setups, many of the few test queries can be very similar to queries from the huge training data -- in fact, 69% of the Robust04 queries have near-duplicates in MS MARCO / ORCAS. We investigate the impact of this unintended train-test leakage by training neural retrieval models on combinations of a fixed number of MS MARCO / ORCAS queries that are highly similar to the actual test queries and an increasing number of other queries. We find that leakage can improve effectiveness and even change the ranking of systems. However, these effects diminish as the amount of leakage among all training instances decreases and thus becomes more realistic.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
305,390
2008.11673
Orientation-Disentangled Unsupervised Representation Learning for Computational Pathology
Unsupervised learning enables modeling complex images without the need for annotations. The representation learned by such models can facilitate any subsequent analysis of large image datasets. However, some generative factors that cause irrelevant variations in images can potentially get entangled in such a learned representation, risking a negative effect on any subsequent use. The orientation of imaged objects, for instance, is often arbitrary/irrelevant, thus it can be desired to learn a representation in which the orientation information is disentangled from all other factors. Here, we propose to extend the Variational Auto-Encoder framework by leveraging the group structure of rotation-equivariant convolutional networks to learn orientation-wise disentangled generative factors of histopathology images. This way, we enforce a novel partitioning of the latent space, such that oriented and isotropic components get separated. We evaluated this structured representation on a dataset that consists of tissue regions for which nuclear pleomorphism and mitotic activity were assessed by expert pathologists. We show that the trained models efficiently disentangle the inherent orientation information of single-cell images. In comparison to classical approaches, the resulting aggregated representation of sub-populations of cells produces higher performances in subsequent tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,347
2207.11709
TVCalib: Camera Calibration for Sports Field Registration in Soccer
Sports field registration in broadcast videos is typically interpreted as the task of homography estimation, which provides a mapping between a planar field and the corresponding visible area of the image. In contrast to previous approaches, we consider the task as a camera calibration problem. First, we introduce a differentiable objective function that is able to learn the camera pose and focal length from segment correspondences (e.g., lines, point clouds), based on pixel-level annotations for segments of a known calibration object. The calibration module iteratively minimizes the segment reprojection error induced by the estimated camera parameters. Second, we propose a novel approach for 3D sports field registration from broadcast soccer images. Compared to the typical solution, which subsequently refines an initial estimation, our solution does it in one step. The proposed method is evaluated for sports field registration on two datasets and achieves superior results compared to two state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
309,747
2407.01229
On the Parameters of Codes for Data Access
This paper studies two crucial problems in the context of coded distributed storage systems directly related to their performance: 1) for a fixed alphabet size, determine the minimum number of servers the system must have for its service rate region to contain a prescribed set of points; 2) for a given number of servers, determine the minimum alphabet size for which the service rate region of the system contains a prescribed set of points. The paper establishes rigorous upper and lower bounds, as well as code constructions based on techniques from coding theory, optimization, and projective geometry.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
469,199
2306.01029
SPINEX: Similarity-based Predictions and Explainable Neighbors Exploration for Regression and Classification Tasks in Machine Learning
The field of machine learning (ML) has witnessed significant advancements in recent years. However, many existing algorithms lack interpretability and struggle with high-dimensional and imbalanced data. This paper proposes SPINEX, a novel similarity-based interpretable neighbor exploration algorithm designed to address these limitations. This algorithm combines ensemble learning and feature interaction analysis to achieve accurate predictions and meaningful insights by quantifying each feature's contribution to predictions and identifying interactions between features, thereby enhancing the interpretability of the algorithm. To evaluate the performance of SPINEX, extensive experiments on 59 synthetic and real datasets were conducted for both regression and classification tasks. The results demonstrate that SPINEX achieves comparable performance and, in some scenarios, may outperform commonly adopted ML algorithms. These findings demonstrate the effectiveness and competitiveness of SPINEX, making it a promising approach for various real-world applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
370,272
2405.17782
Post-Fair Federated Learning: Achieving Group and Community Fairness in Federated Learning via Post-processing
Federated Learning (FL) is a distributed machine learning framework in which a set of local communities collaboratively learn a shared global model while retaining all training data locally within each community. Two notions of fairness have recently emerged as important issues for federated learning: group fairness and community fairness. Group fairness requires that a model's decisions do not favor any particular group based on a set of legally protected attributes such as race or gender. Community fairness requires that global models exhibit similar levels of performance (accuracy) across all collaborating communities. Both fairness concepts can coexist within an FL framework, but the existing literature has focused on either one concept or the other. This paper proposes and analyzes a post-processing fair federated learning (FFL) framework called post-FFL. Post-FFL uses a linear program to simultaneously enforce group and community fairness while maximizing the utility of the global model. Because post-FFL is a post-processing approach, it can be used with existing FL training pipelines whose convergence properties are well understood. This paper uses post-FFL on real-world datasets to mimic how hospital networks, for example, use federated learning to deliver community health care. Theoretical results bound the accuracy lost when post-FFL enforces both notions of fairness. Experimental results illustrate that post-FFL simultaneously improves both group and community fairness in FL. Moreover, post-FFL outperforms the existing in-processing fair federated learning in terms of improving both notions of fairness, communication efficiency and computation cost.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
458,113
2007.01647
Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps
Human learning and intelligence work differently from the supervised pattern recognition approach adopted in most deep learning architectures. Humans seem to learn rich representations by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks. We suggest a simple but effective unsupervised model which develops such characteristics. The agent learns to represent the dynamical physical properties of its environment by intrinsically motivated exploration, and performs inference on this representation to reach goals. For this, a set of self-organizing maps which represent state-action pairs is combined with a causal model for sequence prediction. The proposed system is evaluated in the cartpole environment. After an initial phase of playful exploration, the agent can execute kinematic simulations of the environment's future, and use those for action planning. We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
185,497
1211.1819
Accurate Sampling Timing Acquisition for Baseband OFDM Power-line Communication in Non-Gaussian Noise
In this paper, a novel technique is proposed to address the joint sampling timing acquisition for baseband and broadband power-line communication (BB-PLC) systems using Orthogonal-Frequency-Division-Multiplexing (OFDM), including the sampling phase offset (SPO) and the sampling clock offset (SCO). Under pairwise correlation and joint Gaussian assumption of received signals in frequency domain, an approximated form of the log-likelihood function is derived. Instead of a high complexity two-dimension grid-search on the likelihood function, a five-step method is employed for accurate estimations. Several variants are presented in the same framework with different complexities. Unlike conventional pilot-assisted schemes using the extra phase rotations within one OFDM block, the proposed technique turns to the phase rotations between adjacent OFDM blocks. Analytical expressions of the variances and biases are derived. Extensive simulation results indicate significant performance improvements over conventional schemes. Additionally, effects of several noise models including non-Gaussianity, cyclo-stationarity, and temporal correlation are analyzed and simulated. Robustness of the proposed technique against violation of the joint Gaussian assumption is also verified by simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
19,633
2103.07351
Monocular Quasi-Dense 3D Object Tracking
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving. We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform. The object association leverages quasi-dense similarity learning to identify objects in various poses and viewpoints with appearance cues only. After initial 2D association, we further utilize 3D bounding boxes depth-ordering heuristics for robust instance association and motion-based 3D trajectory prediction for re-identification of occluded vehicles. In the end, an LSTM-based object velocity learning module aggregates the long-term trajectory information for more accurate motion extrapolation. Experiments on our proposed simulation data and real-world benchmarks, including KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking on urban-driving scenarios. On the Waymo Open benchmark, we establish the first camera-only baseline in the 3D tracking and 3D detection challenges. Our quasi-dense 3D tracking pipeline achieves impressive improvements on the nuScenes 3D tracking benchmark with near five times tracking accuracy of the best vision-only submission among all published methods. Our code, data and trained models are available at https://github.com/SysCV/qd-3dt.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
224,569
2501.06704
Fine-tuning ChatGPT for Automatic Scoring of Written Scientific Explanations in Chinese
The development of explanations for scientific phenomena is essential in science assessment, but scoring student-written explanations remains challenging and resource-intensive. Large language models (LLMs) have shown promise in addressing this issue, particularly in alphabetic languages like English. However, their applicability to logographic languages is less explored. This study investigates the potential of fine-tuning ChatGPT, a leading LLM, to automatically score scientific explanations written in Chinese. Student responses to seven scientific explanation tasks were collected and automatically scored, with scoring accuracy examined in relation to reasoning complexity using the Kendall correlation. A qualitative analysis explored how linguistic features influenced scoring accuracy. The results show that domain-specific adaptation enables ChatGPT to score Chinese scientific explanations with accuracy. However, scoring accuracy correlates with reasoning complexity: a negative correlation for lower-level responses and a positive one for higher-level responses. The model overrates complex reasoning in low-level responses with intricate sentence structures and underrates high-level responses using concise causal reasoning. These correlations stem from linguistic features--simplicity and clarity enhance accuracy for lower-level responses, while comprehensiveness improves accuracy for higher-level ones. Simpler, shorter responses tend to score more accurately at lower levels, whereas longer, information-rich responses yield better accuracy at higher levels. These findings demonstrate the effectiveness of LLMs in automatic scoring within a Chinese context and emphasize the importance of linguistic features and reasoning complexity in fine-tuning scoring models for educational assessments.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
524,089
0805.0154
The Tsallis entropy and the Shannon entropy of a universal probability
We study the properties of Tsallis entropy and Shannon entropy from the point of view of algorithmic randomness. In algorithmic information theory, there are two equivalent ways to define the program-size complexity K(s) of a given finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, the so-called universal probability m is introduced first, and then K(s) is defined as -log_2 m(s) without reference to the concept of program-size. In this paper, we investigate the properties of the Shannon entropy, the power sum, and the Tsallis entropy of a universal probability by means of the notion of program-size complexity. We determine the convergence or divergence of each of these three quantities, and evaluate its degree of randomness if it converges.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
1,696
2409.16921
Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation
Motion correction (MoCo) in radial MRI is a challenging problem due to the unpredictability of the subject's motion. Current state-of-the-art (SOTA) MoCo algorithms often use extensive high-quality MR images to pre-train neural networks, obtaining excellent reconstructions. However, the need for large-scale datasets significantly increases costs and limits model generalization. In this work, we propose Moner, an unsupervised MoCo method that jointly solves artifact-free MR images and accurate motion from undersampled, rigid motion-corrupted k-space data, without requiring training data. Our core idea is to leverage the continuous prior of implicit neural representation (INR) to constrain this ill-posed inverse problem, enabling ideal solutions. Specifically, we incorporate a quasi-static motion model into the INR, granting it the ability to correct the subject's motion. To stabilize model optimization, we reformulate radial MRI as a back-projection problem using the Fourier-slice theorem. Additionally, we propose a novel coarse-to-fine hash encoding strategy, significantly enhancing MoCo accuracy. Experiments on multiple MRI datasets show our Moner achieves performance comparable to SOTA MoCo techniques on in-domain data, while demonstrating significant improvements on out-of-domain data. The code is available at: https://github.com/iwuqing/Moner
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
491,575
1508.07741
Model Guided Sampling Optimization for Low-dimensional Problems
Optimization of very expensive black-box functions requires utilization of maximum information gathered by the process of optimization. Model Guided Sampling Optimization (MGSO) forms a more robust alternative to Jones' Gaussian-process-based EGO algorithm. Instead of EGO's maximizing expected improvement, the MGSO uses sampling the probability of improvement which is shown to be helpful against trapping in local minima. Further, the MGSO can reach close-to-optimum solutions faster than standard optimization algorithms on low dimensional or smooth problems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
46,434
1704.05181
"Short-Dot": Computing Large Linear Transforms Distributedly Using Coded Short Dot Products
Faced with saturation of Moore's law and increasing dimension of data, system designers have increasingly resorted to parallel and distributed computing. However, distributed computing is often bottlenecked by a small fraction of slow processors called "stragglers" that reduce the speed of computation because the fusion node has to wait for all processors to finish. To combat the effect of stragglers, recent literature introduces redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique -- that we call "Short-Dot" -- to introduce redundant computations in a coding theory inspired fashion, for computing linear transforms of long vectors. Instead of computing long dot products as required in the original linear transform, we construct a larger number of redundant and short dot products that can be computed faster and more efficiently at individual processors. In reference to comparable schemes that introduce redundancy to tackle stragglers, Short-Dot reduces the cost of computation, storage and communication since shorter portions are stored and computed at each processor, and also shorter portions of the input are communicated to each processor. We demonstrate through probabilistic analysis as well as experiments that Short-Dot offers significant speed-up compared to existing techniques. We also derive trade-offs between the length of the dot-products and the resilience to stragglers (number of processors to wait for), for any such strategy and compare it to that achieved by our strategy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
71,958
1801.01253
Approximate Ranking from Pairwise Comparisons
A common problem in machine learning is to rank a set of n items based on pairwise comparisons. Here ranking refers to partitioning the items into sets of pre-specified sizes according to their scores, which includes identification of the top-k items as the most prominent special case. The score of a given item is defined as the probability that it beats a randomly chosen other item. Finding an exact ranking typically requires a prohibitively large number of comparisons, but in practice, approximate rankings are often adequate. Accordingly, we study the problem of finding approximate rankings from pairwise comparisons. We analyze an active ranking algorithm that counts the number of comparisons won, and decides whether to stop or which pair of items to compare next, based on confidence intervals computed from the data collected in previous steps. We show that this algorithm succeeds in recovering approximate rankings using a number of comparisons that is close to optimal up to logarithmic factors. We also present numerical results, showing that in practice, approximation can drastically reduce the number of comparisons required to estimate a ranking.
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
87,697
2311.08804
Channel Capacity and Bounds In Mixed Gaussian-Impulsive Noise
Communication systems suffer from the mixed noise consisting of both non-Gaussian impulsive noise (IN) and white Gaussian noise (WGN) in many practical applications. However, there is little literature about the channel capacity under mixed noise. In this paper, we prove the existence of the capacity under p-th moment constraint and show that there are only finite mass points in the capacity-achieving distribution. Moreover, we provide lower and upper capacity bounds with closed forms. It is shown that the lower bounds can degenerate to the well-known Shannon formula under special scenarios. In addition, the capacity for specific modulations and the corresponding lower bounds are discussed. Numerical results reveal that the capacity decreases when the impulsiveness of the mixed noise becomes dominant and the obtained capacity bounds are shown to be very tight.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
407,874
1805.07960
Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization
The stochastic gradient descent has been widely used for solving composite optimization problems in big data analyses. Many algorithms and convergence properties have been developed. The composite functions were convex primarily and gradually nonconvex composite functions have been adopted to obtain more desirable properties. The convergence properties have been investigated, but only when either of composite functions is nonconvex. There is no convergence property when both composite functions are nonconvex, which is named the \textit{doubly-nonconvex} case. To overcome this difficulty, we assume a simple and weak condition that the penalty function is \textit{quasiconvex} and then we obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as the existing work. We deeply analyze the convergence rate with the constant step size and mini-batch size and give the optimal convergence rate with appropriate sizes, which is superior to the existing work. Experimental results illustrate that our method is superior to existing methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,008
1402.1347
Simulation work on Fractional Order PI{\lambda} Control Strategy for speed control of DC motor based on stability boundary locus method
This paper deals with the design of Fractional Order Proportional Integral (FO-PI{\lambda}) controller for the speed control of DC motor. A mathematical model of DC motor control system is derived and based on this model fractional order PI{\lambda} controller is designed using stability boundary locus method to satisfy required gain margin (GM) and phase margin (PM) of the system. Servo and Regulatory tracking simulation runs are carried out for the speed control of DC motor. The performance of the fractional order PI{\lambda} (FO-PI{\lambda}) controller is compared with Integer Order Relay Feedback Proportional Integral (IO-RFPI) controller. Finally, the stability of both control systems is considered.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
30,658
2309.08535
Visual Speech Recognition for Languages with Limited Labeled Data using Automatic Labels from Whisper
This paper proposes a powerful Visual Speech Recognition (VSR) method for multiple languages, especially for low-resource languages that have a limited number of labeled data. Different from previous methods that tried to improve the VSR performance for the target language by using knowledge learned from other languages, we explore whether we can increase the amount of training data itself for the different languages without human intervention. To this end, we employ a Whisper model which can conduct both language identification and audio-based speech recognition. It serves to filter data of the desired languages and transcribe labels from the unannotated, multilingual audio-visual data pool. By comparing the performances of VSR models trained on automatic labels and the human-annotated labels, we show that we can achieve similar VSR performance to that of human-annotated labels even without utilizing human annotations. Through the automated labeling process, we label large-scale unlabeled multilingual databases, VoxCeleb2 and AVSpeech, producing 1,002 hours of data for four low-resource languages for VSR: French, Italian, Spanish, and Portuguese. With the automatic labels, we achieve new state-of-the-art performance on mTEDx in four languages, significantly surpassing the previous methods. The automatic labels are available online: https://github.com/JeongHun0716/Visual-Speech-Recognition-for-Low-Resource-Languages
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
392,207
2203.06271
Bit-Metric Decoding Rate in Multi-User MIMO Systems: Theory
Link-adaptation (LA) is one of the most important aspects of wireless communications where the modulation and coding scheme (MCS) used by the transmitter is adapted to the channel conditions in order to meet a certain target error-rate. In a single-user SISO (SU-SISO) system with out-of-cell interference, LA is performed by computing the post-equalization signal-to-interference-noise ratio (SINR) at the receiver. The same technique can be employed in multi-user MIMO (MU-MIMO) receivers that use linear detectors. Another important use of post-equalization SINR is for physical layer (PHY) abstraction, where several PHY blocks like the channel encoder, the detector, and the channel decoder are replaced by an abstraction model in order to speed up system-level simulations. However, for MU-MIMO systems with non-linear receivers, there is no known equivalent of post-equalization SINR which makes both LA and PHY abstraction extremely challenging. This important issue is addressed in this two-part paper. In this part, a metric called the bit-metric decoding rate (BMDR) of a detector, which is the proposed equivalent of post-equalization SINR, is presented. Since BMDR does not have a closed form expression that would enable its instantaneous calculation, a machine-learning approach to predict it is presented along with extensive simulation results.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
285,053
0705.2310
On-Line Condition Monitoring using Computational Intelligence
This paper presents bushing condition monitoring frameworks that use multi-layer perceptrons (MLP), radial basis functions (RBF) and support vector machines (SVM) classifiers. The first level of the framework determines if the bushing is faulty or not while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using the dissolve gas analysis. MLP gives superior performance in terms of accuracy and training time than SVM and RBF. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced and the testing results improved from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework on its decision was 0.92.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
256
2010.12573
Object-aware Feature Aggregation for Video Object Detection
We present an Object-aware Feature Aggregation (OFA) module for video object detection (VID). Our approach is motivated by the intriguing property that video-level object-aware knowledge can be employed as a powerful semantic prior to help object recognition. As a consequence, augmenting features with such prior knowledge can effectively improve the classification and localization performance. To make features get access to more content about the whole video, we first capture the object-aware knowledge of proposals and incorporate such knowledge with the well-established pair-wise contexts. With extensive experimental results on the ImageNet VID dataset, our approach demonstrates the effectiveness of object-aware knowledge with the superior performance of 83.93% and 86.09% mAP with ResNet-101 and ResNeXt-101, respectively. When further equipped with Sequence DIoU NMS, we obtain the best-reported mAP of 85.07% and 86.88% as of the paper's submission. The code to reproduce our results will be released after acceptance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
202,746
1901.02998
Sentence Rewriting for Semantic Parsing
A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure with its target logical form. Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1-N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset -- WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
118,318
2208.14586
Few-shot Adaptive Object Detection with Cross-Domain CutMix
In object detection, data amount and cost are a trade-off, and collecting a large amount of data in a specific domain is labor intensive. Therefore, existing large-scale datasets are used for pre-training. However, conventional transfer learning and domain adaptation cannot bridge the domain gap when the target domain differs significantly from the source domain. We propose a data synthesis method that can solve the large domain gap problem. In this method, a part of the target image is pasted onto the source image, and the position of the pasted region is aligned by utilizing the information of the object bounding box. In addition, we introduce adversarial learning to discriminate whether regions are original or pasted. The proposed method trains on a large number of source images and a few target domain images. The proposed method achieves higher accuracy than conventional methods in a very different domain problem setting, where RGB images are the source domain, and thermal infrared images are the target domain. Similarly, the proposed method achieves higher accuracy in the case of simulation images to real images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
315,360
2102.13151
Clustering for epidemics on networks: a geometric approach
Infectious diseases typically spread over a contact network with millions of individuals, whose sheer size is a tremendous challenge to analysing and controlling an epidemic outbreak. For some contact networks, it is possible to group individuals into clusters. A high-level description of the epidemic between a few clusters is considerably simpler than on an individual level. However, to cluster individuals, most studies rely on equitable partitions, a rather restrictive structural property of the contact network. In this work, we focus on Susceptible-Infected-Susceptible (SIS) epidemics, and our contribution is threefold. First, we propose a geometric approach to specify all networks for which an epidemic outbreak simplifies to the interaction of only a few clusters. Second, for the complete graph and any initial viral state vectors, we derive the closed-form solution of the nonlinear differential equations of the N-Intertwined Mean-Field Approximation (NIMFA) of the SIS process. Third, by relaxing the notion of equitable partitions, we derive low-complexity approximations and bounds for epidemics on arbitrary contact networks. Our results are an important step towards understanding and controlling epidemics on large networks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
221,962
1008.3730
Poisoned Feedback: The Impact of Malicious Users in Closed-Loop Multiuser MIMO Systems
Accurate channel state information (CSI) at the transmitter is critical for maximizing spectral efficiency on the downlink of multi-antenna networks. In this work we analyze a novel form of physical layer attacks on such closed-loop wireless networks. Specifically, this paper considers the impact of deliberately inaccurate feedback by malicious users in a multiuser multicast system. Numerical results demonstrate the significant degradation in performance of closed-loop transmission schemes due to intentional feedback of false CSI by adversarial users.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,332
2307.09796
Forecasting Early with Meta Learning
In the early observation period of a time series, there might be only a few historic observations available to learn a model. However, in cases where an existing prior set of datasets is available, meta-learning methods can be applied. In this paper, we devise a meta-learning method that exploits samples from additional datasets and learns to augment time series through adversarial learning as an auxiliary task for the target dataset. Our model (FEML) is equipped with a shared convolutional backbone that learns features for varying-length inputs from different datasets and has dataset-specific heads to forecast for different output lengths. We show that FEML can meta-learn across datasets and, by additionally learning on adversarially generated samples as auxiliary samples for the target dataset, it can improve the forecasting performance compared to single-task learning and various solutions adapted from joint learning, multi-task learning, and classic forecasting baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
380,299
1403.0306
An extended isogeometric analysis for vibration of cracked FGM plates using higher-order shear deformation theory
A novel and effective formulation that combines the eXtended IsoGeometric Approach (XIGA) and Higher-order Shear Deformation Theory (HSDT) is proposed to study the free vibration of cracked Functionally Graded Material (FGM) plates. Herein, the general HSDT model with five unknown variables per node is applied for calculating the stiffness matrix without needing a Shear Correction Factor (SCF). In order to model the discontinuous and singular phenomena in the cracked plates, IsoGeometric Analysis (IGA) utilizing the Non-Uniform Rational B-Spline (NURBS) functions is incorporated with enrichment functions through the partition of unity method. NURBS basis functions with their inherent arbitrary high order smoothness permit the C1 requirement of the HSDT model. The material properties of the FGM plates vary continuously through the plate thickness according to an exponent function. The effects of gradient index, crack length, crack location, and length-to-thickness ratio on the natural frequencies and mode shapes of simply supported and clamped FGM plates are studied. Numerical examples are provided to show the excellent performance of the proposed method compared with other published solutions in the literature.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,282
1709.07610
Efficient Nearest-Neighbor Search for Dynamical Systems with Nonholonomic Constraints
Nearest-neighbor search dominates the asymptotic complexity of sampling-based motion planning algorithms and is often addressed with k-d tree data structures. While it is generally believed that the expected complexity of nearest-neighbor queries is $O(\log(N))$ in the size of the tree, this paper reveals that when a classic k-d tree approach is used with sub-Riemannian metrics, the expected query complexity is in fact $\Theta(N^p \log(N))$ for a number $p \in [0, 1)$ determined by the degree of nonholonomy of the system. These metrics arise naturally in nonholonomic mechanical systems, including classic wheeled robot models. To address this negative result, we propose novel k-d tree build and query strategies tailored to sub-Riemannian metrics and demonstrate significant improvements in the running time of nearest-neighbor search queries.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
true
81,313
2407.19218
A Versatility Measure for Parametric Risk Models
Parametric statistical methods play a central role in analyzing risk through its underlying frequency and severity components. Given the wide availability of numerical algorithms and high-speed computers, researchers and practitioners often model these separate (although possibly statistically dependent) random variables by fitting a large number of parametric probability distributions to historical data and then comparing goodness-of-fit statistics. However, this approach is highly susceptible to problems of overfitting because it gives insufficient weight to fundamental considerations of functional simplicity and adaptability. To address this shortcoming, we propose a formal mathematical measure for assessing the versatility of frequency and severity distributions prior to their application. We then illustrate this approach by computing and comparing values of the versatility measure for a variety of probability distributions commonly used in risk analysis.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
476,693
1905.09135
A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag Hierarchy
We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
131,654