id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.08222 | Optimisation of Nonlinear Spring and Damper Characteristics for Vehicle
Ride and Handling Improvement | In this paper, the optimum linear/nonlinear spring and linear/nonlinear damper force versus displacement and force versus velocity characteristic functions, respectively, are determined using simple lumped parameter models of a quarter car front independent suspension and a half car rear solid axle suspension of a light commercial vehicle. The complexity of a nonlinear function optimisation problem is reduced by determining the shape a priori based on typical shapes supplied by the car manufacturer and then scaling it up or down in the optimisation process. The vehicle ride and handling responses are investigated considering models of increased complexity. The linear and nonlinear optimised spring characteristics are first obtained using lower complexity lumped parameter models. The commercial vehicle dynamics software Carmaker is then used in the optimisation as the higher complexity, more realistic model. The performance of the optimised suspension units is also verified using this more realistic Carmaker model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 373,328 |
2205.09860 | Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with
Linear Convergence Rates | We consider optimizing two-layer neural networks in the mean-field regime where the learning dynamics of network weights can be approximated by the evolution in the space of probability measures over the weight parameters associated with the neurons. The mean-field regime is a theoretically attractive alternative to the NTK (lazy training) regime, which is restricted locally to the so-called neural tangent kernel space around specialized initializations. Several prior works (\cite{chizat2018global, mei2018mean}) establish the asymptotic global optimality of the mean-field regime, but it is still challenging to obtain a quantitative convergence rate due to the complicated unbounded nonlinearity of the training dynamics. This work establishes the first linear convergence result for vanilla two-layer neural networks trained by continuous-time noisy gradient descent in the mean-field regime. Our result relies on a novel time-dependent estimate of the logarithmic Sobolev constants for a family of measures determined by the evolving distribution of hidden neurons. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,446 |
2405.16924 | Demystifying amortized causal discovery with transformers | Supervised learning approaches for causal discovery from observational data often achieve competitive performance despite seemingly avoiding explicit assumptions that traditional methods make for identifiability. In this work, we investigate CSIvA (Ke et al., 2023), a transformer-based model promising to train on synthetic data and transfer to real data. First, we bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations. Consistent with classical approaches, good performance is achieved when we have a good prior on the test data, and the underlying model is identifiable. At the same time, we find new trade-offs. Training on datasets generated from different classes of causal models, unambiguously identifiable in isolation, improves the test generalization. Performance is still guaranteed, as the ambiguous cases resulting from the mixture of identifiable causal models are unlikely to occur (which we formally prove). Overall, our study finds that amortized causal discovery still needs to obey identifiability theory, but it also differs from classical methods in how the assumptions are formulated, trading more reliance on assumptions on the noise type for fewer hypotheses on the mechanisms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,681 |
2409.10597 | Optimizing Resource Consumption in Diffusion Models through
Hallucination Early Detection | Diffusion models have significantly advanced generative AI, but they encounter difficulties when generating complex combinations of multiple objects. As the final result heavily depends on the initial seed, accurately ensuring the desired output can require multiple iterations of the generation process. This repetition not only leads to a waste of time but also increases energy consumption, echoing the challenges of efficiency and accuracy in complex generative tasks. To tackle this issue, we introduce HEaD (Hallucination Early Detection), a new paradigm designed to swiftly detect incorrect generations at the beginning of the diffusion process. The HEaD pipeline combines cross-attention maps with a new indicator, the Predicted Final Image, to forecast the final outcome by leveraging the information available at early stages of the generation process. We demonstrate that using HEaD saves computational resources and accelerates the generation process to get a complete image, i.e., an image where all requested objects are accurately depicted. Our findings reveal that HEaD can save up to 12% of the generation time in a two-object scenario and underscore the importance of early detection mechanisms in generative models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 488,819 |
1908.08399 | Controllable Dual Skew Divergence Loss for Neural Machine Translation | In sequence prediction tasks like neural machine translation, training with cross-entropy loss often leads to models that overgeneralize and plunge into local optima. In this paper, we propose an extended loss function called \emph{dual skew divergence} (DSD) that integrates two symmetric terms on KL divergences with a balanced weight. We empirically discovered that such a balanced weight plays a crucial role in applying the proposed DSD loss to deep models. Thus we eventually develop a controllable DSD loss for general-purpose scenarios. Our experiments indicate that switching to the DSD loss after the convergence of ML training helps models escape local optima and stimulates stable performance improvements. Our evaluations on the WMT 2014 English-German and English-French translation tasks demonstrate that the proposed loss, as a general and convenient means for NMT training, indeed brings performance improvement in comparison to strong baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 142,542 |
1903.05486 | A Distributed Observer for a Discrete-Time Linear System | A simply structured distributed observer is described for estimating the state of a discrete-time, jointly observable, input-free, linear system whose sensed outputs are distributed across a time-varying network. It is explained how to construct the local estimators which comprise the observer so that their state estimation errors all converge exponentially fast to zero at a fixed, but arbitrarily chosen rate provided the network's graph is strongly connected for all time. This is accomplished by exploiting several well-known properties of invariant subspaces plus several kinds of suitably defined matrix norms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 124,172 |
2306.07096 | Global and Local Semantic Completion Learning for Vision-Language
Pre-training | Cross-modal alignment plays a crucial role in vision-language pre-training (VLP) models, enabling them to capture meaningful associations across different modalities. For this purpose, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interactions. The core idea of previous masked modeling tasks is to focus on reconstructing the masked tokens based on visible context for learning local-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, resulting in a limited cross-modal alignment ability of global representations to local features of the other modality. Therefore, in this paper, we propose a novel Global and Local Semantic Completion Learning (GLSCL) task to facilitate global-local alignment and local-local alignment simultaneously. Specifically, the GLSCL task complements the missing semantics of masked data and recovers global and local features by cross-modal interactions. Our GLSCL consists of masked global semantic completion (MGSC) and masked local token completion (MLTC). MGSC promotes learning more representative global features, which have a great impact on the performance of downstream tasks, while MLTC reconstructs modal-fusion local tokens, further enhancing accurate comprehension of multimodal data. To evaluate the proposed approaches on cross-modal alignment, we develop a validation benchmark called ALIGN-BENCH. Moreover, we present a flexible vision encoder, enabling our model to simultaneously perform image-text and video-text multimodal tasks. Experimental results show that our proposed method obtains state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 372,878 |
1701.06110 | Multi-Erasure Locally Recoverable Codes Over Small Fields For Flash
Memory Array | Erasure codes play an important role in storage systems to prevent data loss. In this work, we study a class of erasure codes called Multi-Erasure Locally Recoverable Codes (ME-LRCs) for flash memory array. Compared to previous related works, we focus on the construction of ME-LRCs over small fields. We first develop upper and lower bounds on the minimum distance of ME-LRCs. These bounds explicitly take the field size into account. Our main contribution is to propose a general construction of ME-LRCs based on generalized tensor product codes, and study their erasure-correcting property. A decoding algorithm tailored for erasure recovery is given. We then prove that our construction yields optimal ME-LRCs with a wide range of code parameters. Finally, we present several families of ME-LRCs over different fields. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,071 |
2409.07843 | Real-time Multi-view Omnidirectional Depth Estimation System for Robots
and Autonomous Driving on Real Scenes | Omnidirectional Depth Estimation has broad application prospects in fields such as robotic navigation and autonomous driving. In this paper, we propose a robotic prototype system and corresponding algorithm designed to validate omnidirectional depth estimation for navigation and obstacle avoidance in real-world scenarios for both robots and vehicles. The proposed HexaMODE system captures 360$^\circ$ depth maps using six fisheye cameras arranged in a surround configuration. We introduce a combined spherical sweeping method and optimize the model architecture for the proposed RtHexa-OmniMVS algorithm to achieve real-time omnidirectional depth estimation. To ensure high accuracy, robustness, and generalization in real-world environments, we employ a teacher-student self-training strategy, utilizing large-scale unlabeled real-world data for model training. The proposed algorithm demonstrates high accuracy in various complex real-world scenarios, both indoors and outdoors, achieving an inference speed of 15 fps on edge computing platforms. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 487,686 |
1807.06454 | Global sensitivity analysis of frequency band gaps in one-dimensional
phononic crystals | Phononic crystals have been widely employed in many engineering fields, owing to their unique feature of frequency band gaps. For example, their capability to filter out incoming elastic waves, including seismic waves, will have a significant impact on the seismic safety of nuclear infrastructure. In order to accurately design the desired frequency band gaps, one must pay attention to how the input parameters and the interaction of the parameters can affect the frequency band gaps. Global sensitivity analysis can decompose the dispersion relationship of the phononic crystals and screen the variance attributed to each of the parameters and the interaction between them. Prior to the application in one-dimensional (1D) phononic crystals, this paper first reviews the theory of global sensitivity analysis using variance decomposition (Sobol sensitivity analysis). Afterwards, the sensitivity analysis is applied to study a simple mathematical model with three input variables for better understanding of the concept. Then, the sensitivity analysis is utilized to study the characteristic of the first frequency band gap in 1D phononic crystals with respect to the input parameters. This study reveals the quantified influence of the parameters and their correlation in determining the first frequency band gap. In addition, simple straightforward design equations based on reduced Sobol functions are proposed to easily estimate the first frequency band gap. Finally, the error associated with the proposed design equations is also addressed. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 103,122 |
1605.01681 | Brain Emotional Learning-Based Prediction Model (For Long-Term Chaotic
Prediction Applications) | This study suggests a new prediction model for chaotic time series inspired by the brain emotional learning of mammals. We describe the structure and function of this model, which is referred to as BELPM (Brain Emotional Learning-Based Prediction Model). Structurally, the model mimics the connection between the regions of the limbic system, and functionally it uses weighted k nearest neighbors to imitate the roles of those regions. The learning algorithm of BELPM is defined using steepest descent (SD) and the least square estimator (LSE). Two benchmark chaotic time series, Lorenz and Henon, have been used to evaluate the performance of BELPM. The obtained results have been compared with those of other prediction methods. The results show that BELPM has the capability to achieve a reasonable accuracy for long-term prediction of chaotic time series, using a limited amount of training data and a reasonably low computational time. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 55,513 |
2501.15071 | Gaze-based Task Decomposition for Robot Manipulation in Imitation
Learning | In imitation learning for robotic manipulation, decomposing object manipulation tasks into multiple sub-tasks is essential. This decomposition enables the reuse of learned skills in varying contexts and the combination of acquired skills to perform novel tasks, rather than merely replicating demonstrated motions. Gaze plays a critical role in human object manipulation, where it is strongly correlated with hand movements. We hypothesize that an imitating agent's gaze control, fixating on specific landmarks and transitioning between them, simultaneously segments demonstrated manipulations into sub-tasks. In this study, we propose a simple yet robust task decomposition method based on gaze transitions. The method leverages teleoperation, a common modality in robotic manipulation for collecting demonstrations, in which a human operator's gaze is measured and used for task decomposition as a substitute for an imitating agent's gaze. Notably, our method achieves consistent task decomposition across all demonstrations for each task, which is desirable in contexts such as machine learning. We applied this method to demonstrations of various tasks and evaluated the characteristics and consistency of the resulting sub-tasks. Furthermore, through extensive testing across a wide range of hyperparameter variations, we demonstrated that the proposed method possesses the robustness necessary for application to different robotic systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 527,390 |
2207.02454 | Ordinal Regression via Binary Preference vs Simple Regression:
Statistical and Experimental Perspectives | Ordinal regression with anchored reference samples (ORARS) has been proposed for predicting the subjective Mean Opinion Score (MOS) of input stimuli automatically. The ORARS addresses the MOS prediction problem by pairing a test sample with each of the pre-scored anchored reference samples. A trained binary classifier is then used to predict which sample, test or anchor, is better statistically. Posteriors of the binary preference decision are then used to predict the MOS of the test sample. In this paper, a rigorous framework, analysis, and experiments are presented to demonstrate that ORARS is advantageous over simple regression. The contributions of this work are: 1) Show that traditional regression can be reformulated into multiple preference tests to yield a better performance, which is confirmed experimentally with simulations; 2) Generalize ORARS to other regression problems and verify its effectiveness; 3) Provide some prerequisite conditions which can ensure proper application of ORARS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,525 |
2202.12509 | RRL: Regional Rotation Layer in Convolutional Neural Networks | Convolutional Neural Networks (CNNs) have performed very well in image classification and object detection in recent years, but even the most advanced models have limited rotation invariance. Known solutions include the enhancement of training data and the increase of rotation invariance by globally merging the rotation equivariant features. These methods either increase the workload of training or increase the number of model parameters. To address this problem, this paper proposes a module that can be inserted into existing networks and directly incorporates rotation invariance into the feature extraction layers of the CNNs. This module does not have learnable parameters and will not increase the complexity of the model. At the same time, trained only on upright data, it can perform well on a rotated testing set. These advantages make it suitable for fields such as biomedicine and astronomy, where it is difficult to obtain upright samples or the target has no directionality. Evaluating our module with LeNet-5, ResNet-18, and tiny-yolov3, we obtain impressive results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 282,268 |
2110.02636 | Learning Sparse Masks for Diffusion-based Image Inpainting | Diffusion-based inpainting is a powerful tool for the reconstruction of images from sparse data. Its quality strongly depends on the choice of known data. Optimising their spatial location -- the inpainting mask -- is challenging. A commonly used tool for this task are stochastic optimisation strategies. However, they are slow as they compute multiple inpainting results. We provide a remedy in terms of a learned mask generation model. By emulating the complete inpainting pipeline with two networks for mask generation and neural surrogate inpainting, we obtain a model for highly efficient adaptive mask generation. Experiments indicate that our model can achieve competitive quality with an acceleration by as much as four orders of magnitude. Our findings serve as a basis for making diffusion-based inpainting more attractive for applications such as image compression, where fast encoding is highly desirable. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 259,196 |
2008.00645 | Active Classification with Uncertainty Comparison Queries | Noisy pairwise comparison feedback has been incorporated to improve the overall query complexity of interactively learning binary classifiers. The \textit{positivity comparison oracle} is used to provide feedback on which of a pair of data points is more likely to be positive. Because it is impossible to infer accurate labels using this oracle alone \textit{without knowing the classification threshold}, existing methods still rely on the traditional \textit{explicit labeling oracle}, which directly answers the label given a data point. Existing methods conduct sorting on all data points and use the explicit labeling oracle to find the classification threshold. The current methods, however, have two drawbacks: (1) they require unnecessary sorting for label inference; (2) quick sort is naively adapted to noisy feedback and negatively affects practical performance. In order to avoid this inefficiency and acquire information about the classification threshold, we propose a new pairwise comparison oracle concerning uncertainties. This oracle receives two data points as input and answers which one has higher uncertainty. We then propose an efficient adaptive labeling algorithm using the proposed oracle and the positivity comparison oracle. In addition, we also address the situation where the labeling budget is insufficient compared to the dataset size, which can be dealt with by plugging the proposed algorithm into an active learning algorithm. Furthermore, we confirm the feasibility of the proposed oracle and the performance of the proposed algorithm theoretically and empirically. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 190,068 |
2202.07094 | Matching Tweets With Applicable Fact-Checks Across Languages | An important challenge for news fact-checking is the effective dissemination of existing fact-checks. This in turn brings the need for reliable methods to detect previously fact-checked claims. In this paper, we focus on automatically finding existing fact-checks for claims made in social media posts (tweets). We conduct both classification and retrieval experiments, in monolingual (English only), multilingual (Spanish, Portuguese), and cross-lingual (Hindi-English) settings using multilingual transformer models such as XLM-RoBERTa and multilingual embeddings such as LaBSE and SBERT. We present promising results for "match" classification (86% average accuracy) in four language pairs. We also find that a BM25 baseline outperforms or is on par with state-of-the-art multilingual embedding models for the retrieval task during our monolingual experiments. We highlight and discuss NLP challenges while addressing this problem in different languages, and we introduce a novel curated dataset of fact-checks and corresponding tweets for future research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 280,426 |
2409.02714 | MOOSS: Mask-Enhanced Temporal Contrastive Learning for Smooth State
Evolution in Visual Reinforcement Learning | In visual Reinforcement Learning (RL), learning from pixel-based observations poses significant challenges on sample efficiency, primarily due to the complexity of extracting informative state representations from high-dimensional data. Previous methods such as contrastive-based approaches have made strides in improving sample efficiency but fall short in modeling the nuanced evolution of states. To address this, we introduce MOOSS, a novel framework that leverages a temporal contrastive objective with the help of graph-based spatial-temporal masking to explicitly model state evolution in visual RL. Specifically, we propose a self-supervised dual-component strategy that integrates (1) a graph construction of pixel-based observations for spatial-temporal masking, coupled with (2) a multi-level contrastive learning mechanism that enriches state representations by emphasizing temporal continuity and change of states. MOOSS advances the understanding of state dynamics by disrupting and learning from spatial-temporal correlations, which facilitates policy learning. Our comprehensive evaluation on multiple continuous and discrete control benchmarks shows that MOOSS outperforms previous state-of-the-art visual RL methods in terms of sample efficiency, demonstrating the effectiveness of our method. Our code is released at https://github.com/jsun57/MOOSS. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 485,810 |
2310.15641 | Guaranteed Coverage Prediction Intervals with Gaussian Process
Regression | Gaussian Process Regression (GPR) is a popular regression method which, unlike most Machine Learning techniques, provides estimates of uncertainty for its predictions. These uncertainty estimates, however, are based on the assumption that the model is well-specified, an assumption that is violated in most practical applications, since the required knowledge is rarely available. As a result, the produced uncertainty estimates can become very misleading; for example, the prediction intervals (PIs) produced for the 95% confidence level may cover much less than 95% of the true labels. To address this issue, this paper introduces an extension of GPR based on a Machine Learning framework called Conformal Prediction (CP). This extension guarantees the production of PIs with the required coverage even when the model is completely misspecified. The proposed approach combines the advantages of GPR with the valid coverage guarantee of CP, while the experimental results demonstrate its superiority over existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 402,388 |
2401.15854 | LSTM-based Deep Neural Network With A Focus on Sentence Representation
for Sequential Sentence Classification in Medical Scientific Abstracts | The Sequential Sentence Classification task within the domain of medical abstracts, termed SSC, involves the categorization of sentences into pre-defined headings based on their roles in conveying critical information in the abstract. In the SSC task, sentences are sequentially related to each other. For this reason, the role of sentence embeddings is crucial for capturing both the semantic information between words in the sentence and the contextual relationship of sentences within the abstract, which then enhances the SSC system performance. In this paper, we propose an LSTM-based deep learning network with a focus on creating comprehensive sentence representation at the sentence level. To demonstrate the efficacy of the created sentence representation, a system utilizing these sentence embeddings is also developed, which consists of a Convolutional-Recurrent neural network (C-RNN) at the abstract level and a multi-layer perceptron network (MLP) at the segment level. Our proposed system yields highly competitive results compared to state-of-the-art systems and further enhances the F1 scores of the baseline by 1.0%, 2.8%, and 2.6% on the benchmark datasets PubMed 200K RCT, PubMed 20K RCT, and NICTA-PIBOSO, respectively. This indicates the significant impact of improving sentence representation on boosting model performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 424,611 |
2404.14647 | Human Behavior Modeling via Identification of Task Objective and
Variability | Human behavior modeling is important for the design and implementation of human-automation interactive control systems. In this context, human behavior refers to a human's control input to systems. We propose a novel method for human behavior modeling that uses human demonstrations for a given task to infer the unknown task objective and the variability. The task objective represents the human's intent or desire. It can be inferred by the inverse optimal control and improve the understanding of human behavior by providing an explainable objective function behind the given human behavior. Meanwhile, the variability denotes the intrinsic uncertainty in human behavior. It can be described by a Gaussian mixture model and capture the uncertainty in human behavior which cannot be encoded by the task objective. The proposed method can improve the prediction accuracy of human behavior by leveraging both task objective and variability. The proposed method is demonstrated through human-subject experiments using an illustrative quadrotor remote control example. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 448,753 |
2403.02484 | Encodings for Prediction-based Neural Architecture Search | Predictor-based methods have substantially enhanced Neural Architecture Search (NAS) optimization. The efficacy of these predictors is largely influenced by the method of encoding neural network architectures. While traditional encodings used an adjacency matrix describing the graph structure of a neural network, novel encodings embrace a variety of approaches from unsupervised pretraining of latent representations to vectors of zero-cost proxies. In this paper, we categorize and investigate neural encodings from three main types: structural, learned, and score-based. Furthermore, we extend these encodings and introduce \textit{unified encodings}, that extend NAS predictors to multiple search spaces. Our analysis draws from experiments conducted on over 1.5 million neural network architectures on NAS spaces such as NASBench-101 (NB101), NB201, NB301, Network Design Spaces (NDS), and TransNASBench-101. Building on our study, we present our predictor \textbf{FLAN}: \textbf{Fl}ow \textbf{A}ttention for \textbf{N}AS. FLAN integrates critical insights on predictor design, transfer learning, and \textit{unified encodings} to enable more than an order of magnitude cost reduction for training NAS accuracy predictors. Our implementation and encodings for all neural networks are open-sourced at \href{https://github.com/abdelfattah-lab/flan_nas}{https://github.com/abdelfattah-lab/flan\_nas}. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 434,817 |
2001.09508 | Bilevel Optimization for Differentially Private Optimization in Energy Systems | This paper studies how to apply differential privacy to constrained optimization problems whose inputs are sensitive. This task raises significant challenges since random perturbations of the input data often render the constrained optimization problem infeasible or change significantly the nature of its optimal solutions. To address this difficulty, this paper proposes a bilevel optimization model that can be used as a post-processing step: It redistributes the noise introduced by a differentially private mechanism optimally while restoring feasibility and near-optimality. The paper shows that, under a natural assumption, this bilevel model can be solved efficiently for real-life large-scale nonlinear nonconvex optimization problems with sensitive customer data. The experimental results demonstrate the accuracy of the privacy-preserving mechanism and showcase significant benefits compared to standard approaches. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 161,595
2404.01147 | Do LLMs Find Human Answers To Fact-Driven Questions Perplexing? A Case Study on Reddit | Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse. However, the study of using LLMs to model human-like answers to fact-driven social media questions is still under-explored. In this work, we investigate how LLMs model the wide variety of human answers to fact-driven questions posed on several topic-specific Reddit communities, or subreddits. We collect and release a dataset of 409 fact-driven questions and 7,534 diverse, human-rated answers from 15 r/Ask{Topic} communities across 3 categories: profession, social identity, and geographic location. We find that LLMs are considerably better at modeling highly-rated human answers to such questions, as opposed to poorly-rated human answers. We present several directions for future research based on our initial findings. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 443,275
2201.11202 | Low-Resolution Precoding for Multi-Antenna Downlink Channels and OFDM | Downlink precoding is considered for multi-path multi-input single-output channels where the base station uses orthogonal frequency-division multiplexing and low-resolution signaling. A quantized coordinate minimization (QCM) algorithm is proposed and its performance is compared to other precoding algorithms including squared infinity-norm relaxation (SQUID), multi-antenna greedy iterative quantization (MAGIQ), and maximum safety margin precoding. MAGIQ and QCM achieve the highest information rates and QCM has the lowest complexity measured in the number of multiplications. The information rates are computed for pilot-aided channel estimation and data-aided channel estimation. Bit error rates for a 5G low-density parity-check code confirm the information-theoretic calculations. Simulations with imperfect channel knowledge at the transmitter show that the performance of QCM and SQUID degrades in a similar fashion as zero-forcing precoding with high resolution quantizers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 277,218 |
1608.00104 | World Knowledge as Indirect Supervision for Document Clustering | One of the key obstacles in making learning protocols realistic in applications is the need to supervise them, a costly process that often requires hiring domain experts. We consider a framework that uses world knowledge as indirect supervision. World knowledge is general-purpose knowledge, which is not designed for any specific domain. Then the key challenges are how to adapt the world knowledge to domains and how to represent it for learning. In this paper, we provide an example of using world knowledge for domain dependent document clustering. We provide three ways to specify the world knowledge to domains by resolving the ambiguity of the entities and their types, and represent the data with world knowledge as a heterogeneous information network. Then we propose a clustering algorithm that can cluster multiple types and incorporate the sub-type information as constraints. In the experiments, we use two existing knowledge bases as our sources of world knowledge. One is Freebase, which is collaboratively collected knowledge about entities and their organizations. The other is YAGO2, a knowledge base automatically extracted from Wikipedia, which maps knowledge to the linguistic knowledge base, WordNet. Experimental results on two text benchmark datasets (20newsgroups and RCV1) show that incorporating world knowledge as indirect supervision can significantly outperform the state-of-the-art clustering algorithms as well as clustering algorithms enhanced with world knowledge features. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 59,224
2109.02917 | Fine-grained Hand Gesture Recognition in Multi-viewpoint Hand Hygiene | This paper contributes a new high-quality dataset for hand gesture recognition in hand hygiene systems, named "MFH". Generally, current datasets are not focused on: (i) fine-grained actions; and (ii) data mismatch between different viewpoints, which are available under realistic settings. To address the aforementioned issues, the MFH dataset is proposed to contain a total of 731147 samples obtained by different camera views in 6 non-overlapping locations. Additionally, each sample belongs to one of seven steps introduced by the World Health Organization (WHO). As a minor contribution, inspired by advances in fine-grained image recognition and distribution adaptation, this paper recommends using the self-supervised learning method to handle these preceding problems. The extensive experiments on the benchmarking MFH dataset show that the introduced method yields competitive performance in both the Accuracy and the Macro F1-score. The code and the MFH dataset are available at https://github.com/willogy-team/hand-gesture-recognition-smc2021. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,898 |
2408.09426 | A Robust Algorithm for Contactless Fingerprint Enhancement and Matching | Compared to contact fingerprint images, contactless fingerprint images exhibit four distinct characteristics: (1) they contain less noise; (2) they have fewer discontinuities in ridge patterns; (3) the ridge-valley pattern is less distinct; and (4) they pose an interoperability problem, as they lack the elastic deformation caused by pressing the finger against the capture device. These properties present significant challenges for the enhancement of contactless fingerprint images. In this study, we propose a novel contactless fingerprint identification solution that enhances the accuracy of minutiae detection through improved frequency estimation and a new region-quality-based minutia extraction algorithm. In addition, we introduce an efficient and highly accurate minutiae-based encoding and matching algorithm. We validate the effectiveness of our approach through extensive experimental testing. Our method achieves a minimum Equal Error Rate (EER) of 2.84\% on the PolyU contactless fingerprint dataset, demonstrating its superior performance compared to existing state-of-the-art techniques. The proposed fingerprint identification method exhibits notable precision and resilience, proving to be an effective and feasible solution for contactless fingerprint-based identification systems. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,431 |
2102.01739 | An enhanced parametric nonlinear reduced order model for imperfect structures using Neumann expansion | We present an enhanced version of the parametric nonlinear reduced order model for shape imperfections in structural dynamics we studied in a previous work [1]. The model is computed intrusively and with no training using information about the nominal geometry of the structure and some user-defined displacement fields representing shape defects, i.e. small deviations from the nominal geometry parametrized by their respective amplitudes. The linear superposition of these artificial displacements describes the defected geometry and can be embedded in the strain formulation in such a way that, in the end, nonlinear internal elastic forces can be expressed as a polynomial function of both these defect fields and the actual displacement field. This way, a tensorial representation of the internal forces can be obtained and, owing to the reduction in size of the model given by a Galerkin projection, high simulation speed-ups can be achieved. We show that by adopting a rigorous deformation framework we are able to achieve better accuracy as compared to the previous work. In particular, exploiting Neumann expansion in the definition of the Green-Lagrange strain tensor, we show that our previous model is a lower order approximation with respect to the one we present now. Two numerical examples of a clamped beam and a MEMS gyroscope finally demonstrate the benefits of the method in terms of speed and increased accuracy. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 218,204
2401.06325 | Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo | To sample from a general target distribution $p_*\propto e^{-f_*}$ beyond the isoperimetric condition, Huang et al. (2023) proposed to perform sampling through reverse diffusion, giving rise to Diffusion-based Monte Carlo (DMC). Specifically, DMC follows the reverse SDE of a diffusion process that transforms the target distribution to the standard Gaussian, utilizing a non-parametric score estimation. However, the original DMC algorithm encountered high gradient complexity, resulting in an exponential dependency on the error tolerance $\epsilon$ of the obtained samples. In this paper, we demonstrate that the high complexity of DMC originates from its redundant design of score estimation, and propose a more efficient algorithm, called RS-DMC, based on a novel recursive score estimation method. In particular, we first divide the entire diffusion process into multiple segments and then formulate the score estimation step (at any time step) as a series of interconnected mean estimation and sampling subproblems accordingly, which are correlated in a recursive manner. Importantly, we show that with a proper design of the segment decomposition, all sampling subproblems will only need to tackle a strongly log-concave distribution, which can be very efficient to solve using the Langevin-based samplers with a provably rapid convergence rate. As a result, we prove that the gradient complexity of RS-DMC only has a quasi-polynomial dependency on $\epsilon$, which significantly improves on the exponential gradient complexity in Huang et al. (2023). Furthermore, under commonly used dissipative conditions, our algorithm is provably much faster than the popular Langevin-based algorithms. Our algorithm design and theoretical framework illuminate a novel direction for addressing sampling problems, which could be of broader applicability in the community. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 421,117
1004.4462 | BiLingual Information Retrieval System for English and Tamil | This paper addresses the design and implementation of BiLingual Information Retrieval system on the domain, Festivals. A generic platform is built for BiLingual Information retrieval which can be extended to any foreign or Indian language working with the same efficiency. Search for the solution of the query is not done in a specific predefined set of standard languages but is chosen dynamically on processing the user's query. This paper deals with Indian language Tamil apart from English. The task is to retrieve the solution for the user-given query in the same language as that of the query. In this process, an Ontological tree is built for the domain in such a way that there are entries in the above listed two languages in every node of the tree. A Part-Of-Speech (POS) Tagger is used to determine the keywords from the given query. Based on the context, the keywords are translated to appropriate languages using the Ontological tree. A search is performed and documents are retrieved based on the keywords. With the use of the Ontological tree, Information Extraction is done. Finally, the solution for the query is translated back to the query language (if necessary) and produced to the user. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 6,278
2010.11665 | Spike and slab variational Bayes for high dimensional logistic regression | Variational Bayes (VB) is a popular scalable alternative to Markov chain Monte Carlo for Bayesian inference. We study a mean-field spike and slab VB approximation of widely used Bayesian model selection priors in sparse high-dimensional logistic regression. We provide non-asymptotic theoretical guarantees for the VB posterior in both $\ell_2$ and prediction loss for a sparse truth, giving optimal (minimax) convergence rates. Since the VB algorithm does not depend on the unknown truth to achieve optimality, our results shed light on effective prior choices. We confirm the improved performance of our VB algorithm over common sparse VB approaches in a numerical study. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 202,358
2209.07972 | A Multi-turn Machine Reading Comprehension Framework with Rethink Mechanism for Emotion-Cause Pair Extraction | Emotion-cause pair extraction (ECPE) is an emerging task in emotion cause analysis, which extracts potential emotion-cause pairs from an emotional document. Most recent studies use end-to-end methods to tackle the ECPE task. However, these methods either suffer from a label sparsity problem or fail to model complicated relations between emotions and causes. Furthermore, none of them considers explicit semantic information of clauses. To this end, we transform the ECPE task into a document-level machine reading comprehension (MRC) task and propose a Multi-turn MRC framework with Rethink mechanism (MM-R). Our framework can model complicated relations between emotions and causes while avoiding generating the pairing matrix (the leading cause of the label sparsity problem). Besides, the multi-turn structure can fuse explicit semantic information flow between emotions and causes. Extensive experiments on the benchmark emotion cause corpus demonstrate the effectiveness of our proposed framework, which outperforms existing state-of-the-art methods. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 317,961
2405.06105 | Can Perplexity Reflect Large Language Model's Ability in Long Text Understanding? | Recent studies have shown that Large Language Models (LLMs) have the potential to process extremely long text. Many works only evaluate LLMs' long-text processing ability on the language modeling task, with perplexity (PPL) as the evaluation metric. However, in our study, we find that there is no correlation between PPL and LLMs' long-text understanding ability. Besides, PPL may only reflect the model's ability to model local information instead of catching long-range dependency. Therefore, only using PPL to prove the model could process long text is inappropriate. The local focus feature of PPL could also explain some existing phenomena, such as the great extrapolation ability of the position method ALiBi. When evaluating a model's ability in long text, we might pay more attention to PPL's limitation and avoid overly relying on it. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 453,171
1307.1303 | Submodularity of a Set Label Disagreement Function | A set label disagreement function is defined over the number of variables that deviate from the dominant label. The dominant label is the value assumed by the largest number of variables within a set of binary variables. The submodularity of a certain family of set label disagreement functions is discussed in this manuscript. Such a disagreement function could be utilized as a cost function in combinatorial optimization approaches for problems defined over hypergraphs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 25,614
2306.07532 | Referring Camouflaged Object Detection | We consider the problem of referring camouflaged object detection (Ref-COD), a new task that aims to segment specified camouflaged objects based on a small set of referring images with salient target objects. We first assemble a large-scale dataset, called R2C7K, which consists of 7K images covering 64 object categories in real-world scenarios. Then, we develop a simple but strong dual-branch framework, dubbed R2CNet, with a reference branch embedding the common representations of target objects from referring images and a segmentation branch identifying and segmenting camouflaged objects under the guidance of the common representations. In particular, we design a Referring Mask Generation module to generate pixel-level prior mask and a Referring Feature Enrichment module to enhance the capability of identifying specified camouflaged objects. Extensive experiments show the superiority of our Ref-COD methods over their COD counterparts in segmenting specified camouflaged objects and identifying the main body of target objects. Our code and dataset are publicly available at https://github.com/zhangxuying1004/RefCOD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 373,050 |
1910.03135 | DexPilot: Vision Based Teleoperation of Dexterous Robotic Hand-Arm System | Teleoperation offers the possibility of imparting robotic systems with sophisticated reasoning skills, intuition, and creativity to perform tasks. However, current teleoperation solutions for high degree-of-actuation (DoA), multi-fingered robots are generally cost-prohibitive, while low-cost offerings usually provide reduced degrees of control. Herein, a low-cost, vision based teleoperation system, DexPilot, was developed that allows for complete control over the full 23 DoA robotic system by merely observing the bare human hand. DexPilot enables operators to carry out a variety of complex manipulation tasks that go beyond simple pick-and-place operations. This allows for collection of high dimensional, multi-modality, state-action data that can be leveraged in the future to learn sensorimotor policies for challenging manipulation tasks. The system performance was measured through speed and reliability metrics across two human demonstrators on a variety of tasks. The videos of the experiments can be found at https://sites.google.com/view/dex-pilot. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 148,420
2008.12473 | Pre-training of Graph Neural Network for Modeling Effects of Mutations on Protein-Protein Binding Affinity | Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design. In this study, we develop a novel deep learning based framework, named GraphPPI, to predict the binding affinity changes upon mutations based on the features provided by a graph neural network (GNN). In particular, GraphPPI first employs a well-designed pre-training scheme to enforce the GNN to capture the features that are predictive of the effects of mutations on binding affinity in an unsupervised manner and then integrates these graphical features with gradient-boosting trees to perform the prediction. Experiments showed that, without any annotated signals, GraphPPI can capture meaningful patterns of the protein structures. Also, GraphPPI achieved new state-of-the-art performance in predicting the binding affinity changes upon both single- and multi-point mutations on five benchmark datasets. In-depth analyses also showed GraphPPI can accurately estimate the effects of mutations on the binding affinity between SARS-CoV-2 and its neutralizing antibodies. These results have established GraphPPI as a powerful and useful computational tool in the studies of protein design. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 193,589
1402.6404 | On the Algebraic Structure of Linear Trellises | Trellises are crucial graphical representations of codes. While conventional trellises are well understood, the general theory of (tail-biting) trellises is still under development. Iterative decoding concretely motivates such theory. In this paper we first develop a new algebraic framework for a systematic analysis of linear trellises which enables us to address open foundational questions. In particular, we present a useful and powerful characterization of linear trellis isomorphy. We also obtain a new proof of the Factorization Theorem of Koetter/Vardy and point out unnoticed problems for the group case. Next, we apply our work to: describe all the elementary trellis factorizations of linear trellises and consequently to determine all the minimal linear trellises for a given code; prove that nonmergeable one-to-one linear trellises are strikingly determined by the edge-label sequences of certain closed paths; prove self-duality theorems for minimal linear trellises; analyze quasi-cyclic linear trellises and consequently extend results on reduced linear trellises to nonreduced ones. To achieve this, we also provide new insight into mergeability and path connectivity properties of linear trellises. Our classification results are important for iterative decoding as we show that minimal linear trellises can yield different pseudocodewords even if they have the same graph structure. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 31,173 |
2209.02690 | Classification Protocols with Minimal Disclosure | We consider multi-party protocols for classification that are motivated by applications such as e-discovery in court proceedings. We identify a protocol that guarantees that the requesting party receives all responsive documents and the sending party discloses the minimal amount of non-responsive documents necessary to prove that all responsive documents have been received. This protocol can be embedded in a machine learning framework that enables automated labeling of points and the resulting multi-party protocol is equivalent to the standard one-party classification problem (if the one-party classification problem satisfies a natural independence-of-irrelevant-alternatives property). Our formal guarantees focus on the case where there is a linear classifier that correctly partitions the documents. | false | false | false | false | false | false | true | false | false | false | false | false | true | true | false | false | false | true | 316,279 |
2104.02558 | Comparing CTC and LFMMI for out-of-domain adaptation of wav2vec 2.0 acoustic model | In this work, we investigate if the wav2vec 2.0 self-supervised pretraining helps mitigate the overfitting issues with connectionist temporal classification (CTC) training to reduce its performance gap with flat-start lattice-free MMI (E2E-LFMMI) for automatic speech recognition with limited training data. Towards that objective, we use the pretrained wav2vec 2.0 BASE model and fine-tune it on three different datasets including out-of-domain (Switchboard) and cross-lingual (Babel) scenarios. Our results show that for supervised adaptation of the wav2vec 2.0 model, both E2E-LFMMI and CTC achieve similar results; significantly outperforming the baselines trained only with supervised data. Fine-tuning the wav2vec 2.0 model with E2E-LFMMI and CTC we obtain the following relative WER improvements over the supervised baseline trained with E2E-LFMMI. We get relative improvements of 40% and 44% on the clean-set and 64% and 58% on the test set of Librispeech (100h) respectively. On Switchboard (300h) we obtain relative improvements of 33% and 35% respectively. Finally, for Babel languages, we obtain relative improvements of 26% and 23% on Swahili (38h) and 18% and 17% on Tagalog (84h) respectively. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,763
2406.16724 | {\mu}-Net: A Deep Learning-Based Architecture for {\mu}-CT Segmentation | X-ray computed microtomography ({\mu}-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples. These images enable clinicians to examine internal anatomy and gain insights into the disease or anatomical morphology. However, extracting relevant information from 3D images requires semantic segmentation of the regions of interest, which is usually done manually and is time-consuming and tedious. In this work, we propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus. The framework employs an optimized 2D CNN architecture that can infer a 3D segmentation of the sample, avoiding the high computational cost of a 3D CNN architecture. We tackle the challenges of handling large and high-resolution image data (over a thousand pixels in each dimension) and a small training database (only three samples) by proposing a standard protocol for data normalization and processing. Moreover, we investigate how the noise, contrast, and spatial resolution of the sample and the training of the architecture are affected by the reconstruction technique, which depends on the number of input images. Experiments show that our framework significantly reduces the time required to segment new samples, allowing a faster microtomography analysis of the Carassius auratus heart shape. Furthermore, our framework can work with any bio-image (biological and medical) from {\mu}-CT with high resolution and a small dataset size. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 467,247
1704.00107 | Online Geographical Load Balancing for Mobile Edge Computing with Energy Harvesting | Mobile Edge Computing (MEC) (a.k.a. fog computing) has recently emerged to enable low-latency and location-aware data processing at the edge of mobile networks. Since providing grid power supply in support of MEC can be costly and even infeasible in some scenarios, on-site renewable energy is mandated as a major or even sole power supply. Nonetheless, the high intermittency and unpredictability of energy harvesting creates many new challenges of performing effective MEC. In this paper, we develop an algorithm called GLOBE that performs joint geographical load balancing (GLB) and admission control for optimizing the system performance of a network of MEC-enabled and energy harvesting-powered base stations. By leveraging and extending the Lyapunov optimization with perturbation technique, GLOBE operates online without requiring future system information and addresses significant challenges caused by battery state dynamics and energy causality constraints. Moreover, GLOBE works in a distributed manner, which makes our algorithm scalable to large networks. We prove that GLOBE achieves a close-to-optimal system performance compared to the offline algorithm that knows full future information, and present a critical tradeoff between battery capacity and system performance. Simulation results validate our analysis and demonstrate the superior performance of GLOBE compared to benchmark algorithms. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 71,025
2410.10446 | Robust co-design framework for buildings operated by predictive control | Cost-effective decarbonisation of the built environment is a stepping stone to achieving net-zero carbon emissions since buildings are globally responsible for more than a quarter of global energy-related CO$_2$ emissions. Improving energy utilization and decreasing costs naturally requires considering multiple domain-specific performance criteria. The resulting problem is often computationally infeasible. The paper proposes an approach based on decomposition and selection of significant operating conditions to achieve a formulation with reduced computational complexity. We present a robust framework to optimise the physical design, the controller, and the operation of residential buildings in an integrated fashion, considering external weather conditions and time-varying electricity prices. The framework explicitly includes operational constraints and increases the utilization of the energy generated by intermittent resources. A case study illustrates the potential of co-design in enhancing the reliability, flexibility and self-sufficiency of a system operating under different conditions. Specifically, numerical results demonstrate reductions in costs up to $30$% compared to a deterministic formulation. Furthermore, the proposed approach achieves a computational time reduction of at least $10$ times lower compared to the original problem with a deterioration in the performance of only 0.6%. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 498,079 |
cs/0604042 | Adaptative combination rule and proportional conflict redistribution rule for information fusion | This paper presents two new promising rules of combination for the fusion of uncertain and potentially highly conflicting sources of evidence in the framework of the theory of belief functions in order to palliate the well-known limitations of Dempster's rule and to work beyond the limits of applicability of the Dempster-Shafer theory. We present both a new class of adaptive combination rules (ACR) and a new efficient Proportional Conflict Redistribution (PCR) rule allowing to deal with highly conflicting sources for static and dynamic fusion applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 539,383
2009.09355 | Multi Agent Path Finding with Awareness for Spatially Extended Agents | Path finding problems involve identification of a plan for conflict free movement of agents over a common road network. Most approaches to this problem handle the agents as point objects, wherein the size of the agent is significantly smaller than the road on which it travels. In this paper, we consider spatially extended agents which have a size comparable to the length of the road on which they travel. An optimal multi agent path finding approach for spatially-extended agents was proposed in the eXtended Conflict Based Search (XCBS) algorithm. As XCBS resolves only a pair of conflicts at a time, it results in deeper search trees in case of cascading or multiple (more than two agent) conflicts at a given location. This issue is addressed in eXtended Conflict Based Search with Awareness (XCBS-A) in which an agent uses awareness of other agents' plans to make its own plan. In this paper, we explore XCBS-A in greater detail, we theoretically prove its completeness and empirically demonstrate its performance with other algorithms in terms of variances in road characteristics, agent characteristics and plan characteristics. We demonstrate the distributive nature of the algorithm by evaluating its performance when distributed over multiple machines. XCBS-A generates a huge search space impacting its efficiency in terms of memory; to address this we propose an approach for memory-efficiency and empirically demonstrate the performance of the algorithm. The nature of XCBS-A is such that it may lead to suboptimal solutions, hence the final contribution of this paper is an enhanced approach, XCBS-Local Awareness (XCBS-LA) which we prove will be optimal and complete. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 196,552 |
1703.01977 | Linear, Machine Learning and Probabilistic Approaches for Time Series
Analysis | In this paper we study different approaches for time series modeling. The forecasting approaches using linear models, the ARIMA algorithm, and the XGBoost machine learning algorithm are described. Results of different model combinations are shown. For probabilistic modeling, approaches using copulas and Bayesian inference are considered. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 69,470
2102.00676 | Underwater Image Enhancement via Learning Water Type Desensitized
Representations | We present a novel underwater image enhancement method termed SCNet, which improves image quality while coping with the degradation diversity caused by the water. SCNet is based on normalization schemes across both spatial and channel dimensions, with the key idea of learning water type desensitized features. Specifically, we apply whitening to de-correlate activations across spatial dimensions for each instance in a mini-batch. We also eliminate channel-wise correlation by standardizing and re-injecting the first two moments of the activations across channels. The normalization schemes of spatial and channel dimensions are performed at each scale of the U-Net to obtain multi-scale representations. With such water type irrelevant encodings, the decoder can easily reconstruct the clean signal and be unaffected by the distortion types. Experimental results on two real-world underwater image datasets show that our approach can successfully enhance images with diverse water types, and achieves competitive performance in visual quality improvement. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 217,876
2312.12807 | All but One: Surgical Concept Erasing with Model Preservation in
Text-to-Image Diffusion Models | Text-to-Image models such as Stable Diffusion have shown impressive image synthesis capabilities, thanks to the utilization of large-scale datasets. However, these datasets may contain sexually explicit, copyrighted, or undesirable content, which the model can directly generate. Given that retraining these large models on individual concept deletion requests is infeasible, fine-tuning algorithms have been developed to tackle concept erasing in diffusion models. While these algorithms yield good concept erasure, they all present one of the following issues: 1) the corrupted feature space yields synthesis of disintegrated objects, 2) the initially synthesized content undergoes a divergence in both spatial structure and semantics in the generated images, and 3) sub-optimal training updates heighten the model's susceptibility to utility harm. These issues severely degrade the original utility of generative models. In this work, we present a new approach that solves all of these challenges. We take inspiration from the concept of classifier guidance and propose a surgical update on the classifier guidance term while constraining the drift of the unconditional score term. Furthermore, our algorithm empowers the user to select an alternative to the erased concept, allowing for more controllability. Our experimental results show that our algorithm not only erases the target concept effectively but also preserves the model's generation capability. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 417,114
2209.09188 | Avoiding Biased Clinical Machine Learning Model Performance Estimates in
the Presence of Label Selection | When evaluating the performance of clinical machine learning models, one must consider the deployment population. When the population of patients with observed labels is only a subset of the deployment population (label selection), standard model performance estimates on the observed population may be misleading. In this study we describe three classes of label selection and simulate five causally distinct scenarios to assess how particular selection mechanisms bias a suite of commonly reported binary machine learning model performance metrics. Simulations reveal that when selection is affected by observed features, naive estimates of model discrimination may be misleading. When selection is affected by labels, naive estimates of calibration fail to reflect reality. We borrow traditional weighting estimators from the causal inference literature and find that when selection probabilities are properly specified, they recover full population estimates. We then tackle the real-world task of monitoring the performance of deployed machine learning models whose interactions with clinicians feed back into and affect the selection mechanism of the labels. We train three machine learning models to flag low-yield laboratory diagnostics, and simulate their intended consequence of reducing wasteful laboratory utilization. We find that naive estimates of AUROC on the observed population undershoot actual performance by up to 20%. Such a disparity could be large enough to lead to the wrongful termination of a successful clinical decision support tool. We propose an altered deployment procedure, one that combines injected randomization with traditional weighted estimates, and find it recovers true model performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,416
2412.19833 | Multi-atlas Ensemble Graph Neural Network Model For Major Depressive
Disorder Detection Using Functional MRI Data | Major depressive disorder (MDD) is one of the most common mental disorders, with significant impacts on many daily activities and quality of life. Globally, it ranks as the second leading cause of disability. The current diagnostic approach for MDD primarily relies on clinical observations and patient-reported symptoms, overlooking the diverse underlying causes and pathophysiological factors contributing to depression. Therefore, scientific researchers and clinicians must gain a deeper understanding of the pathophysiological mechanisms involved in MDD. There is growing evidence in neuroscience that depression is a brain network disorder, and the use of neuroimaging, such as magnetic resonance imaging (MRI), plays a significant role in identifying and treating MDD. Resting-state functional MRI (rs-fMRI) is among the most popular neuroimaging techniques used to study MDD. Deep learning techniques have been widely applied to neuroimaging data to help with early mental health disorder detection. Recent years have seen a rise in interest in graph neural networks (GNNs), which are deep neural architectures specifically designed to handle graph-structured data like rs-fMRI. This research aimed to develop an ensemble-based GNN model capable of detecting discriminative features from rs-fMRI images for the purpose of diagnosing MDD. Specifically, we constructed an ensemble model by combining features from multiple brain region segmentation atlases to capture brain complexity and detect distinct features more accurately than single atlas-based models. Further, the effectiveness of our model is demonstrated by assessing its performance on a large multi-site MDD dataset. The best performing model among all folds achieved an accuracy of 75.80%, a sensitivity of 88.89%, a specificity of 61.84%, a precision of 71.29%, and an F1-score of 79.12%. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 520,989
2009.09883 | Selectivity Estimation with Attribute Value Dependencies using Linked
Bayesian Networks | Relational query optimisers rely on cost models to choose between different query execution plans. Selectivity estimates are known to be a crucial input to the cost model. In practice, standard selectivity estimation procedures are prone to large errors. This is mostly because they rely on the so-called attribute value independence and join uniformity assumptions. Therefore, multidimensional methods have been proposed to capture dependencies between two or more attributes, both within and across relations. However, these methods incur a large computational cost, which makes them unusable in practice. We propose a method based on Bayesian networks that is able to capture cross-relation attribute value dependencies with little overhead. Our proposal is based on the assumption that dependencies between attributes are preserved when joins are involved. Furthermore, we introduce a parameter for trading off estimation accuracy against computational cost. We validate our work by comparing it with other relevant methods on a large workload derived from the JOB and TPC-DS benchmarks. Our results show that our method is an order of magnitude more efficient than existing methods, whilst maintaining a high level of accuracy. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 196,720
2002.04034 | Sperm Detection and Tracking in Phase-Contrast Microscopy Image
Sequences using Deep Learning and Modified CSR-DCF | Nowadays, computer-aided sperm analysis (CASA) systems have made a big leap in extracting the characteristics of spermatozoa for studies or measuring human fertility. The first step in sperm characteristics analysis is sperm detection in the frames of the video sample. In this article, we used RetinaNet, a deep fully convolutional neural network, as the object detector. Sperms are small objects with few attributes, which makes detection more difficult in high-density samples, especially when there are other particles in the semen that can resemble sperm heads. One of the main attributes of sperms is their movement, but this attribute cannot be extracted when only one frame is fed to the network. To improve the performance of the sperm detection network, we concatenated some consecutive frames to use as the input of the network. With this method, the motility attribute has also been extracted, and then with the help of the deep convolutional network, we have achieved high accuracy in sperm detection. The second step is tracking the sperms to extract the motility parameters that are essential for indicating fertility and for other studies on sperms. In the tracking phase, we modify the CSR-DCF algorithm. This method has also shown excellent results in sperm tracking, even in high-density sperm samples, occlusions, sperm colliding, and when sperms exit from a frame and re-enter in the next frames. The average precision of the detection phase is 99.1%, and the F1 score of the tracking method evaluation is 96.61%. These results can be a great help in studies investigating sperm behavior and analyzing fertility possibility. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 163,486
2006.13840 | A Holistic Framework for Parameter Coordination of Interconnected
Microgrids against Disasters | This paper proposes a holistic framework for parameter coordination of a power electronic-interfaced microgrid interconnection against natural disasters. The paper identifies a transient stability issue in a microgrid interconnection. Based on recent advances in control theory, we design a framework that can systematically coordinate system parameters, such that post-disaster equilibrium points of microgrid interconnections are asymptotically stable. The core of the framework is a stability assessment algorithm using sum of squares programming. The efficacy of the proposed framework is tested in a four-microgrid interconnection. The proposed framework has potential to extend to microgrid interconnections with a wide range of hierarchical control schemes. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 184,043 |
2110.09807 | Learning to Learn Graph Topologies | Learning a graph topology to reveal the underlying relationship between data entities plays an important role in various machine learning and data analysis tasks. Under the assumption that structured data vary smoothly over a graph, the problem can be formulated as a regularised convex optimisation over a positive semidefinite cone and solved by iterative algorithms. Classic methods require an explicit convex function to reflect generic topological priors, e.g. the $\ell_1$ penalty for enforcing sparsity, which limits the flexibility and expressiveness in learning rich topological structures. We propose to learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O). Specifically, our model first unrolls an iterative primal-dual splitting algorithm into a neural network. The key structural proximal projection is replaced with a variational autoencoder that refines the estimated graph with enhanced topological properties. The model is trained in an end-to-end fashion with pairs of node data and graph samples. Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,937 |
2410.22367 | MAMMAL -- Molecular Aligned Multi-Modal Architecture and Language | Drug discovery typically consists of multiple steps, including identifying a target protein key to a disease's etiology, validating that interacting with this target could prevent symptoms or cure the disease, discovering a small molecule or biologic therapeutic to interact with it, and optimizing the candidate molecule through a complex landscape of required properties. Drug discovery related tasks often involve prediction and generation while considering multiple entities that potentially interact, which poses a challenge for typical AI models. For this purpose we present MAMMAL - Molecular Aligned Multi-Modal Architecture and Language - a method that we applied to create a versatile multi-task multi-align foundation model that learns from large-scale biological datasets (2 billion samples) across diverse modalities, including proteins, small molecules, and genes. We introduce a prompt syntax that supports a wide range of classification, regression, and generation tasks. It allows combining different modalities and entity types as inputs and/or outputs. Our model handles combinations of tokens and scalars and enables the generation of small molecules and proteins, property prediction, and transcriptomic lab test predictions. We evaluated the model on 11 diverse downstream tasks spanning different steps within a typical drug discovery pipeline, where it reaches new SOTA in 9 tasks and is comparable to SOTA in 2 tasks. This performance is achieved while using a unified architecture serving all tasks, in contrast to the original SOTA performance achieved using tailored architectures. The model code and pretrained weights are publicly available at https://github.com/BiomedSciAI/biomed-multi-alignment and https://huggingface.co/ibm/biomed.omics.bl.sm.ma-ted-458m. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 503,608
2308.14232 | Research Report -- Persistent Autonomy and Robot Learning Lab | Robots capable of performing manipulation tasks in a broad range of missions in unstructured environments can develop numerous applications to impact and enhance human life. Existing work in robot learning has shown success in applying conventional machine learning algorithms to enable robots for replicating rather simple manipulation tasks in manufacturing, service and healthcare applications, among others. However, learning robust and versatile models for complex manipulation tasks that are inherently multi-faceted and naturally intricate demands algorithmic advancements in robot learning. Our research supports the long-term goal of making robots more accessible and serviceable to the general public by expanding robot applications to real-world scenarios that require systems capable of performing complex tasks. To achieve this goal, we focus on identifying and investigating knowledge gaps in robot learning of complex manipulation tasks by leveraging upon human-robot interaction and robot learning from human instructions. This document presents an overview of the recent research developments in the Persistent Autonomy and Robot Learning (PeARL) lab at the University of Massachusetts Lowell. Here, I briefly discuss different research directions, and present a few proposed approaches in our most recent publications. For each proposed approach, I then mention potential future directions that can advance the field. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 388,243 |
1310.4169 | Naming Game on Networks: Let Everyone be Both Speaker and Hearer | To investigate how consensus is reached on a large self-organized peer-to-peer network, we extended the naming game model commonly used in language and communication to Naming Game in Groups (NGG). Differing from other existing naming game models, in NGG everyone in the population (network) can be both speaker and hearer simultaneously, which more closely resembles real-life scenarios. Moreover, NGG allows the transmission (communication) of multiple words (opinions) for multiple intra-group consensuses. The communications among indirectly-connected nodes are also enabled in NGG. We simulated and analyzed the consensus process in some typical network topologies, including random-graph networks, small-world networks and scale-free networks, to better understand how global convergence (consensus) could be reached on one common word. The results are interpreted on group negotiation of a peer-to-peer network, which shows that global consensus in the population can be reached more rapidly when more opinions are permitted within each group or when the negotiating groups in the population are larger in size. The novel features and properties introduced by our model have demonstrated its applicability in better investigating general consensus problems on peer-to-peer networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 27,795
2407.05892 | An efficient method to automate tooth identification and 3D bounding box
extraction from Cone Beam CT Images | Accurate identification, localization, and segregation of teeth from Cone Beam Computed Tomography (CBCT) images are essential for analyzing dental pathologies. Modeling an individual tooth can be challenging and intricate to accomplish, especially when fillings and other restorations introduce artifacts. This paper proposes a method for automatically detecting, identifying, and extracting teeth from CBCT images. Our approach involves dividing the three-dimensional images into axial slices for image detection. Teeth are pinpointed and labeled using a single-stage object detector. Subsequently, bounding boxes are delineated and identified to create three-dimensional representations of each tooth. The proposed solution has been successfully integrated into the dental analysis tool Dentomo. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,170 |
2105.05636 | VL-NMS: Breaking Proposal Bottlenecks in Two-Stage Visual-Language
Matching | The prevailing framework for matching multimodal inputs is based on a two-stage process: 1) detecting proposals with an object detector and 2) matching text queries with proposals. Existing two-stage solutions mostly focus on the matching step. In this paper, we argue that these methods overlook an obvious \emph{mismatch} between the roles of proposals in the two stages: they generate proposals solely based on the detection confidence (i.e., query-agnostic), hoping that the proposals contain all instances mentioned in the text query (i.e., query-aware). Due to this mismatch, chances are that proposals relevant to the text query are suppressed during the filtering process, which in turn bounds the matching performance. To this end, we propose VL-NMS, which is the first method to yield query-aware proposals at the first stage. VL-NMS regards all mentioned instances as critical objects, and introduces a lightweight module to predict a score for aligning each proposal with a critical object. These scores can guide the NMS operation to filter out proposals irrelevant to the text query, increasing the recall of critical objects, resulting in a significantly improved matching performance. Since VL-NMS is agnostic to the matching step, it can be easily integrated into any state-of-the-art two-stage matching methods. We validate the effectiveness of VL-NMS on two multimodal matching tasks, namely referring expression grounding and image-text matching. Extensive ablation studies on several baselines and benchmarks consistently demonstrate the superiority of VL-NMS. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 234,879 |
1901.06788 | Faithful Simulation of Distributed Quantum Measurements with
Applications in Distributed Rate-Distortion Theory | We consider the task of faithfully simulating a distributed quantum measurement, wherein we provide a protocol for the three parties, Alice, Bob and Eve, to simulate a repeated action of a distributed quantum measurement using a pair of non-product approximating measurements by Alice and Bob, followed by a stochastic mapping at Eve. The objective of the protocol is to utilize minimum resources, in terms of classical bits needed by Alice and Bob to communicate their measurement outcomes to Eve, and the common randomness shared among the three parties, while faithfully simulating independent repeated instances of the original measurement. To achieve this, we develop a mutual covering lemma and a technique for random binning of distributed quantum measurements, and, in turn, characterize a set of sufficient communication and common randomness rates required for asymptotic simulatability in terms of single-letter quantum information quantities. Furthermore, using these results we address a distributed quantum rate-distortion problem, where we characterize the achievable rate-distortion region through a single-letter inner bound. Finally, via a technique of single-letterization of multi-letter quantum information quantities, we provide an outer bound for the rate-distortion region. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 119,082 |
1909.10683 | Optimally Resilient Codes for List-Decoding from Insertions and
Deletions | We give a complete answer to the following basic question: "What is the maximal fraction of deletions or insertions tolerable by $q$-ary list-decodable codes with non-vanishing information rate?" This question has been open even for binary codes, including the restriction to the binary insertion-only setting, where the best-known result was that a $\gamma\leq 0.707$ fraction of insertions is tolerable by some binary code family. For any desired $\epsilon > 0$, we construct a family of binary codes of positive rate which can be efficiently list-decoded from any combination of $\gamma$ fraction of insertions and $\delta$ fraction of deletions as long as $\gamma+2\delta\leq 1-\epsilon$. On the other hand, for any $\gamma,\delta$ with $\gamma+2\delta=1$ list-decoding is impossible. Our result thus precisely characterizes the feasibility region of binary list-decodable codes for insertions and deletions. We further generalize our result to codes over any finite alphabet of size $q$. Surprisingly, our work reveals that the feasibility region for $q>2$ is not the natural generalization of the binary bound above. We provide tight upper and lower bounds that precisely pin down the feasibility region, which turns out to have a $(q-1)$-piece-wise linear boundary whose $q$ corner-points lie on a quadratic curve. The main technical work in our results is proving the existence of code families of sufficiently large size with good list-decoding properties for any combination of $\delta,\gamma$ within the claimed feasibility region. We achieve this via an intricate analysis of codes introduced by [Bukh, Ma; SIAM J. Discrete Math; 2014]. Finally, we give a simple yet powerful concatenation scheme for list-decodable insertion-deletion codes which transforms any such (non-efficient) code family (with vanishing information rate) into an efficiently decodable code family with constant rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 146,603
2302.08670 | Cascaded information enhancement and cross-modal attention feature
fusion for multispectral pedestrian detection | Multispectral pedestrian detection is a technology designed to detect and locate pedestrians in Color and Thermal images, which has been widely used in automatic driving, video surveillance, etc. So far, most available multispectral pedestrian detection algorithms have achieved only limited success because they fail to take into account the confusion between pedestrian information and background noise in Color and Thermal images. Here we propose a multispectral pedestrian detection algorithm, which mainly consists of a cascaded information enhancement module and a cross-modal attention feature fusion module. On the one hand, the cascaded information enhancement module adopts the channel and spatial attention mechanism to perform attention weighting on the features fused by the cascaded feature fusion block. Moreover, it multiplies the single-modal features with the attention weights element by element to enhance the pedestrian features in each single modality and thus suppress the interference from the background. On the other hand, the cross-modal attention feature fusion module mines the features of both Color and Thermal modalities so that they complement each other; the global features are then constructed by adding the cross-modal complemented features element by element, and are attentionally weighted to achieve an effective fusion of the two modal features. Finally, the fused features are input into the detection head to detect and locate pedestrians. Extensive experiments have been performed on two improved versions of annotations (sanitized annotations and paired annotations) of the public dataset KAIST. The experimental results show that our method achieves a lower pedestrian miss rate and more accurate pedestrian detection boxes compared to the comparison methods. Additionally, ablation experiments also prove the effectiveness of each module designed in this paper. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 346,132
2206.09811 | Shapley-NAS: Discovering Operation Contribution for Neural Architecture
Search | In this paper, we propose a Shapley value based method to evaluate operation contribution (Shapley-NAS) for neural architecture search. Differentiable architecture search (DARTS) acquires the optimal architectures by optimizing the architecture parameters with gradient descent, which significantly reduces the search cost. However, the magnitude of architecture parameters updated by gradient descent fails to reveal the actual operation importance to the task performance and therefore harms the effectiveness of obtained architectures. By contrast, we propose to evaluate the direct influence of operations on validation accuracy. To deal with the complex relationships between supernet components, we leverage Shapley value to quantify their marginal contributions by considering all possible combinations. Specifically, we iteratively optimize the supernet weights and update the architecture parameters by evaluating operation contributions via Shapley value, so that the optimal architectures are derived by selecting the operations that contribute significantly to the tasks. Since the exact computation of Shapley value is NP-hard, the Monte-Carlo sampling based algorithm with early truncation is employed for efficient approximation, and the momentum update mechanism is adopted to alleviate fluctuation of the sampling process. Extensive experiments on various datasets and various search spaces show that our Shapley-NAS outperforms the state-of-the-art methods by a considerable margin with light search cost. The code is available at https://github.com/Euphoria16/Shapley-NAS.git | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 303,695 |
2406.00002 | VR Isle Academy: A VR Digital Twin Approach for Robotic Surgical Skill
Development | Contemporary progress in the field of robotics, marked by improved efficiency and stability, has paved the way for the global adoption of surgical robotic systems (SRS). While these systems enhance surgeons' skills by offering a more accurate and less invasive approach to operations, they come at a considerable cost. Moreover, SRS components often involve heavy machinery, making the training process challenging due to limited access to such equipment. In this paper we introduce a cost-effective way to facilitate training for a simulator of a SRS via a portable, device-agnostic, ultra-realistic simulation with hand tracking and feet tracking support. Error assessment is accessible both in real time and offline, which enables the monitoring and tracking of users' performance. The VR application has been objectively evaluated by several untrained testers, showcasing a significant reduction in error metrics as the number of training sessions increases. This indicates that the proposed VR application, denoted as VR Isle Academy, operates efficiently, improving the robot-controlling skills of the testers in an intuitive and immersive way and reducing the learning curve at minimal cost. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 459,641
1911.00483 | Explanation by Progressive Exaggeration | As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g. saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually changes the posterior probability from its original class to its negation. These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a "tuning knob" to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 151,828 |
1609.09043 | A Moving Target Approach for Identifying Malicious Sensors in Control
Systems | In this paper, we consider the problem of attack identification in cyber-physical systems (CPS). Attack identification is often critical for the recovery and performance of a CPS that is targeted by malicious entities, allowing defenders to construct algorithms which bypass harmful nodes. Previous work has characterized limitations in the perfect identification of adversarial attacks on deterministic LTI systems. For instance, a system must remain observable after removing any 2q sensors to only identify q attacks. However, the ability for an attacker to create an unidentifiable attack requires knowledge of the system model. In this paper, we aim to limit the adversary's knowledge of the system model with the goal of accurately identifying all sensor attacks. Such a scheme will allow systems to withstand larger attacks or system operators to allocate fewer sensing devices to a control system while maintaining security. We explore how changing the dynamics of the system as a function of time allows us to actively identify malicious/faulty sensors in a control system. We discuss the design of time varying system matrices to meet this goal and evaluate performance in deterministic and stochastic systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 61,666 |
2107.07576 | Real-Time Face Recognition System for Remote Employee Tracking | During the COVID-19 pandemic, most human-to-human interactions have been stopped. To mitigate the spread of the deadly coronavirus, many offices took the initiative so that employees could work from home. However, tracking the employees and finding out whether they are really performing what they were supposed to do turned out to be a serious challenge for all the companies and organizations facilitating "Work From Home". To deal with the challenge effectively, we came up with a solution to track the employees with face recognition. We have been testing this system experimentally for our office. To train the face recognition module, we used FaceNet with KNN using the Labeled Faces in the Wild (LFW) dataset and achieved 97.8% accuracy. We integrated the trained model into our central system, where the employees log their time. In this paper, we discuss in brief the system we have been experimenting with and the pros and cons of the system. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 246,461
2312.02901 | Concept Drift Adaptation in Text Stream Mining Settings: A Systematic
Review | Society produces textual data online in several ways, e.g., via reviews and social media posts. Therefore, numerous researchers have been working on discovering patterns in textual data that can indicate people's opinions, interests, etc. Most tasks regarding natural language processing are addressed using traditional machine learning methods and static datasets. This setting can lead to several problems, e.g., outdated datasets and models, which degrade in performance over time. This is particularly true regarding concept drift, in which the data distribution changes over time. Furthermore, text streaming scenarios also exhibit further challenges, such as the high speed at which data arrives over time. Models for stream scenarios must adhere to the aforementioned constraints while learning from the stream, thus storing texts for limited periods and consuming low memory. This study presents a systematic literature review regarding concept drift adaptation in text stream scenarios. Considering well-defined criteria, we selected 48 papers published between 2018 and August 2024 to unravel aspects such as text drift categories, detection types, model update mechanisms, stream mining tasks addressed, and text representation methods and their update mechanisms. Furthermore, we discussed drift visualization and simulation and listed real-world datasets used in the selected papers. Finally, we brought forward a discussion on existing works in the area, also highlighting open challenges and future research directions for the community. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 413,040
2105.02851 | Algorithmic Ethics: Formalization and Verification of Autonomous Vehicle
Obligations | We develop a formal framework for automatic reasoning about the obligations of autonomous cyber-physical systems, including their social and ethical obligations. Obligations, permissions and prohibitions are distinct from a system's mission, and are a necessary part of specifying advanced, adaptive AI-equipped systems. They need a dedicated deontic logic of obligations to formalize them. Most existing deontic logics lack corresponding algorithms and system models that permit automatic verification. We demonstrate how a particular deontic logic, Dominance Act Utilitarianism (DAU), is a suitable starting point for formalizing the obligations of autonomous systems like self-driving cars. We demonstrate its usefulness by formalizing a subset of Responsibility-Sensitive Safety (RSS) in DAU; RSS is an industrial proposal for how self-driving cars should and should not behave in traffic. We show that certain logical consequences of RSS are undesirable, indicating a need to further refine the proposal. We also demonstrate how obligations can change over time, which is necessary for long-term autonomy. We then demonstrate a model-checking algorithm for DAU formulas on weighted transition systems, and illustrate it by model-checking obligations of a self-driving car controller from the literature. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 233,952 |
2009.08547 | Practical Dynamic SC-Flip Polar Decoders: Algorithm and Implementation | SC-Flip (SCF) is a low-complexity polar code decoding algorithm with improved performance, and is an alternative to high-complexity CRC-aided SC-List (CA-SCL) decoding. However, the performance improvement of SCF is limited since it can correct up to only one channel error ($\omega=1$). The Dynamic SCF (DSCF) algorithm tackles this problem by addressing multiple errors ($\omega \geq 1$), but it requires logarithmic and exponential computations, which make it infeasible for practical applications. In this work, we propose simplifications and approximations to make DSCF practically feasible. First, we reduce the transcendental computations of DSCF decoding to a constant approximation. Then, we show how to incorporate special node decoding techniques into the DSCF algorithm, creating Fast-DSCF decoding. Next, we reduce the search span within the special nodes to further reduce the computational complexity. We then describe a hardware architecture for the Fast-DSCF decoder, in which we introduce additional simplifications such as metric normalization and sorter length reduction. All the simplifications and approximations are shown to have minimal impact on the error-correction performance, and the reported Fast-DSCF decoder is the only SCF-based architecture that can correct multiple errors. The Fast-DSCF decoders synthesized using TSMC $65$nm CMOS technology can achieve $1.25$, $1.06$ and $0.93$ Gbps throughput for $\omega \in \{1,2,3\}$, respectively. Compared to the state-of-the-art fast CA-SCL decoders with equivalent FER performance, the proposed decoders are up to $5.8\times$ more area-efficient. Finally, observations of energy dissipation indicate that Fast-DSCF is more energy-efficient than its CA-SCL-based counterparts. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 196,266
2208.14687 | TRUST: An Accurate and End-to-End Table structure Recognizer Using
Splitting-based Transformers | Table structure recognition is a crucial part of the document image analysis domain. Its difficulty lies in the need to parse the physical coordinates and logical indices of each cell at the same time. However, existing methods struggle to achieve both of these goals, especially when the table splitting lines are blurred or tilted. In this paper, we propose an accurate and end-to-end transformer-based table structure recognition method, referred to as TRUST. Transformers are suitable for table structure recognition because of their global computations, perfect memory, and parallel computation. By introducing a novel Transformer-based Query-based Splitting Module and a Vertex-based Merging Module, the table structure recognition problem is decoupled into two joint optimization sub-tasks: multi-oriented table row/column splitting and table grid merging. The Query-based Splitting Module learns strong context information from long dependencies via Transformer networks, accurately predicts the multi-oriented table row/column separators, and obtains the basic grids of the table accordingly. The Vertex-based Merging Module is capable of aggregating local contextual information between adjacent basic grids, providing the ability to accurately merge basic grids that belong to the same spanning cell. We conduct experiments on several popular benchmarks including PubTabNet and SynthTable, and our method achieves new state-of-the-art results. In particular, TRUST runs at 10 FPS on PubTabNet, surpassing the previous methods by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 315,394
2502.09173 | Two-Stage Representation Learning for Analyzing Movement Behavior
Dynamics in People Living with Dementia | In remote healthcare monitoring, time series representation learning reveals critical patient behavior patterns from high-frequency data. This study analyzes home activity data from individuals living with dementia by proposing a two-stage, self-supervised learning approach tailored to uncover low-rank structures. The first stage converts time-series activities into text sequences encoded by a pre-trained language model, providing a rich, high-dimensional latent state space using a PageRank-based method. This PageRank vector captures latent state transitions, effectively compressing complex behaviour data into a succinct form that enhances interpretability. This low-rank representation not only enhances model interpretability but also facilitates clustering and transition analysis, revealing key behavioral patterns correlated with clinical metrics such as MMSE and ADAS-COG scores. Our findings demonstrate the framework's potential in supporting cognitive status prediction, personalized care interventions, and large-scale health monitoring. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,336
2107.07455 | Shifts: A Dataset of Real Distributional Shift Across Multiple
Large-Scale Tasks | There has been significant research done on developing methods for improving robustness to distributional shift and uncertainty estimation. In contrast, only limited work has examined developing standard datasets and benchmarks for assessing these approaches. Additionally, most work on uncertainty estimation and robustness has developed new techniques based on small-scale regression or image classification tasks. However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction. Thus, given the current state of the field, a standardized large-scale dataset of tasks across a range of modalities affected by distributional shifts is necessary. This will enable researchers to meaningfully evaluate the plethora of recently developed uncertainty quantification methods, as well as assessment criteria and state-of-the-art baselines. In this work, we propose the Shifts Dataset for evaluation of uncertainty estimates and robustness to distributional shift. The dataset, which has been collected from industrial sources and services, is composed of three tasks, with each corresponding to a particular data modality: tabular weather prediction, machine translation, and self-driving car (SDC) vehicle motion prediction. All of these data modalities and tasks are affected by real, "in-the-wild" distributional shifts and pose interesting challenges with respect to uncertainty estimation. In this work we provide a description of the dataset and baseline results for all tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 246,430 |
1506.04319 | Generating and Exploring S-Box Multivariate Quadratic Equation Systems
with SageMath | A new method to derive Multivariate Quadratic equation systems (MQ) for the input and output bit variables of a cryptographic S-box from its algebraic expressions with the aid of the computer mathematics software system SageMath is presented. We also consolidate the deficiencies of previously presented MQ metrics, which are supposed to quantify the resistance of S-boxes against algebraic attacks. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 44,150
1810.12085 | Extractive Summarization of EHR Discharge Notes | Patient summarization is essential for clinicians to provide coordinated care and practice effective communication. Automated summarization has the potential to save time, standardize notes, aid clinical decision making, and reduce medical errors. Here we provide an upper bound on extractive summarization of discharge notes and develop an LSTM model to sequentially label topics of history of present illness notes. We achieve an F1 score of 0.876, which indicates that this model can be employed to create a dataset for evaluation of extractive summarization methods. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 111,678 |
2409.12450 | Domain Generalization for Endoscopic Image Segmentation by Disentangling
Style-Content Information and SuperPixel Consistency | Frequent monitoring is necessary to stratify individuals based on their likelihood of developing gastrointestinal (GI) cancer precursors. In clinical practice, white-light imaging (WLI) and complementary modalities such as narrow-band imaging (NBI) and fluorescence imaging are used to assess risk areas. However, conventional deep learning (DL) models show degraded performance due to the domain gap when a model is trained on one modality and tested on a different one. In our earlier approach, we used a superpixel-based method referred to as "SUPRA" to effectively learn domain-invariant information using color and space distances to generate groups of pixels. One of the main limitations of this earlier work is that the aggregation does not exploit structural information, making it suboptimal for segmentation tasks, especially for polyps and heterogeneous color distributions. Therefore, in this work, we propose an approach for style-content disentanglement using instance normalization and instance selective whitening (ISW) for improved domain generalization when combined with SUPRA. We evaluate our approach on two datasets: EndoUDA Barrett's Esophagus and EndoUDA polyps, and compare its performance with three state-of-the-art (SOTA) methods. Our findings demonstrate a notable enhancement in performance compared to both baseline and SOTA methods across the target domain data. Specifically, our approach exhibited improvements of 14%, 10%, 8%, and 18% over the baseline and three SOTA methods on the polyp dataset. Additionally, it surpassed the second-best method (EndoUDA) on the Barrett's Esophagus dataset by nearly 2%. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,597 |
2402.00646 | Cell-Free Massive MIMO SWIPT with Beyond Diagonal Reconfigurable
Intelligent Surfaces | This paper investigates the integration of beyond-diagonal reconfigurable intelligent surfaces (BD-RISs) into cell-free massive multiple-input multiple-output (CF-mMIMO) systems, focusing on applications involving simultaneous wireless information and power transfer (SWIPT). The system supports concurrently two user groups: information users (IUs) and energy users (EUs). A BD-RIS is employed to enhance the wireless power transfer (WPT) directed towards the EUs. To comprehensively evaluate the system's performance, we present an analytical framework for the spectral efficiency (SE) of IUs and the average harvested energy (HE) of EUs in the presence of spatial correlation among the BD-RIS elements and for a non-linear energy harvesting circuit. Our findings offer important insights into the transformative potential of BD-RIS, setting the stage for the development of more efficient and effective SWIPT networks. Finally, incorporating a heuristic scattering matrix design at the BD-RIS results in a substantial improvement compared to the scenario with random scattering matrix design. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 425,667 |
2311.15875 | Nodal Hydraulic Head Estimation through Unscented Kalman Filter for
Data-driven Leak Localization in Water Networks | In this paper, we present a nodal hydraulic head estimation methodology for water distribution networks (WDN) based on an Unscented Kalman Filter (UKF) scheme with application to leak localization. The UKF refines an initial estimation of the hydraulic state by considering the prediction model, as well as available pressure and demand measurements. To this end, it provides customized prediction and data assimilation steps. Additionally, the method is enhanced by dynamically updating the prediction function weight matrices. Performance testing on the Modena benchmark under realistic conditions demonstrates the method's effectiveness in enhancing state estimation and data-driven leak localization. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 410,669 |
1909.00429 | An Improved Neural Baseline for Temporal Relation Extraction | Determining temporal relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty to generate large amounts of high-quality training data. Consequently, neural approaches have not been widely used on it, or showed only moderate improvements. This paper proposes a new neural system that achieves about 10% absolute improvement in accuracy over the previous best system (25% error reduction) on two benchmark datasets. The proposed system is trained on the state-of-the-art MATRES dataset and applies contextualized word embeddings, a Siamese encoder of a temporal common sense knowledge base, and global inference via integer linear programming (ILP). We suggest that the new approach could serve as a strong baseline for future research in this area. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 143,633 |
2303.14128 | The crime of being poor | The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable. NGOs and international organizations claim that the poor are blamed for their situation, are more often associated with criminal offenses than the wealthy strata of society and even incur criminal offenses simply as a result of being poor. While no evidence has been found in the literature that correlates poverty and overall criminality rates, this paper offers evidence of a collective belief that associates both concepts. This brief report measures the societal bias that correlates criminality with the poor, as compared to the rich, by using Natural Language Processing (NLP) techniques in Twitter. The paper quantifies the level of crime-poverty bias in a panel of eight different English-speaking countries. The regional differences in the association between crime and poverty cannot be justified based on different levels of inequality or unemployment, which the literature correlates to property crimes. The variation in the observed rates of crime-poverty bias for different geographic locations could be influenced by cultural factors and the tendency to overestimate the equality of opportunities and social mobility in specific countries. These results have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the poor but on society as a whole. Acting on the collective bias against the poor would facilitate the approval of poverty reduction policies, as well as the restoration of the dignity of the persons affected. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 353,962 |
2103.11405 | Non-Autoregressive Translation by Learning Target Categorical Codes | Non-autoregressive Transformer is a promising text generation model. However, current non-autoregressive models still fall behind their autoregressive counterparts in translation quality. We attribute this accuracy gap to the lack of dependency modeling among decoder inputs. In this paper, we propose CNAT, which learns implicitly categorical codes as latent variables into the non-autoregressive decoding. The interaction among these categorical codes remedies the missing dependencies and improves the model capacity. Experiment results show that our model achieves comparable or better performance in machine translation tasks, compared with several strong baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 225,799 |
1206.4639 | Adaptive Regularization for Weight Matrices | Algorithms for learning distributions over weight-vectors, such as AROW, were recently shown empirically to achieve state-of-the-art performance on various problems, with strong theoretical guarantees. Extending these algorithms to matrix models poses challenges, since the number of free parameters in the covariance of the distribution scales as $n^4$ with the dimension $n$ of the matrix, and $n$ tends to be large in real applications. We describe, analyze and experiment with two new algorithms for learning distributions over matrix models. Our first algorithm maintains a diagonal covariance over the parameters and can handle large covariance matrices. The second algorithm factors the covariance to capture inter-feature correlations while keeping the number of parameters linear in the size of the original matrix. We analyze both algorithms in the mistake bound model and show a superior precision performance of our approach over other algorithms in two tasks: retrieving similar images, and ranking similar documents. The factored algorithm is shown to attain a faster convergence rate. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,690
1308.1482 | Increasing Robustness of the Anesthesia Process from Difference
Patient's Delay Using a State-Space Model Predictive Controller | The process of anesthesia is nonlinear with time delay, and there are also some constraints which have to be considered in calculating the administered drug dosage. We present an Extended Kalman Filter (EKF) observer to estimate drug concentration in the patient's body and use this estimation in a state-space-based Model Predictive Controller (MPC) for controlling the depth of anesthesia. The Bispectral Index (BIS) is used as a patient consciousness index and propofol as an anesthetic agent. To evaluate the performance of the proposed controller, the results have been compared with those of an MPC controller. The results demonstrate that a state-space MPC including the EKF estimator for controlling the anesthesia process can significantly increase robustness against patients' delay deviations in comparison with the MPC. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 26,306
1712.01707 | IEOPF: An Active Contour Model for Image Segmentation with
Inhomogeneities Estimated by Orthogonal Primary Functions | Image segmentation is still an open problem, especially when intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, a bias correction embedded level set model is proposed where Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. Similar to popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colourful images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 86,160
0710.2674 | Linguistic Information Energy | In this treatment a text is considered to be a series of word impulses which are read at a constant rate. The brain then assembles these units of information into higher units of meaning. A classical systems approach is used to model an initial part of this assembly process. The concepts of linguistic system response, information energy, and ordering energy are defined and analyzed. Finally, as a demonstration, information energy is used to estimate the publication dates of a series of texts and the similarity of a set of texts. | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | 784 |
2305.15020 | An Efficient Multilingual Language Model Compression through Vocabulary
Trimming | Multilingual language models (LMs) have become a powerful tool in NLP, especially for non-English languages. Nevertheless, the model parameters of multilingual LMs remain large due to the larger embedding matrix of the vocabulary covering tokens in different languages. In contrast, monolingual LMs can be trained in a target language with a language-specific vocabulary only, but this requires a large budget and the availability of reliable corpora to achieve a high-quality LM from scratch. In this paper, we propose vocabulary trimming (VT), a method to reduce a multilingual LM vocabulary to a target language by deleting irrelevant tokens from its vocabulary. In theory, VT can compress any existing multilingual LM to build monolingual LMs in any language covered by the multilingual LM. In our experiments, we show that VT can retain the original performance of the multilingual LM while being smaller in size (in general, around 50% of the original vocabulary size is enough) than the original multilingual LM. The evaluation is performed over four NLP tasks (two generative and two classification tasks) on four widely used multilingual LMs in seven languages. Finally, we show that this methodology can keep the best of both the monolingual and multilingual worlds by keeping a small size as monolingual models without the need for specifically retraining them, and even limiting potentially harmful social biases. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 367,399
2206.06800 | Artificial Neural Network For Transient Stability Assessment: A Review | The integration of large-scale renewable energy sources and increasing uncertainty have drastically changed the dynamics of power systems and have consequently brought various challenges. Rapid transient stability assessment of modern power systems is a vital requirement for accurate power system planning and operation. Conventional methods are unable to fulfil this requirement; therefore, novel approaches are required in this regard. Machine learning approaches such as artificial neural networks can play a significant role here. Therefore, this paper aims to review the application of artificial neural networks for transient stability assessment of power systems. It is believed that this work will provide a solid foundation for researchers in the domain of machine learning applications to power system security and stability. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 302,504
2403.10840 | MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere
Image aided Generalizable Neural Radiance Field | Panoramic observation using fisheye cameras is significant in virtual reality (VR) and robot perception. However, panoramic images synthesized by traditional methods lack depth information and can only provide three degrees-of-freedom (3DoF) rotation rendering in VR applications. To fully preserve and exploit the parallax information within the original fisheye cameras, we introduce MSI-NeRF, which combines deep learning omnidirectional depth estimation and novel view synthesis. We construct a multi-sphere image as a cost volume through feature extraction and warping of the input images. We further build an implicit radiance field using spatial points and interpolated 3D feature vectors as input, which can simultaneously realize omnidirectional depth estimation and 6DoF view synthesis. Leveraging the knowledge from depth estimation task, our method can learn scene appearance by source view supervision only. It does not require novel target views and can be trained conveniently on existing panorama depth estimation datasets. Our network has the generalization ability to reconstruct unknown scenes efficiently using only four images. Experimental results show that our method outperforms existing methods in both depth estimation and novel view synthesis tasks. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 438,390 |
1805.02375 | Consistency and differences between centrality measures across distinct
classes of networks | The roles of different nodes within a network are often understood through centrality analysis, which aims to quantify the capacity of a node to influence, or be influenced by, other nodes via its connection topology. Many different centrality measures have been proposed, but the degree to which they offer unique information, and thus whether it is advantageous to use multiple centrality measures to define node roles, is unclear. Here we calculate correlations between 17 different centrality measures across 212 diverse real-world networks, examine how these correlations relate to variations in network density and global topology, and investigate whether nodes can be clustered into distinct classes according to their centrality profiles. We find that centrality measures are generally positively correlated with each other, that the strength of these correlations varies across networks, and that network modularity plays a key role in driving these cross-network variations. Data-driven clustering of nodes based on centrality profiles can distinguish different roles, including topological cores of highly central nodes and peripheries of less central nodes. Our findings illustrate how network topology shapes the pattern of correlations between centrality measures and demonstrate how a comparative approach to network centrality can inform the interpretation of nodal roles in complex networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 96,852
2406.18586 | Cut-and-Paste with Precision: a Content and Perspective-aware Data
Augmentation for Road Damage Detection | Damage to road pavement can develop into cracks, potholes, spallings, and other issues posing significant challenges to the integrity, safety, and durability of the road structure. Detecting and monitoring the evolution of these damages is crucial for maintaining the condition and structural health of road infrastructure. In recent years, researchers have explored various data-driven methods for image-based damage detection in road monitoring applications. The field gained attention with the introduction of the Road Damage Detection Challenge (RDDC2018), encouraging competition in developing object detectors on street-view images from various countries. Leading teams have demonstrated the effectiveness of ensemble models, mostly based on the YOLO and Faster R-CNN series. Data augmentations have also shown benefits in object detection within the computer vision field, including transformations such as random flipping, cropping, cutting out patches, as well as cut-and-pasting object instances. Applying cut-and-paste augmentation to road damages appears to be a promising approach to increase data diversity. However, the standard cut-and-paste technique, which involves sampling an object instance from a random image and pasting it at a random location onto the target image, has demonstrated limited effectiveness for road damage detection. This method overlooks the location of the road and disregards the difference in perspective between the sampled damage and the target image, resulting in unrealistic augmented images. In this work, we propose an improved Cut-and-Paste augmentation technique that is both content-aware (i.e. considers the true location of the road in the image) and perspective-aware (i.e. takes into account the difference in perspective between the injected damage and the target image). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 468,093
2003.06515 | Evaluation of Cross-View Matching to Improve Ground Vehicle Localization
with Aerial Perception | Cross-view matching refers to the problem of finding the closest match for a given query ground view image to one from a database of aerial images. If the aerial images are geotagged, then the closest matching aerial image can be used to localize the query ground view image. Due to the recent success of deep learning methods, several cross-view matching techniques have been proposed. These approaches perform well for the matching of isolated query images. However, their evaluation over a trajectory is limited. In this paper, we evaluate cross-view matching for the task of localizing a ground vehicle over a longer trajectory. We treat these cross-view matches as sensor measurements that are fused using a particle filter. We evaluate the performance of this method using a city-wide dataset collected in a photorealistic simulation by varying four parameters: height of aerial images, the pitch of the aerial camera mount, FOV of the ground camera, and the methodology of fusing cross-view measurements in the particle filter. We also report the results obtained using our pipeline on a real-world dataset collected using Google Street View and satellite view APIs. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 168,137 |
1901.10028 | Optimal Multiuser Loading in Quantized Massive MIMO under Spatially
Correlated Channels | Low-resolution digital-to-analog converter (DAC) has shown great potential in facilitating cost- and power-efficient implementation of massive multiple-input multiple-output (MIMO) systems. We investigate the performance of a massive MIMO downlink network with low-resolution DACs using regularized zero-forcing (RZF) precoding. It serves multiple receivers equipped with finite-resolution analog-to-digital converters (ADCs). By taking the quantization errors at both the transmitter and receivers into account under spatially correlated channels, the regularization parameter for RZF is optimized with a closed-form solution by applying the asymptotic random matrix theory. The optimal regularization parameter increases linearly with respect to the user loading ratio while independent of the ADC quantization resolution and the channel correlation. Furthermore, asymptotic sum rate performance is characterized and a closed-form expression for the optimal user loading ratio is obtained at low signal-to-noise ratio. The optimal ratio increases with the DAC resolution while it decreases with the ADC resolution. Numerical simulations verify our observations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 119,902 |
1501.00077 | Index Coding with Coded Side-Information | This letter investigates a new class of index coding problems. One sender broadcasts packets to multiple users, each desiring a subset, by exploiting prior knowledge of linear combinations of packets. We refer to this class of problems as index coding with coded side-information. Our aim is to characterize the minimum index code length that the sender needs to transmit to simultaneously satisfy all user requests. We show that the optimal binary vector index code length is equal to the minimum rank (minrank) of a matrix whose elements consist of the sets of desired packet indices and side-information encoding matrices. This is the natural extension of matrix minrank in the presence of coded side-information. Using the derived expression, we propose a greedy randomized algorithm to minimize the rank of the derived matrix. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,955
2105.00163 | Autonomous Reconfigurable Intelligent Surfaces Through Wireless Energy
Harvesting | In this paper, we examine the potential for a reconfigurable intelligent surface (RIS) to be powered by energy harvested from information signals. This feature might be key to reap the benefits of RIS technology's lower power consumption compared to active relays. We first identify the main RIS power-consuming components and then propose an energy harvesting and power consumption model. Furthermore, we formulate and solve the problem of the optimal RIS placement together with the amplitude and phase response adjustment of its elements in order to maximize the signal-to-noise ratio (SNR) while harvesting sufficient energy for its operation. Finally, numerical results validate the autonomous operation potential and reveal the range of power consumption values that enables it. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 233,111 |
1312.4182 | Adaptive Protocols for Interactive Communication | How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of "robust" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for Interactive Communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to $1/3$. When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to $2/3$. Hence, adaptivity circumvents an impossibility result of $1/4$ on the fraction of tolerable noise (Braverman and Rao, 2014). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 29,107 |
2109.00456 | Weakly-Supervised Surface Crack Segmentation by Generating Pseudo-Labels
using Localization with a Classifier and Thresholding | Surface cracks are a common sight on public infrastructure nowadays. Recent work has been addressing this problem by supporting structural maintenance measures using machine learning methods. Those methods are used to segment surface cracks from their background, making them easier to localize. However, a common issue is that to create a well-functioning algorithm, the training data needs to have detailed annotations of pixels that belong to cracks. Our work proposes a weakly supervised approach that leverages a CNN classifier in a novel way to create surface crack pseudo labels. First, we use the classifier to create a rough crack localization map by using its class activation maps and a patch based classification approach and fuse this with a thresholding based approach to segment the mostly darker crack pixels. The classifier assists in suppressing noise from the background regions, which commonly are incorrectly highlighted as cracks by standard thresholding methods. Then, the pseudo labels can be used in an end-to-end approach when training a standard CNN for surface crack segmentation. Our method is shown to yield sufficiently accurate pseudo labels. Those labels, incorporated into segmentation CNN training using multiple recent crack segmentation architectures, achieve comparable performance to fully supervised methods on four popular crack segmentation datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,108 |
2203.07640 | Unsupervised Keyphrase Extraction via Interpretable Neural Networks | Keyphrase extraction aims at automatically extracting a list of "important" phrases representing the key concepts in a document. Prior approaches for unsupervised keyphrase extraction resorted to heuristic notions of phrase importance via embedding clustering or graph centrality, requiring extensive domain expertise. Our work presents a simple alternative approach which defines keyphrases as document phrases that are salient for predicting the topic of the document. To this end, we propose INSPECT -- an approach that uses self-explaining models for identifying influential keyphrases in a document by measuring the predictive impact of input phrases on the downstream task of the document topic classification. We show that this novel method not only alleviates the need for ad-hoc heuristics but also achieves state-of-the-art results in unsupervised keyphrase extraction in four datasets across two domains: scientific publications and news articles. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 285,497 |
2302.00864 | CLIPood: Generalizing CLIP to Out-of-Distributions | Out-of-distribution (OOD) generalization, where the model needs to handle distribution shifts from training, is a major challenge of machine learning. Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performances. This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks. We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on the unseen test data. To exploit the semantic relations between classes from the text modality, CLIPood introduces a new training objective, margin metric softmax (MMS), with class adaptive margins for fine-tuning. To incorporate both pre-trained zero-shot model and fine-tuned task-adaptive model, CLIPood leverages a new optimization strategy, Beta moving average (BMA), to maintain a temporal ensemble weighted by Beta distribution. Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 343,378 |
2402.09989 | LLMs as Bridges: Reformulating Grounded Multimodal Named Entity
Recognition | Grounded Multimodal Named Entity Recognition (GMNER) is a nascent multimodal task that aims to identify named entities, entity types and their corresponding visual regions. GMNER task exhibits two challenging properties: 1) The weak correlation between image-text pairs in social media results in a significant portion of named entities being ungroundable. 2) There exists a distinction between coarse-grained referring expressions commonly used in similar tasks (e.g., phrase localization, referring expression comprehension) and fine-grained named entities. In this paper, we propose RiVEG, a unified framework that reformulates GMNER into a joint MNER-VE-VG task by leveraging large language models (LLMs) as a connecting bridge. This reformulation brings two benefits: 1) It maintains the optimal MNER performance and eliminates the need for employing object detection methods to pre-extract regional features, thereby naturally addressing two major limitations of existing GMNER methods. 2) The introduction of entity expansion expression and Visual Entailment (VE) module unifies Visual Grounding (VG) and Entity Grounding (EG). It enables RiVEG to effortlessly inherit the Visual Entailment and Visual Grounding capabilities of any current or prospective multimodal pretraining models. Extensive experiments demonstrate that RiVEG outperforms state-of-the-art methods on the existing GMNER dataset and achieves absolute leads of 10.65%, 6.21%, and 8.83% in all three subtasks. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 429,759 |