Dataset schema:
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0–541k)
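The schema above is a multi-label layout: each record carries one boolean flag per arXiv category column, and a record's categories are exactly the columns whose flag is true. A minimal sketch of recovering the label set from a record, assuming plain Python dicts (the names `LABELS`, `active_labels`, and the sample record are illustrative, not part of the dataset):

```python
# Category columns in the schema's order; each is a boolean flag per record.
LABELS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
          "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
          "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other"]

def active_labels(record):
    """Return the category names whose flag is set for this record."""
    return [lab for lab in LABELS if record.get(lab, False)]

# Hypothetical record mirroring the first row below (cs.LG and cs.CV set).
record = {"id": "1811.12019",
          "__index_level_0__": 114914,
          **{lab: False for lab in LABELS}}
record["cs.LG"] = True
record["cs.CV"] = True

print(active_labels(record))  # prints ['cs.LG', 'cs.CV']
```

Because the comprehension walks `LABELS` in schema order, the recovered label list is deterministic regardless of the record's key order.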
1811.12019
Large-Scale Distributed Second-Order Optimization Using Kronecker-Factored Approximate Curvature for Deep Convolutional Neural Networks
Large-scale distributed training of deep neural networks suffers from the generalization gap caused by the increase in the effective mini-batch size. Previous approaches try to solve this problem by varying the learning rate and batch size over epochs and layers, or through ad hoc modifications of batch normalization. We propose an alternative approach using a second-order optimization method that shows similar generalization capability to first-order methods, but converges faster and can handle larger mini-batches. To test our method on a benchmark where highly optimized first-order methods are available as references, we train ResNet-50 on ImageNet. We converged to 75% Top-1 validation accuracy in 35 epochs for mini-batch sizes under 16,384, and achieved 75% even with a mini-batch size of 131,072, which took only 978 iterations.
labels: cs.LG, cs.CV
__index_level_0__: 114,914
2105.04817
Fibrational Initial Algebra-Final Coalgebra Coincidence over Initial Algebras: Turning Verification Witnesses Upside Down
The coincidence between initial algebras (IAs) and final coalgebras (FCs) is a phenomenon that underpins various important results in theoretical computer science. In this paper, we identify a general fibrational condition for the IA-FC coincidence, namely in the fiber over an initial algebra in the base category. Identifying (co)algebras in a fiber as (co)inductive predicates, our fibrational IA-FC coincidence allows one to use coinductive witnesses (such as invariants) for verifying inductive properties (such as liveness). Our general fibrational theory features the technical condition of stability of chain colimits; we extend the framework to the presence of a monadic effect, too, restricting to fibrations of complete lattice-valued predicates. Practical benefits of our categorical theory are exemplified by new "upside-down" witness notions for three verification problems: probabilistic liveness, and acceptance and model-checking with respect to bottom-up tree automata.
labels: cs.CL, Other
__index_level_0__: 234,627
2502.10443
One Class Restricted Kernel Machines
Restricted kernel machines (RKMs) have demonstrated a significant impact in enhancing generalization ability in the field of machine learning. Recent studies have introduced various methods within the RKM framework, combining kernel functions with the least squares support vector machine (LSSVM) in a manner similar to the energy function of restricted Boltzmann machines (RBMs), so that better performance can be achieved. However, RKM's efficacy can be compromised by the presence of outliers and other forms of contamination within the dataset. These anomalies can skew the learning process, leading to less accurate and reliable outcomes. To address this critical issue and to ensure the robustness of the model, we propose the novel one-class RKM (OCRKM). In the framework of OCRKM, we employ an energy function akin to that of the RBM, which integrates both visible and hidden variables in a nonprobabilistic setting. The formulation of the proposed OCRKM facilitates the seamless integration of the one-class classification method with the RKM, enhancing its capability to detect outliers and anomalies effectively. The proposed OCRKM model is evaluated on UCI benchmark datasets. Experimental findings and statistical analyses consistently emphasize the superior generalization capabilities of the proposed OCRKM model over baseline models across all scenarios.
labels: cs.LG
__index_level_0__: 533,884
2501.06224
Detection, Retrieval, and Explanation Unified: A Violence Detection System Based on Knowledge Graphs and GAT
Recently, violence detection systems developed using unified multimodal models have achieved significant success and attracted widespread attention. However, most of these systems face two critical challenges: the lack of interpretability as black-box models and limited functionality, offering only classification or retrieval capabilities. To address these challenges, this paper proposes a novel interpretable violence detection system, termed the Three-in-One (TIO) System. The TIO system integrates knowledge graphs (KG) and graph attention networks (GAT) to provide three core functionalities: detection, retrieval, and explanation. Specifically, the system processes each video frame along with text descriptions generated by a large language model (LLM) for videos containing potential violent behavior. It employs ImageBind to generate high-dimensional embeddings for constructing a knowledge graph, uses GAT for reasoning, and applies lightweight time series modules to extract video embedding features. The final step connects a classifier and retriever for multi-functional outputs. The interpretability of KG enables the system to verify the reasoning process behind each output. Additionally, the paper introduces several lightweight methods to reduce the resource consumption of the TIO system and enhance its efficiency. Extensive experiments conducted on the XD-Violence and UCF-Crime datasets validate the effectiveness of the proposed system. A case study further reveals an intriguing phenomenon: as the number of bystanders increases, the occurrence of violent behavior tends to decrease.
labels: cs.AI, cs.CV
__index_level_0__: 523,886
2312.12476
DSAF: A Dual-Stage Adaptive Framework for Numerical Weather Prediction Downscaling
While widely recognized as one of the most substantial weather forecasting methodologies, Numerical Weather Prediction (NWP) usually suffers from relatively coarse resolution and inevitable bias due to tempo-spatial discretization, physical parametrization process, and computation limitation. With the roaring growth of deep learning-based techniques, we propose the Dual-Stage Adaptive Framework (DSAF), a novel framework to address regional NWP downscaling and bias correction tasks. DSAF uniquely incorporates adaptive elements in its design to ensure a flexible response to evolving weather conditions. Specifically, NWP downscaling and correction are well-decoupled in the framework and can be applied independently, which strategically guides the optimization trajectory of the model. Utilizing a multi-task learning mechanism and an uncertainty-weighted loss function, DSAF facilitates balanced training across various weather factors. Additionally, our specifically designed attention-centric learnable module effectively integrates geographic information, proficiently managing complex interrelationships. Experimental validation on the ECMWF operational forecast (HRES) and reanalysis (ERA5) archive demonstrates DSAF's superior performance over existing state-of-the-art models and shows substantial improvements when existing models are augmented using our proposed modules. Code is publicly available at https://github.com/pengwei07/DSAF.
labels: cs.LG
__index_level_0__: 416,978
2404.11613
InFusion: Inpainting 3D Gaussians via Learning Depth Completion from Diffusion Prior
3D Gaussians have recently emerged as an efficient representation for novel view synthesis. This work studies their editability with a particular focus on the inpainting task, which aims to supplement an incomplete set of 3D Gaussians with additional points for visually harmonious rendering. Compared to 2D inpainting, the crux of inpainting 3D Gaussians is to figure out the rendering-relevant properties of the introduced points, whose optimization largely benefits from their initial 3D positions. To this end, we propose to guide the point initialization with an image-conditioned depth completion model, which learns to directly restore the depth map based on the observed image. Such a design allows our model to fill in depth values at an aligned scale with the original depth, and also to harness the strong generalizability of a large-scale diffusion prior. Thanks to the more accurate depth completion, our approach, dubbed InFusion, surpasses existing alternatives with sufficiently better fidelity and efficiency under various complex scenarios. We further demonstrate the effectiveness of InFusion with several practical applications, such as inpainting with user-specific texture or with novel object insertion.
labels: cs.CV
__index_level_0__: 447,557
1812.10235
A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. Multiple deep learning based models have demonstrated good results on these tasks. The most effective algorithms are based on the structures of sequence to sequence models (or "encoder-decoder" models), and generate the intents and semantic tags either using separate models or a joint model. Most of the previous studies, however, either treat intent detection and slot filling as two separate parallel tasks, or use a sequence to sequence model to generate both semantic tags and the intent. Most of these approaches use one (joint) NN based model (including the encoder-decoder structure) to model the two tasks, and hence may not fully take advantage of the cross-impact between them. In this paper, new Bi-model based RNN semantic frame parsing network structures are designed to perform the intent detection and slot filling tasks jointly, by considering their cross-impact on each other using two correlated bidirectional LSTMs (BLSTMs). Our Bi-model structure with a decoder achieves state-of-the-art results on the benchmark ATIS data, with about 0.5$\%$ intent accuracy improvement and 0.9$\%$ slot filling improvement.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 117,322
1802.00750
Optimal probabilistic polynomial time compression and the Slepian-Wolf theorem: tighter version and simple proofs
We simplify the proofs of the two results in Marius Zimand's paper "Kolmogorov complexity version of Slepian-Wolf coding" (Proceedings of STOC 2017, pp. 22--32). The first is a universal polynomial time compression algorithm: on input $\varepsilon > 0$, a number $k$ and a string $x$, it computes in polynomial time, with probability $1-\varepsilon$, a program that outputs $x$ and has length $k + O(\log^2 (|x|/\varepsilon))$, provided that there exists such a program of length at most $k$. The second result is a distributed compression algorithm, in which several parties each send some string to a common receiver. Marius Zimand proved a variant of the Slepian-Wolf theorem using Kolmogorov complexity (instead of Shannon entropy). With our simpler proof we improve the parameters of Zimand's result.
labels: cs.IT
__index_level_0__: 89,464
2202.11838
Explanatory Paradigms in Neural Networks
In this article, we present a leap-forward expansion to the study of explainability in neural networks by considering explanations as answers to abstract reasoning-based questions. With $P$ as the prediction from a neural network, these questions are `Why P?', `What if not P?', and `Why P, rather than Q?' for a given contrast prediction $Q$. The answers to these questions are observed correlations, observed counterfactuals, and observed contrastive explanations, respectively. Together, these explanations constitute the abductive reasoning scheme. We term the three explanatory schemes observed explanatory paradigms. The term observed refers to the specific case of post-hoc explainability, when an explanatory technique explains the decision $P$ after a trained neural network has made the decision $P$. The primary advantage of viewing explanations through the lens of abductive reasoning-based questions is that explanations can be used as reasons while making decisions. The post-hoc field of explainability, which previously only justified decisions, becomes active by being involved in the decision-making process and providing limited, but relevant and contextual, interventions. The contributions of this article are: ($i$) realizing explanations as reasoning paradigms, ($ii$) providing a probabilistic definition of observed explanations and their completeness, ($iii$) creating a taxonomy for the evaluation of explanations, ($iv$) positioning gradient-based complete explainability's replicability and reproducibility across multiple applications and data modalities, and ($v$) code repositories, publicly available at https://github.com/olivesgatech/Explanatory-Paradigms.
labels: cs.LG, cs.CV
__index_level_0__: 282,017
1909.05159
On-line collision avoidance for collaborative robot manipulators by adjusting off-line generated paths: An industrial use case
Human-robot collision avoidance is key in collaborative robotics and in the framework of Industry 4.0. It plays an important role in achieving safety criteria while having humans and machines working side-by-side in unstructured and time-varying environments. This study introduces the subject of a manipulator's on-line collision avoidance into a real industrial application implementing typical sensors and a commonly used collaborative industrial manipulator, the KUKA iiwa. In the proposed methodology, the human co-worker and the robot are represented by geometric primitives (capsules). The minimum distance and relative velocity between them are calculated; when humans or obstacles are nearby, the concept of hypothetical repulsion and attraction vectors is used. By coupling this concept with a mathematical representation of the robot's kinematics, task-level control with collision avoidance capability is achieved. Consequently, the off-line generated nominal path of the industrial task is modified on-the-fly so that the robot is able to avoid collision with the co-worker safely while still fulfilling the industrial operation. To guarantee motion continuity when switching between different tasks, the notion of repulsion-vector-reshaping is introduced. Tests on an assembly robotic cell in the automotive industry show that the robot moves smoothly and avoids collisions successfully by adjusting the off-line generated nominal paths.
labels: cs.RO
__index_level_0__: 145,016
2307.02932
When No-Rejection Learning is Consistent for Regression with Rejection
Learning with rejection has been a prototypical model for studying the human-AI interaction on prediction tasks. Upon the arrival of a sample instance, the model first uses a rejector to decide whether to accept the sample and use the AI predictor to make a prediction, or to reject and defer the sample to humans. Learning such a model changes the structure of the original loss function and often results in undesirable non-convexity and inconsistency issues. For the classification with rejection problem, several works develop consistent surrogate losses for the joint learning of the predictor and the rejector, while there have been fewer works for the regression counterpart. This paper studies the regression with rejection (RwR) problem and investigates a no-rejection learning strategy that uses all the data to learn the predictor. We first establish consistency for such a strategy under the weak realizability condition. Then, for the case without weak realizability, we show that the excess risk can also be upper bounded by the sum of two parts: the prediction error and the calibration error. Lastly, we demonstrate the advantage of the proposed learning strategy with empirical evidence.
labels: cs.LG
__index_level_0__: 377,869
2009.02285
Flow Field Reconstructions with GANs based on Radial Basis Functions
Nonlinear sparse data regression and generation have been a long-term challenge, with flow field reconstruction as a typical example. The huge computational cost of computational fluid dynamics (CFD) makes large-scale CFD data production very expensive, which calls for cheaper alternatives. Traditional reduced-order models (ROMs) were promising, but they cannot generate the large amounts of full-domain flow field data (FFD) needed for high-precision flow field reconstructions. Motivated by the problems of existing approaches and inspired by the success of generative adversarial networks (GANs) in the field of computer vision, we prove an optimal discriminator theorem: the optimal discriminator of a GAN is a radial basis function neural network (RBFNN) when dealing with nonlinear sparse FFD regression and generation. Based on this theorem, two radial basis function-based GANs (RBF-GAN and RBFC-GAN), for regression and generation purposes, are proposed. Three different datasets are used to verify the feasibility of our models. The results show that the performance of the RBF-GAN and the RBFC-GAN is better than that of GANs/cGANs in terms of both the mean square error (MSE) and the mean square percentage error (MSPE). Moreover, compared with GANs/cGANs, the stability of the RBF-GAN and the RBFC-GAN improves by 34.62% and 72.31%, respectively. Consequently, our proposed models can be used to generate full-domain FFD from limited and sparse datasets, meeting the requirement of high-precision flow field reconstructions.
labels: cs.LG
__index_level_0__: 194,509
2412.09240
VLMs meet UDA: Boosting Transferability of Open Vocabulary Segmentation with Unsupervised Domain Adaptation
Segmentation models are typically constrained by the categories defined during training. To address this, researchers have explored two independent approaches: adapting Vision-Language Models (VLMs) and leveraging synthetic data. However, VLMs often struggle with granularity, failing to disentangle fine-grained concepts, while synthetic data-based methods remain limited by the scope of available datasets. This paper proposes enhancing segmentation accuracy across diverse domains by integrating Vision-Language reasoning with key strategies for Unsupervised Domain Adaptation (UDA). First, we improve the fine-grained segmentation capabilities of VLMs through multi-scale contextual data, robust text embeddings with prompt augmentation, and layer-wise fine-tuning in our proposed Foundational-Retaining Open Vocabulary Semantic Segmentation (FROVSS) framework. Next, we incorporate these enhancements into a UDA framework by employing distillation to stabilize training and cross-domain mixed sampling to boost adaptability without compromising generalization. The resulting UDA-FROVSS framework is the first UDA approach to effectively adapt across domains without requiring shared categories.
labels: cs.AI, cs.CV
__index_level_0__: 516,409
1210.2646
A General Methodology for the Determination of 2D Bodies Elastic Deformation Invariants. Application to the Automatic Identification of Parasites
A novel methodology is introduced here that exploits 2D images of arbitrary elastic body deformation instances, so as to quantify mechano-elastic characteristics that are deformation invariant. Determination of such characteristics allows for developing methods offering an image of the undeformed body. General assumptions about the mechano-elastic properties of the bodies are stated, which lead to two different approaches for obtaining the bodies' deformation invariants. One was developed to spot the deformed body's neutral line and its cross sections, while the other solves the deformation PDEs by performing a set of equivalent image operations on the deformed body images. Both processes may furnish an undeformed version of a body from its deformed image. This was confirmed by obtaining the undeformed shape of deformed parasites, cells (protozoa), fibers and human lips. In addition, the method has been applied to the important problem of automatic parasite classification from microscopic images. To achieve this, we first apply the previous method to straighten the highly deformed parasites, and then we apply a dedicated curve classification method to the straightened parasite contours. It is demonstrated that essentially different deformations of the same parasite give rise to practically the same undeformed shape, thus confirming the consistency of the introduced methodology. Finally, the developed pattern recognition method classifies the unwrapped parasites into 6 families, with an accuracy rate of 97.6%.
labels: cs.AI, cs.CV
__index_level_0__: 19,033
2403.16188
Cross-domain Multi-modal Few-shot Object Detection via Rich Text
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks due to generating richer features. However, existing multi-modal object detection (MM-OD) methods degrade when facing significant domain shift and insufficient samples. We hypothesize that rich text information could more effectively help the model build a knowledge relationship between the vision instance and its language description, and can help mitigate domain shift. Specifically, we study the Cross-Domain few-shot generalization of MM-OD (CDMM-FSOD) and propose a meta-learning based multi-modal few-shot object detection method that utilizes rich text semantic information as an auxiliary modality to achieve domain adaptation in the context of FSOD. Our proposed network contains (i) a multi-modal feature aggregation module that aligns the vision and language support feature embeddings and (ii) a rich text semantic rectify module that utilizes bidirectional text feature generation to reinforce multi-modal feature alignment and thus enhance the model's language understanding capability. We evaluate our model on common standard cross-domain object detection datasets and demonstrate that our approach considerably outperforms existing FSOD methods.
labels: cs.CV
__index_level_0__: 440,917
1812.09809
Writer-Aware CNN for Parsimonious HMM-Based Offline Handwritten Chinese Text Recognition
Recently, the hybrid convolutional neural network hidden Markov model (CNN-HMM) has been introduced for offline handwritten Chinese text recognition (HCTR) and has achieved state-of-the-art performance. However, modeling each of the large vocabulary of Chinese characters with a uniform and fixed number of hidden states requires high memory and computational costs and makes the tens of thousands of HMM state classes confusing. Another key issue of CNN-HMM for HCTR is the diversified writing style, which leads to model strain and a significant performance decline for specific writers. To address these issues, we propose a writer-aware CNN based on parsimonious HMM (WCNN-PHMM). First, PHMM is designed using a data-driven state-tying algorithm to greatly reduce the total number of HMM states, which not only yields a compact CNN by state sharing of the same or similar radicals among different Chinese characters but also improves the recognition accuracy due to the more accurate modeling of tied states and the lower confusion among them. Second, WCNN integrates each convolutional layer with one adaptive layer fed by a writer-dependent vector, namely, the writer code, to extract the irrelevant variability in writer information to improve recognition performance. The parameters of writer-adaptive layers are jointly optimized with other network parameters in the training stage, while a multiple-pass decoding strategy is adopted to learn the writer code and generate recognition results. Validated on the ICDAR 2013 competition of CASIA-HWDB database, the more compact WCNN-PHMM of a 7360-class vocabulary can achieve a relative character error rate (CER) reduction of 16.6% over the conventional CNN-HMM without considering language modeling. By adopting a powerful hybrid language model (N-gram language model and recurrent neural network language model), the CER of WCNN-PHMM is reduced to 3.17%.
labels: cs.CV
__index_level_0__: 117,235
2501.00289
Dual Diffusion for Unified Image Generation and Understanding
Diffusion models have gained tremendous success in text-to-image generation, yet still lag behind with visual understanding tasks, an area dominated by autoregressive vision-language models. We propose a large-scale and fully end-to-end diffusion model for multi-modal understanding and generation that significantly improves on existing diffusion-based multimodal models, and is the first of its kind to support the full suite of vision-language modeling capabilities. Inspired by the multimodal diffusion transformer (MM-DiT) and recent advances in discrete diffusion language modeling, we leverage a cross-modal maximum likelihood estimation framework that simultaneously trains the conditional likelihoods of both images and text jointly under a single loss function, which is back-propagated through both branches of the diffusion transformer. The resulting model is highly flexible and capable of a wide range of tasks including image generation, captioning, and visual question answering. Our model attained competitive performance compared to recent unified image understanding and generation models, demonstrating the potential of multimodal diffusion modeling as a promising alternative to autoregressive next-token prediction models.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 521,608
2402.13108
On the Convergence of Gradient Descent for Large Learning Rates
A vast literature on convergence guarantees for gradient descent and derived methods exists at the moment. However, a simple practical situation remains unexplored: when a fixed step size is used, can we expect gradient descent to converge starting from any initialization? We provide fundamental impossibility results showing that convergence becomes impossible no matter the initialization if the step size gets too big. Looking at the asymptotic value of the gradient norm along the optimization trajectory, we see that there is a sharp transition as the step size crosses a critical value. This has been observed by practitioners, yet the true mechanisms through which this happens remain unclear beyond heuristics. Using results from dynamical systems theory, we provide a proof of this in the case of linear neural networks with a squared loss. We also prove the impossibility of convergence for more general losses without requiring strong assumptions such as Lipschitz continuity for the gradient. We validate our findings through experiments with non-linear networks.
labels: cs.LG
__index_level_0__: 431,116
1702.00716
Analysing Temporal Evolution of Interlingual Wikipedia Article Pairs
Wikipedia articles representing an entity or a topic in different language editions evolve independently within the scope of the language-specific user communities. This can lead to different points of views reflected in the articles, as well as complementary and inconsistent information. An analysis of how the information is propagated across the Wikipedia language editions can provide important insights in the article evolution along the temporal and cultural dimensions and support quality control. To facilitate such analysis, we present MultiWiki - a novel web-based user interface that provides an overview of the similarities and differences across the article pairs originating from different language editions on a timeline. MultiWiki enables users to observe the changes in the interlingual article similarity over time and to perform a detailed visual comparison of the article snapshots at a particular time point.
labels: cs.CL
__index_level_0__: 67,695
1912.03366
Med2Meta: Learning Representations of Medical Concepts with Meta-Embeddings
Distributed representations of medical concepts have recently been used to support downstream clinical tasks. Electronic Health Records (EHR) capture different aspects of patients' hospital encounters and serve as a rich source for augmenting clinical decision making by learning robust medical concept embeddings. However, the same medical concept can be recorded in different modalities (e.g., clinical notes, lab results), with each capturing salient information unique to that modality, and a holistic representation calls for a relevant feature ensemble from all information sources. We hypothesize that representations learned from heterogeneous data types would lead to performance enhancement on various clinical informatics and predictive modeling tasks. To this end, our proposed approach makes use of meta-embeddings, embeddings aggregated from learned embeddings. First, modality-specific embeddings for each medical concept are learned with graph autoencoders. The ensemble of all the embeddings is then modeled as a meta-embedding learning problem to incorporate their correlating and complementary information through a joint reconstruction. Empirical results of our model on both quantitative and qualitative clinical evaluations show improvements over state-of-the-art embedding models, thus validating our hypothesis.
labels: cs.LG, cs.CL
__index_level_0__: 156,570
1709.01562
Optimizing for Measure of Performance in Max-Margin Parsing
Many statistical learning problems in the area of natural language processing, including sequence tagging, sequence segmentation and syntactic parsing, have been successfully approached by means of structured prediction methods. An appealing property of the corresponding discriminative learning algorithms is their ability to integrate the loss function of interest directly into the optimization process, which can potentially increase the resulting performance accuracy. Here, we demonstrate, using the example of constituency parsing, how to optimize for the F1-score in the max-margin framework of structural SVM. In particular, the optimization is with respect to the original (not binarized) trees.
labels: cs.CL
__index_level_0__: 80,102
2308.05646
AST-MHSA: Code Summarization using Multi-Head Self-Attention
Code summarization aims to generate concise natural language descriptions for source code. The prevailing approaches adopt transformer-based encoder-decoder architectures, where the Abstract Syntax Tree (AST) of the source code is utilized for encoding structural information. However, ASTs are much longer than the corresponding source code, and existing methods ignore this size constraint by directly feeding the entire linearized AST into the encoders. This simplistic approach makes it challenging to extract truly valuable dependency relations from the overlong input sequence and leads to significant computational overhead due to self-attention applied to all nodes in the AST. To address this issue effectively and efficiently, we present a model, AST-MHSA, that uses multi-head attention to extract the important semantic information from the AST. The model consists of two main components: an encoder and a decoder. The encoder takes as input the abstract syntax tree (AST) of the code and generates a sequence of hidden states. The decoder then takes these hidden states as input and generates a natural language summary of the code. The multi-head attention mechanism allows the model to learn different representations of the input code, which can be combined to generate a more comprehensive summary. The model is trained on a dataset of code and summaries, and the parameters of the model are optimized to minimize the loss between the generated summaries and the ground-truth summaries.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
384,865
1804.05459
Comparative study of motion detection methods for video surveillance systems
The objective of this study is to compare several change detection methods for a single static camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark, which consists of many challenging problems, ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested, and several performance metrics were used to precisely evaluate the results. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work compares these methods to fill a gap in the literature, and this evaluation thus complements previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
95,081
2405.13957
Exploring the Relationship Between Feature Attribution Methods and Model Performance
Machine learning and deep learning models are pivotal in educational contexts, particularly in predicting student success. Despite their widespread application, a significant gap persists in comprehending the factors influencing these models' predictions, especially in explainability within education. This work addresses this gap by employing nine distinct explanation methods and conducting a comprehensive analysis to explore the correlation between the agreement among these methods in generating explanations and the predictive model's performance. Applying Spearman's correlation, our findings reveal a very strong correlation between the model's performance and the agreement level observed among the explanation methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,162
2107.02474
Viscos Flows: Variational Schur Conditional Sampling With Normalizing Flows
We present a method for conditional sampling from pre-trained normalizing flows when only part of an observation is available. We derive a lower bound on the conditioning variable's log-probability using Schur complement properties, in the spirit of Gaussian conditional sampling. Our derivation relies on partitioning the flow's domain in such a way that the flow's restrictions to subdomains remain bijective, which is crucial for applying the Schur complement. Simulation from the variational conditional flow then amounts to solving an equality constraint. Our contribution is three-fold: a) we provide detailed insights on the choice of variational distributions; b) we discuss how to partition the input space of the flow to preserve the bijectivity property; c) we propose a set of methods to optimise the variational distribution. Our numerical results indicate that our sampling method can be successfully applied to invertible residual networks for inference and classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
244,839
2206.02536
The impact of spatio-temporal travel distance on epidemics using an interpretable attention-based sequence-to-sequence model
Amidst the COVID-19 pandemic, travel restrictions have emerged as crucial interventions for mitigating the spread of the virus. In this study, we enhance the predictive capabilities of our model, Sequence-to-Sequence Epidemic Attention Network (S2SEA-Net), by incorporating an attention module, allowing us to assess the impact of distinct classes of travel distances on epidemic dynamics. Furthermore, our model provides forecasts for new confirmed cases and deaths. To achieve this, we leverage daily data on population movement across various travel distance categories, coupled with county-level epidemic data in the United States. Our findings illuminate a compelling relationship between the volume of travelers at different distance ranges and the trajectories of COVID-19. Notably, a discernible spatial pattern emerges with respect to these travel distance categories on a national scale. We unveil the geographical variations in the influence of population movement at different travel distances on the dynamics of epidemic spread. This will contribute to the formulation of strategies for future epidemic prevention and public health policies.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
300,931
2407.16388
Application of Causal Discovery Algorithms for Root Cause Analysis in Vehicle Assembly
Root Cause Analysis (RCA) is a quality management method that aims to systematically investigate and identify the cause-and-effect relationships of problems and their underlying causes. Traditional methods are based on the analysis of problems by subject matter experts. In modern production processes, large amounts of data are collected. For this reason, increasingly computer-aided and data-driven methods are used for RCA. One of these methods are Causal Discovery Algorithms (CDA). This publication demonstrates the application of CDA on data from the assembly of a leading automotive manufacturer. The algorithms used learn the causal structure between the characteristics of the manufactured vehicles, the ergonomics and the temporal scope of the involved assembly processes, and quality-relevant product features based on representative data. This publication compares various CDAs in terms of their suitability in the context of quality management. For this purpose, the causal structures learned by the algorithms as well as their runtime are compared. This publication provides a contribution to quality management and demonstrates how CDAs can be used for RCA in assembly processes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
475,584
2404.00358
Spread Your Wings: A Radial Strip Transformer for Image Deblurring
Exploring motion information is important for the motion deblurring task. Recently, window-based transformer approaches have achieved decent performance in image deblurring. Note that the motion causing blurry results is usually composed of translation and rotation movements, and the window-shift operation in the Cartesian coordinate system used by window-based transformer approaches only directly explores translation motion in orthogonal directions. Thus, these methods are limited in modeling the rotation part. To alleviate this problem, we introduce a polar coordinate-based transformer, which uses angles and distances to explore rotation motion and translation information together. In this paper, we propose a Radial Strip Transformer (RST), a transformer-based architecture that restores blurred images in a polar coordinate system instead of a Cartesian one. RST contains a dynamic radial embedding module (DRE) to extract shallow features via a radial deformable convolution. We design a polar mask layer to generate the offsets for the deformable convolution, which can reshape the convolution kernel along the radius to better capture rotation motion information. Furthermore, we propose a radial strip attention solver (RSAS) for deep feature extraction, where the relationship between windows is organized by azimuth and radius. This attention module contains radial strip windows to reweight image features in polar coordinates, which preserves more useful information about rotation and translation motion together for better recovering sharp images. Experimental results on six synthetic and real-world datasets show that our method performs favorably against other SOTA methods for the image deblurring task.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
442,882
1604.07928
Distributed Flexible Nonlinear Tensor Factorization
Tensor factorization is a powerful tool to analyse multi-way data. Compared with traditional multi-linear methods, nonlinear tensor factorization models are capable of capturing more complex relationships in the data. However, they are computationally expensive and may suffer severe learning bias in case of extreme data sparsity. To overcome these limitations, in this paper we propose a distributed, flexible nonlinear tensor factorization model. Our model can effectively avoid the expensive computations and structural restrictions of the Kronecker-product in existing TGP formulations, allowing an arbitrary subset of tensorial entries to be selected to contribute to the training. At the same time, we derive a tractable and tight variational evidence lower bound (ELBO) that enables highly decoupled, parallel computations and high-quality inference. Based on the new bound, we develop a distributed inference algorithm in the MapReduce framework, which is key-value-free and can fully exploit the memory cache mechanism in fast MapReduce systems such as SPARK. Experimental results fully demonstrate the advantages of our method over several state-of-the-art approaches, in terms of both predictive performance and computational efficiency. Moreover, our approach shows a promising potential in the application of Click-Through-Rate (CTR) prediction for online advertising.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
55,147
2312.09773
In vivo learning-based control of microbial populations density in bioreactors
A key problem toward the use of microorganisms as bio-factories is reaching and maintaining cellular communities at a desired density and composition so that they can efficiently convert their biomass into useful compounds. Promising technological platforms for the real time, scalable control of cellular density are bioreactors. In this work, we developed a learning-based strategy to expand the toolbox of available control algorithms capable of regulating the density of a \textit{single} bacterial population in bioreactors. Specifically, we used a sim-to-real paradigm, where a simple mathematical model, calibrated using a few data, was adopted to generate synthetic data for the training of the controller. The resulting policy was then exhaustively tested in vivo using a low-cost bioreactor known as Chi.Bio, assessing performance and robustness. In addition, we compared the performance with more traditional controllers (namely, a PI and an MPC), confirming that the learning-based controller exhibits similar performance in vivo. Our work showcases the viability of learning-based strategies for the control of cellular density in bioreactors, making a step forward toward their use for the control of the composition of microbial consortia.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
415,872
2105.00146
EntrapNet: a Blockchain-Based Verification Protocol for Trustless Computing
In this paper, we propose a blockchain-based computing verification protocol, called EntrapNet, for distributed shared computing networks, an emerging underlying network for many internet of things (IoT) applications. EntrapNet borrows the idea from the practice of entrapment in criminal law to reduce the possibility of receiving incorrect computing results from trustless service providers who have offered the computing resources. Furthermore, we mathematically optimize EntrapNet to deal with the fundamental tradeoff of a network: security and efficiency. We present an asymptotic optimal solution to this optimization. It will be seen that EntrapNet can be performed as an independent and low-cost layer atop any trustless network that requires outsourced computing, thus making secure computing affordable and practical.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
233,103
2110.08619
SAGAN: Adversarial Spatial-asymmetric Attention for Noisy Nona-Bayer Reconstruction
The Nona-Bayer colour filter array (CFA) pattern is considered one of the most viable alternatives to traditional Bayer patterns. Despite the substantial advantages, such non-Bayer CFA patterns are prone to producing visual artefacts while reconstructing RGB images from noisy sensor data. This study comprehensively addresses the challenges of learning RGB image reconstruction from noisy Nona-Bayer CFA. We propose a novel spatial-asymmetric attention module to jointly learn bi-directional transformation and large-kernel global attention to reduce the visual artefacts. We combine our proposed module with adversarial learning to produce plausible images from Nona-Bayer CFA. The feasibility of the proposed method has been verified and compared with the state-of-the-art image reconstruction method. The experiments reveal that the proposed method can reconstruct RGB images from noisy Nona-Bayer CFA without producing any visually disturbing artefacts. Also, it can outperform the state-of-the-art image reconstruction method in both qualitative and quantitative comparison. Code available: https://github.com/sharif-apu/SAGAN_BMVC21.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
261,486
1808.01363
GeneSys: Enabling Continuous Learning through Neural Network Evolution in Hardware
Modern deep learning systems rely on (a) a hand-tuned neural network topology, (b) massive amounts of labeled training data, and (c) extensive training over large-scale compute resources to build a system that can perform efficient image classification or speech recognition. Unfortunately, we are still far away from implementing adaptive general purpose intelligent systems which would need to learn autonomously in unknown environments and may not have access to some or any of these three components. Reinforcement learning and evolutionary algorithm (EA) based methods circumvent this problem by continuously interacting with the environment and updating the models based on obtained rewards. However, deploying these algorithms on ubiquitous autonomous agents at the edge (robots/drones) demands extremely high energy-efficiency due to (i) tight power and energy budgets, (ii) continuous/lifelong interaction with the environment, (iii) intermittent or no connectivity to the cloud to run heavy-weight processing. To address this need, we present GENESYS, an HW-SW prototype of an EA-based learning system, that comprises a closed loop learning engine called EvE and an inference engine called ADAM. EvE can evolve the topology and weights of neural networks completely in hardware for the task at hand, without requiring hand-optimization or backpropagation training. ADAM continuously interacts with the environment and is optimized for efficiently running the irregular neural networks generated by EvE. GENESYS identifies and leverages multiple unique avenues of parallelism unique to EAs that we term 'gene'- level parallelism, and 'population'-level parallelism. We ran GENESYS with a suite of environments from OpenAI gym and observed 2-5 orders of magnitude higher energy-efficiency over state-of-the-art embedded and desktop CPU and GPU systems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
104,555
1907.00294
Generative Mask Pyramid Network for CT/CBCT Metal Artifact Reduction with Joint Projection-Sinogram Correction
A conventional approach to computed tomography (CT) or cone beam CT (CBCT) metal artifact reduction is to replace the X-ray projection data within the metal trace with synthesized data. However, existing projection or sinogram completion methods cannot always produce anatomically consistent information to fill the metal trace, and thus, when the metallic implant is large, significant secondary artifacts are often introduced. In this work, we propose to replace metal artifact affected regions with anatomically consistent content through joint projection-sinogram correction as well as adversarial learning. To handle the metallic implants of diverse shapes and large sizes, we also propose a novel mask pyramid network that enforces the mask information across the network's encoding layers and a mask fusion loss that reduces early saturation of adversarial training. Our experimental results show that the proposed projection-sinogram correction designs are effective and our method recovers information from the metal traces better than the state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
136,998
1401.2184
Variations on Memetic Algorithms for Graph Coloring Problems
Graph vertex coloring with a given number of colors is a well-known and much-studied NP-complete problem. The most effective methods to solve this problem are proved to be hybrid algorithms such as memetic algorithms or quantum annealing. Those hybrid algorithms use a powerful local search inside a population-based algorithm. This paper presents a new memetic algorithm based on one of the most effective algorithms: the Hybrid Evolutionary Algorithm (HEA) from Galinier and Hao (1999). The proposed algorithm, denoted HEAD - for HEA in Duet - works with a population of only two individuals. Moreover, a new way of managing diversity is brought by HEAD. These two main differences greatly improve the results, both in terms of solution quality and computational time. HEAD has produced several good results for the popular DIMACS benchmark graphs, such as 222-colorings for <dsjc1000.9>, 81-colorings for <flat1000_76_0> and even 47-colorings for <dsjc500.5> and 82-colorings for <dsjc1000.5>.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
29,725
2302.08664
Socialz: Multi-Feature Social Fuzz Testing
Online social networks have become an integral aspect of our daily lives and play a crucial role in shaping our relationships with others. However, bugs and glitches, even minor ones, can cause anything from frustrating problems to serious data leaks that can have far-reaching impacts on millions of users. To mitigate these risks, fuzz testing, a method of testing with randomised inputs, can provide increased confidence in the correct functioning of a social network. However, implementing traditional fuzz testing methods can be prohibitively difficult or impractical for programmers outside of the social network's development team. To tackle this challenge, we present Socialz, a novel approach to social fuzz testing that (1) characterises real users of a social network, (2) diversifies their interaction using evolutionary computation across multiple, non-trivial features, and (3) collects performance data as these interactions are executed. With Socialz, we aim to put social testing tools in everybody's hands, thereby improving the reliability and security of social networks used worldwide. In our study, we came across (1) one known limitation of the current GitLab CE and (2) 6,907 errors, of which 40.16% are beyond our debugging skills.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
346,130
2502.08365
Towards Principled Multi-Agent Task Agnostic Exploration
In reinforcement learning, we typically refer to task-agnostic exploration when we aim to explore the environment without access to the task specification a priori. In a single-agent setting the problem has been extensively studied and is mostly understood. A popular approach casts the task-agnostic objective as maximizing the entropy of the state distribution induced by the agent's policy, from which principles and methods follow. In contrast, little is known about task-agnostic exploration in multi-agent settings, which are ubiquitous in the real world. How should different agents explore in the presence of others? In this paper, we address this question through a generalization to multiple agents of the problem of maximizing the state distribution entropy. First, we investigate alternative formulations, highlighting their respective positives and negatives. Then, we present a scalable, decentralized, trust-region policy search algorithm to address the problem in practical settings. Finally, we provide proof-of-concept experiments to both corroborate the theoretical findings and pave the way for task-agnostic exploration in challenging multi-agent settings.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
532,994
2109.10642
Decentralized Learning of Tree-Structured Gaussian Graphical Models from Noisy Data
This paper studies the decentralized learning of tree-structured Gaussian graphical models (GGMs) from noisy data. In decentralized learning, the data set is distributed across different machines (sensors), and GGMs are widely used to model complex networks such as gene regulatory networks and social networks. The proposed decentralized learning uses the Chow-Liu algorithm for estimating the tree-structured GGM. In previous works, upper bounds on the probability of incorrect tree structure recovery were mostly given without considering any practical noise, for simplification. This paper investigates the effects of three common types of noisy channels: Gaussian, erasure, and the binary symmetric channel. For the Gaussian channel case, to satisfy the failure probability upper bound $\delta > 0$ in recovering a $d$-node tree structure, our proposed theorem requires only $\mathcal{O}(\log(\frac{d}{\delta}))$ samples for the smallest sample size ($n$), compared to the previous literature \cite{Nikolakakis} with $\mathcal{O}(\log^4(\frac{d}{\delta}))$ samples, by using the positive correlation coefficient assumption that is used in some important works in the literature. Moreover, the approximately bounded Gaussian random variable assumption does not appear in \cite{Nikolakakis}. Given some knowledge about the tree structure, the proposed Algorithmic Bound achieves clearly better performance with small sample sizes (e.g., $< 2000$) compared with formulaic bounds. Finally, we validate our theoretical results by performing simulations on synthetic data sets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
256,695
1701.04931
Characterizing Linguistic Attributes for Automatic Classification of Intent Based Racist/Radicalized Posts on Tumblr Micro-Blogging Website
Research shows that many like-minded people use popular microblogging websites for posting hateful speech against various religions and races. Automatic identification of racist and hate-promoting posts is required for building social media intelligence and security informatics based solutions. However, keyword-spotting-based techniques alone cannot accurately identify the intent of a post. In this paper, we address the challenge of ambiguity in such posts by identifying the intent of the author. We conduct our study on the Tumblr microblogging website and develop a cascaded ensemble learning classifier for identifying posts with racist or radicalized intent. We train our model by identifying various semantic, sentiment and linguistic features from free-form text. Our experimental results show that the proposed approach is effective and that the emotion tone, social tendencies, language cues and personality traits of a narrative are discriminatory features for identifying the racist intent behind a post.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
66,912
1701.07254
Cascaded Incremental Nonlinear Dynamic Inversion Control for MAV Disturbance Rejection
Micro Aerial Vehicles (MAVs) are limited in their operation outdoors near obstacles by their ability to withstand wind gusts. Currently widespread position control methods such as Proportional Integral Derivative control do not perform well under the influence of gusts. Incremental Nonlinear Dynamic Inversion (INDI) is a sensor-based control technique that can control nonlinear systems subject to disturbances. It was developed for the attitude control of manned aircraft or MAVs. In this paper we generalize this method to the outer loop control of MAVs under severe gust loads. Significant improvements over a traditional Proportional Integral Derivative (PID) controller are demonstrated in an experiment where the quadrotor flies in and out of a windtunnel exhaust at 10 m/s. The control method does not rely on frequent position updates, as is demonstrated in an outside experiment using a standard GPS module. Finally, we investigate the effect of using a linearization to calculate thrust vector increments, compared to a nonlinear calculation. The method requires little modeling and is computationally efficient.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
67,260
0905.3769
Multiset Ordering Constraints
We identify a new and important global (or non-binary) constraint. This constraint ensures that the values taken by two vectors of variables, when viewed as multisets, are ordered. This constraint is useful for a number of different applications including breaking symmetry and fuzzy constraint satisfaction. We propose and implement an efficient linear time algorithm for enforcing generalised arc consistency on such a multiset ordering constraint. Experimental results on several problem domains show considerable promise.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
3,754
2306.02586
Internet of Things Meets Robotics: A Survey of Cloud-based Robots
This work presents a survey of existing literature on the fusion of the Internet of Things (IoT) with robotics and explores the integration of these technologies for the development of the Internet of Robotics Things (IoRT). The survey focuses on the applications of IoRT in healthcare and agriculture, while also addressing key concerns regarding the adoption of IoT and robotics. Additionally, an online survey was conducted to examine how companies utilize IoT technology in their organizations. The findings highlight the benefits of IoT in improving customer experience, reducing costs, and accelerating product development. However, concerns regarding unauthorized access, data breaches, and privacy need to be addressed for successful IoT deployment.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
370,965
2401.11333
Error bounds of constant gain least-mean-squares algorithms
Constant gain least-mean-squares (LMS) algorithms have a wide range of applications in trajectory tracking problems, but the formal convergence of LMS in mean square is not yet fully established. This work provides an upper bound on the constant gain that guarantees a bounded mean-squared error of LMS for a general design vector. These results highlight the role of the fourth-order moment of the design vector. Numerical examples demonstrate the applicability of this upper bound in setting a constant gain in LMS, while existing criteria may fail. We also provide the associated error bound, which can be applied to design vectors with linearly dependent elements.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
422,955
2005.08224
#Coronavirus or #Chinesevirus?!: Understanding the negative sentiment reflected in Tweets with racist hashtags across the development of COVID-19
Situated in the global outbreak of COVID-19, our study enriches the discussion concerning the emergent racism and xenophobia on social media. With big data extracted from Twitter, we focus on the analysis of negative sentiment reflected in tweets marked with racist hashtags, as racism and xenophobia are more likely to be delivered via the negative sentiment. Especially, we propose a stage-based approach to capture how the negative sentiment changes along with the three development stages of COVID-19, under which it transformed from a domestic epidemic into an international public health emergency and later, into the global pandemic. At each stage, sentiment analysis enables us to recognize the negative sentiment from tweets with racist hashtags, and keyword extraction allows for the discovery of themes in the expression of negative sentiment by these tweets. Under this public health crisis of human beings, this stage-based approach enables us to provide policy suggestions for the enactment of stage-specific intervention strategies to combat racism and xenophobia on social media in a more effective way.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
177,558
2409.00159
LLMs hallucinate graphs too: a structural perspective
It is known that LLMs do hallucinate, that is, they return incorrect information as facts. In this paper, we introduce the possibility to study these hallucinations under a structured form: graphs. Hallucinations in this context are incorrect outputs when prompted for well known graphs from the literature (e.g. Karate club, Les Mis\'erables, graph atlas). These hallucinated graphs have the advantage of being much richer than the factual accuracy -- or not -- of a statement; this paper thus argues that such rich hallucinations can be used to characterize the outputs of LLMs. Our first contribution observes the diversity of topological hallucinations from major modern LLMs. Our second contribution is the proposal of a metric for the amplitude of such hallucinations: the Graph Atlas Distance, that is the average graph edit distance from several graphs in the graph atlas set. We compare this metric to the Hallucination Leaderboard, a hallucination rank that leverages 10,000 times more prompts to obtain its ranking.
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
false
false
false
484,814
2006.01402
A Smart Background Scheduler for Storage Systems
In today's enterprise storage systems, supported data services such as snapshot delete or drive rebuild can cause tremendous performance interference if executed inline along with heavy foreground IO, often leading to missed SLOs (Service Level Objectives). Typical storage system applications such as web or VDI (Virtual Desktop Infrastructure) follow a repetitive high/low workload pattern that can be learned and forecasted. We propose a priority-based background scheduler that learns this repetitive pattern and allows storage systems to maintain peak performance and in turn meet service level objectives (SLOs) while supporting a number of data services. When foreground IO demand intensifies, system resources are dedicated to servicing foreground IO requests, and any background processing that can be deferred is recorded to be processed in future idle cycles, as long as the forecast shows that the storage pool has remaining capacity. The smart background scheduler adopts a resource partitioning model that allows both foreground and background IO to execute together as long as foreground IOs are not impacted, where the scheduler harnesses any free cycle to clear background debt. Using traces from a VDI application, we show how our technique surpasses a method that statically limits the deferred background debt, improving SLO violations from 54.6% when using a fixed background debt watermark to merely 6.2% when the limit is set dynamically by our smart background scheduler.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
179,761
2209.07754
On the Robustness of Graph Neural Diffusion to Topology Perturbations
Neural diffusion on graphs is a novel class of graph neural networks that has attracted increasing attention recently. The capability of graph neural partial differential equations (PDEs) in addressing common hurdles of graph neural networks (GNNs), such as the problems of over-smoothing and bottlenecks, has been investigated but not their robustness to adversarial attacks. In this work, we explore the robustness properties of graph neural PDEs. We empirically demonstrate that graph neural PDEs are intrinsically more robust against topology perturbation as compared to other GNNs. We provide insights into this phenomenon by exploiting the stability of the heat semigroup under graph topology perturbations. We discuss various graph diffusion operators and relate them to existing graph neural PDEs. Furthermore, we propose a general graph neural PDE framework based on which a new class of robust GNNs can be defined. We verify that the new model achieves comparable state-of-the-art performance on several benchmark datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
317,875
1909.02214
Auxiliary Learning for Deep Multi-task Learning
Multi-task learning (MTL) is an efficient solution for solving multiple tasks simultaneously in order to get better speed and performance than handling each single task in turn. Most current methods can be categorized as either: (i) hard parameter sharing, where a subset of the parameters is shared among tasks while other parameters are task-specific; or (ii) soft parameter sharing, where all parameters are task-specific but jointly regularized. Both methods suffer from limitations: the shared hidden layers of the former are difficult to optimize due to the competing objectives, while the complexity of the latter grows linearly with the increasing number of tasks. To mitigate those drawbacks, this paper proposes an alternative, where we explicitly construct an auxiliary module to mimic soft parameter sharing for assisting the optimization of the hard parameter sharing layers in the training phase. In particular, the auxiliary module takes the outputs of the shared hidden layers as inputs and is supervised by the auxiliary task loss. During training, the auxiliary module is jointly optimized with the MTL network, serving as a regularizer by introducing an inductive bias to the shared layers. In the testing phase, only the original MTL network is kept. Thus our method avoids the limitations of both categories. We evaluate the proposed auxiliary module on pixel-wise prediction tasks, including semantic segmentation, depth estimation, and surface normal prediction, with different network structures. Extensive experiments over various settings verify the effectiveness of our method.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
144,129
1711.09670
Improving OCR Accuracy on Early Printed Books by utilizing Cross Fold Training and Voting
In this paper we introduce a method that significantly reduces the character error rates for OCR text obtained from OCRopus models trained on early printed books. The method uses a combination of cross fold training and confidence based voting. After allocating the available ground truth in different subsets several training processes are performed, each resulting in a specific OCR model. The OCR text generated by these models then gets voted to determine the final output by taking the recognized characters, their alternatives, and the confidence values assigned to each character into consideration. Experiments on seven early printed books show that the proposed method outperforms the standard approach considerably by reducing the amount of errors by up to 50% and more.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
85,457
1304.0419
Top-K Product Design Based on Collaborative Tagging Data
The widespread use and popularity of collaborative content sites (e.g., IMDB, Amazon, Yelp, etc.) has created rich resources for users to consult in order to make purchasing decisions on various products such as movies, e-commerce products, restaurants, etc. Products with desirable tags (e.g., modern, reliable, etc.) have higher chances of being selected by prospective customers. This creates an opportunity for product designers to design better products that are likely to attract desirable tags when published. In this paper, we investigate how to mine collaborative tagging data to decide the attribute values of new products and to return the top-k products that are likely to attract the maximum number of desirable tags when published. Given a training set of existing products with their features and user-submitted tags, we first build a Naive Bayes Classifier for each tag. We show that the problem is NP-complete even if simple Naive Bayes Classifiers are used for tag prediction. We present a suite of algorithms for solving this problem: (a) an exact two-tier algorithm (based on top-k querying techniques), which performs much better than the naive brute-force algorithm and works well for moderate problem instances, and (b) a set of approximation algorithms for larger problem instances: a novel polynomial-time approximation algorithm with provable error bound and a practical hill-climbing heuristic. We conduct detailed experiments on synthetic and real data crawled from the web to evaluate the efficiency and quality of our proposed algorithms, as well as show how product designers can benefit by leveraging collaborative tagging information.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
true
23,387
2007.15850
Resist : Reconstruction of irises from templates
Iris recognition systems transform an iris image into a feature vector. The seminal pipeline segments an image into iris and non-iris pixels, normalizes this region into a fixed-dimension rectangle, and extracts features which are stored and called a template (Daugman, 2009). This template is stored on a system. A future reading of an iris can be transformed and compared against template vectors to determine or verify the identity of an individual. As templates are often stored together, they are a valuable target to an attacker. We show how to invert templates across a variety of iris recognition systems. That is, we show how to transform templates into realistic looking iris images that are also deemed to be the same iris by the corresponding recognition system. Our inversion is based on a convolutional neural network architecture we call RESIST (REconStructing IriSes from Templates). We apply RESIST to a traditional Gabor filter pipeline, to a DenseNet (Huang et al., CVPR 2017) feature extractor, and to a DenseNet architecture that works without normalization. Both DenseNet feature extractors are based on the recent ThirdEye recognition system (Ahmad and Fuller, BTAS 2019). When training and testing using the ND-0405 dataset, reconstructed images demonstrate a rank-1 accuracy of 100%, 76%, and 96% respectively for the three pipelines. The core of our approach is similar to an autoencoder. However, training the core standalone produced low accuracy. The final architecture integrates into a generative adversarial network (Goodfellow et al., NeurIPS, 2014), producing higher accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
189,784
2101.11560
Wisdom of the Contexts: Active Ensemble Learning for Contextual Anomaly Detection
In contextual anomaly detection, an object is only considered anomalous within a specific context. Most existing methods for contextual anomaly detection (CAD) use a single context based on a set of user-specified contextual features. However, identifying the right context can be very challenging in practice, especially in datasets with a large number of attributes. Furthermore, in real-world systems, there might be multiple anomalies that occur in different contexts and, therefore, require a combination of several "useful" contexts to unveil them. In this work, we leverage active learning and ensembles to effectively detect complex contextual anomalies in situations where the true contextual and behavioral attributes are unknown. We propose a novel approach, called WisCon (Wisdom of the Contexts), that automatically creates contexts from the feature set. Our method constructs an ensemble of multiple contexts, with varying importance scores, based on the assumption that not all contexts are equally useful. Experiments show that WisCon significantly outperforms existing baselines in different categories (i.e., active classifiers, unsupervised contextual and non-contextual anomaly detectors, and supervised classifiers) on seven datasets. Furthermore, the results support our initial hypothesis that there is no single perfect context that successfully uncovers all kinds of contextual anomalies, and leveraging the "wisdom" of multiple contexts is necessary.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
217,322
1606.09560
Neural Network-based Word Alignment through Score Aggregation
We present a simple neural network for word alignment that builds source and target word window representations to compute alignment scores for sentence pairs. To enable unsupervised training, we use an aggregation operation that summarizes the alignment scores for a given target word. A soft-margin objective increases scores for true target words while decreasing scores for target words that are not present. Compared to the popular Fast Align model, our approach improves alignment accuracy by 7 AER on English-Czech, by 6 AER on Romanian-English and by 1.7 AER on English-French alignment.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
58,005
2405.00973
Active Cell Balancing for Extended Operational Time of Lithium-Ion Battery Systems in Energy Storage Applications
Cell inconsistency within a lithium-ion battery system poses a significant challenge in maximizing the system operational time. This study presents an optimization-driven active balancing method to minimize the effects of cell inconsistency on the system operational time while simultaneously satisfying the system output power demand and prolonging the system operational time in energy storage applications. The proposed method utilizes a fractional order model to forecast the terminal voltage dynamics of each cell within a battery system, enhanced with a particle-swarm-optimisation-genetic algorithm for precise parameter identification. It is implemented under two distinct cell-level balancing topologies: independent cell balancing and differential cell balancing. Subsequently, the current distribution for each topology is determined by resolving two optimization control problems constrained by the battery's operational specifications and power demands. The effectiveness of the proposed method is validated by extensive experiments based on the two balancing topologies. The results demonstrate that the proposed method increases the operational time by 3.2%.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
451,157
2302.04453
Data Quality-aware Mixed-precision Quantization via Hybrid Reinforcement Learning
Mixed-precision quantization mostly predetermines the model bit-width settings before actual training due to the non-differential bit-width sampling process, obtaining sub-optimal performance. Worse still, the conventional static quality-consistent training setting, i.e., all data is assumed to be of the same quality across training and inference, overlooks data quality changes in real-world applications which may lead to poor robustness of the quantized models. In this paper, we propose a novel Data Quality-aware Mixed-precision Quantization framework, dubbed DQMQ, to dynamically adapt quantization bit-widths to different data qualities. The adaption is based on a bit-width decision policy that can be learned jointly with the quantization training. Concretely, DQMQ is modeled as a hybrid reinforcement learning (RL) task that combines model-based policy optimization with supervised quantization training. By relaxing the discrete bit-width sampling to a continuous probability distribution that is encoded with few learnable parameters, DQMQ is differentiable and can be directly optimized end-to-end with a hybrid optimization target considering both task performance and quantization benefits. Trained on mixed-quality image datasets, DQMQ can implicitly select the most proper bit-width for each layer when facing uneven input qualities. Extensive experiments on various benchmark datasets and networks demonstrate the superiority of DQMQ against existing fixed/mixed-precision quantization methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
344,710
2308.12960
Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification. Despite the success, most traditional VLMs-based methods are restricted by the assumption of partial source supervision or ideal vocabularies, which rarely satisfy the open-world scenario. In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary. To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning. Our S^3A framework adopts a unique Cluster-Vote-Prompt-Realign (CVPR) algorithm, which iteratively groups unlabeled data to derive structural semantics for pseudo-supervision. Our CVPR process includes iterative clustering on images, voting within each cluster to identify initial class candidates from the vocabulary, generating discriminative prompts with large language models to discern confusing candidates, and realigning images and the vocabulary as structural semantic alignment. Finally, we propose to self-learn the CLIP image encoder with both individual and structural semantic alignment through a teacher-student learning strategy. Our comprehensive experiments across various generic and fine-grained benchmarks demonstrate that the S^3A method offers substantial improvements over existing VLMs-based approaches, achieving a more than 15% accuracy improvement over CLIP on average. Our codes, models, and prompts are publicly released at https://github.com/sheng-eatamath/S3A.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
387,732
2410.11718
Converging to a Lingua Franca: Evolution of Linguistic Regions and Semantics Alignment in Multilingual Large Language Models
Large language models (LLMs) have demonstrated remarkable performance, particularly in multilingual contexts. While recent studies suggest that LLMs can transfer skills learned in one language to others, the internal mechanisms behind this ability remain unclear. We observed that the neuron activation patterns of LLMs exhibit similarities when processing the same language, revealing the existence and location of key linguistic regions. Additionally, we found that neuron activation patterns are similar when processing sentences with the same semantic meaning in different languages. This indicates that LLMs map semantically identical inputs from different languages into a "Lingua Franca", a common semantic latent space that allows for consistent processing across languages. This semantic alignment becomes more pronounced with training and increased model size, resulting in a more language-agnostic activation pattern. Moreover, we found that key linguistic neurons are concentrated in the first and last layers of LLMs, becoming denser in the first layers as training progresses. Experiments on BLOOM and LLaMA2 support these findings, highlighting the structural evolution of multilingual LLMs during training and scaling up. This paper provides insights into the internal workings of LLMs, offering a foundation for future improvements in their cross-lingual capabilities.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
498,685
2305.04114
Memory CODA: introducing memory effects in the Continuous Opinions and Discrete Actions model
The Continuous Opinions and Discrete Actions (CODA) model has been widely used to study the emergence of extremism in social networks. However, this standard model has been shown to generate unrealistic extreme opinions due to the reinforcement among agents. To address this issue, this paper introduces memory effects into the CODA model to explore how the dynamics of opinion formation change. Specifically, each agent is endowed with a memory that stores the previous opinions of its neighbors, which are then utilized to update its own opinion. The paper investigates how incorporating memory affects the strength of choices. We will see that while diminishing the opinion strength, the formation of local domains still causes a significant reinforcement effect. However, unlike the original model, the number of neighbors becomes a relevant variable, suggesting a way to test the results presented in this paper. Keywords: Opinion dynamics, CODA, Agent-based models, Memory, extremism
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
362,644
2007.09371
Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms via two sequential steps. We first establish an alignment between generalization and privacy preservation for any learning algorithm. We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms, which further leads to a high-probability bound for any learning algorithm. This high-probability bound also implies a PAC-learnability guarantee for differentially private learning algorithms. We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization. Three composition theorems are proposed to approximate the differential privacy of any iterative algorithm through the differential privacy of its every iteration. By integrating the above two steps, we eventually deliver generalization bounds for iterative learning algorithms, which suggest one can simultaneously enhance privacy preservation and generalization. Our results are strictly tighter than those of existing works. Particularly, our generalization bounds do not rely on the model size, which is prohibitively large in deep learning. This sheds light on understanding the generalizability of deep learning. These results apply to a wide spectrum of learning algorithms. In this paper, we apply them to stochastic gradient Langevin dynamics and agnostic federated learning as examples.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
187,921
2207.01233
Domain Adaptive Nuclei Instance Segmentation and Classification via Category-aware Feature Alignment and Pseudo-labelling
Unsupervised domain adaptation (UDA) methods have been broadly utilized to improve the models' adaptation ability in general computer vision. However, different from natural images, there exist huge semantic gaps for the nuclei from different categories in histopathology images. It is still under-explored how we could build generalized UDA models for precise segmentation or classification of nuclei instances across different datasets. In this work, we propose a novel deep neural network, namely the Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net), for UDA nuclei instance segmentation and classification. Specifically, we first propose a category-level feature alignment module with dynamic learnable trade-off weights. Second, we propose to facilitate the model performance on the target data via self-supervised training with pseudo labels based on nuclei-level prototype features. Comprehensive experiments on cross-domain nuclei instance segmentation and classification tasks demonstrate that our approach outperforms state-of-the-art UDA methods by a remarkable margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
306,094
2212.13876
xFBD: Focused Building Damage Dataset and Analysis
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The best-performing xView2 networks' performances dropped noticeably in our new limited/local damage detection task. The common causes of failure observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics (https://drive.google.com/drive/folders/1VuQZuAg6-Yo8r5J4OCx3ZRpa_fv9aaDX?usp=sharing) to test independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models which improve independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully-solved problem, and we invite others to use and build on our dataset augmentations and metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
338,416
2411.08583
An Empirical Examination of the Evaluative AI Framework
This study empirically examines the "Evaluative AI" framework, which aims to enhance the decision-making process for AI users by transitioning from a recommendation-based approach to a hypothesis-driven one. Rather than offering direct recommendations, this framework presents users pro and con evidence for hypotheses to support more informed decisions. However, findings from the current behavioral experiment reveal no significant improvement in decision-making performance and limited user engagement with the evidence provided, resulting in cognitive processes similar to those observed in traditional AI systems. Despite these results, the framework still holds promise for further exploration in future research.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
507,942
2006.01644
Learning Efficient Representations of Mouse Movements to Predict User Attention
Tracking mouse cursor movements can be used to predict user attention on heterogeneous page layouts like SERPs. So far, previous work has relied heavily on handcrafted features, which is a time-consuming approach that often requires domain expertise. We investigate different representations of mouse cursor movements, including time series, heatmaps, and trajectory-based images, to build and contrast both recurrent and convolutional neural networks that can predict user attention to direct displays, such as SERP advertisements. Our models are trained over raw mouse cursor data and achieve competitive performance. We conclude that neural network models should be adopted for downstream tasks involving mouse cursor movements, since they can provide an invaluable implicit feedback signal for re-ranking and evaluation.
true
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
179,830
1901.10469
Automated Prototype for Asteroids Detection
Near Earth Asteroids (NEAs) are discovered daily, mainly by a few major surveys; nevertheless, many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated systems of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEA discovery. Some of these obtain their results based on the blink method, in which a set of reduced images is shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of CCD cameras. Aiming to replace manual detection, we propose an automated pipeline prototype for asteroid detection, written in Python under Linux, which calls several third-party astrophysics libraries.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
120,029
1903.06708
Live Reconstruction of Large-Scale Dynamic Outdoor Worlds
Standard 3D reconstruction pipelines assume a stationary world and therefore suffer from `ghost artifacts' whenever dynamic objects are present in the scene. Recent approaches have started tackling this issue; however, they typically either discard dynamic information, represent it using bounding boxes or per-frame depth, or rely on approaches that are inherently slow and not suitable for online settings. We propose an end-to-end system for live reconstruction of large-scale outdoor dynamic environments. We leverage recent advances in computationally efficient data-driven approaches for 6-DoF object pose estimation to segment the scene into objects and stationary `background'. This allows us to represent the scene using a time-dependent (dynamic) map, in which each object is explicitly represented as a separate instance and reconstructed in its own volume. For each time step, our dynamic map maintains the relative pose of each volume with respect to the stationary background. Our system operates in an incremental manner, which is essential for online reconstruction, handles large-scale environments with objects at large distances, and runs in (near) real-time. We demonstrate the efficacy of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense 3D reconstructions of a number of dynamic scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,447
2204.10979
Smoothed Online Combinatorial Optimization Using Imperfect Predictions
Smoothed online combinatorial optimization considers a learner who repeatedly chooses a combinatorial decision to minimize an unknown changing cost function with a penalty on switching decisions in consecutive rounds. We study smoothed online combinatorial optimization problems when an imperfect predictive model is available, where the model can forecast the future cost functions with uncertainty. We show that using predictions to plan for a finite time horizon leads to regret dependent on the total predictive uncertainty and an additional switching cost. This observation suggests choosing a suitable planning window to balance between uncertainty and switching cost, which leads to an online algorithm with guarantees on the upper and lower bounds of the cumulative regret. Empirically, our algorithm shows a significant improvement in cumulative regret compared to other baselines in synthetic online distributed streaming problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
292,985
1707.08279
A General and Yet Efficient Scheme for Sub-Nyquist Radar Processing
We study target parameter estimation for sub-Nyquist pulse-Doppler radar. Several past works have addressed this problem but either have low estimation accuracy for off-grid targets, impose a large computational load, or lack versatility for analog-to-information conversion (AIC) systems. To overcome these difficulties, we present a general and efficient estimation scheme. The scheme first formulates a general model in the sense that it is applicable to all AICs regardless of whether the targets are on or off the grid. The estimation of Doppler shifts and delays is performed sequentially, in which the Doppler estimation is formulated as a spatial spectrum estimation problem and the delay estimation is decomposed into a series of compressive parameter estimation problems, each corresponding to an estimated Doppler shift. Through the sequential and decomposed processing, the computational complexity is substantially reduced, and through parametric estimation techniques, highly accurate estimation is obtained. Theoretical analyses and numerical experiments show the effectiveness and correctness of the proposed scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
77,794
2108.13382
Exploring Multi-Tasking Learning in Document Attribute Classification
In this work, we explore a Multi-Tasking Learning (MTL) based network to perform document attribute classification, such as the font type, font size, font emphasis, and scanning resolution classification of a document image. To accomplish these tasks, we operate on either the segmented word level or on uniformly sized patches randomly cropped out of the document. Furthermore, a hybrid convolutional neural network (CNN) architecture, "MTL+MI", which is based on the combination of MTL and Multi-Instance (MI) learning on patches and words, is used to accomplish joint learning for the classification of the same document attributes. The contributions of this paper are threefold: firstly, based on segmented word images and patches, we present an MTL based network for the classification of a full document image. Secondly, we propose an MTL and MI (using segmented words and patches) based combined CNN architecture ("MTL+MI") for the classification of the same document attributes. Thirdly, based on the multi-tasking classifications of the words and/or patches, we propose an intelligent voting system, based on the posterior probabilities of each word and/or patch, to classify the attributes of the complete document image.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
252,791
1711.02549
Remote Sensing Image Fusion Based on Two-stream Fusion Network
Remote sensing image fusion (also known as pan-sharpening) aims at generating a high resolution multi-spectral (MS) image from inputs of a high spatial resolution single band panchromatic (PAN) image and a low spatial resolution multi-spectral image. Inspired by the astounding achievements of convolutional neural networks (CNNs) in a variety of computer vision tasks, in this paper, we propose a two-stream fusion network (TFNet) to address the problem of pan-sharpening. Unlike previous CNN based methods that consider pan-sharpening as a super resolution problem and perform pan-sharpening at the pixel level, the proposed TFNet aims to fuse PAN and MS images at the feature level and reconstruct the pan-sharpened image from the fused features. The TFNet mainly consists of three parts. The first part is comprised of two networks extracting features from PAN and MS images, respectively. The subsequent network fuses them together to form compact features that represent both spatial and spectral information of PAN and MS images, simultaneously. Finally, the desired high spatial resolution MS image is recovered from the fused features through an image reconstruction network. Experiments on Quickbird and GaoFen-1 satellite images demonstrate that the proposed TFNet can fuse PAN and MS images effectively and produce pan-sharpened images competitive with, or even superior to, the state of the art.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
84,080
2311.07888
RoboSense At Edge: Detecting Slip, Crumple and Shape of the Object in Robotic Hand for Teleoperations
Slip and crumple detection is essential for performing robust manipulation tasks with a robotic hand (RH), such as in remote surgery. It has been one of the challenging problems in the robotic manipulation community. In this work, we propose a machine learning (ML) based technique to detect the slip and crumple, as well as the shape, of an object that is currently held in the robotic hand. The proposed ML model detects the slip, crumple, and shape using the force/torque exerted and the angular positions of the actuators present in the RH. The proposed model would be integrated into the loop of a robotic hand (RH) and haptic glove (HG), which would help to reduce the latency in case of teleoperation.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
407,507
2407.05557
$R^2$-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning
As LLMs become increasingly prevalent across various applications, it is critical to establish safety guardrails to moderate input/output content of LLMs. Existing guardrail models treat various safety categories independently and fail to explicitly capture the intercorrelations among them. This has led to limitations such as ineffectiveness due to inadequate training on long-tail data from correlated safety categories, susceptibility to jailbreaking attacks, and inflexibility regarding new safety categories. To address these limitations, we propose $R^2$-Guard, a robust reasoning enabled LLM guardrail via knowledge-enhanced logical reasoning. Specifically, $R^2$-Guard comprises two parts: data-driven category-specific learning and reasoning components. The data-driven guardrail models provide unsafety probabilities of moderated content on different safety categories. We then encode safety knowledge among different categories as first-order logical rules and embed them into a probabilistic graphical model (PGM) based reasoning component. The unsafety probabilities of different categories from data-driven guardrail models are sent to the reasoning component for final inference. We employ two types of PGMs: Markov logic networks (MLNs) and probabilistic circuits (PCs), and optimize PCs to achieve precision-efficiency balance via improved graph structure. To further perform stress tests for guardrail models, we employ a pairwise construction method to construct a new safety benchmark TwinSafety, which features principled categories. We demonstrate the effectiveness of $R^2$-Guard by comparisons with eight strong guardrail models on six safety benchmarks, and demonstrate the robustness of $R^2$-Guard against four SOTA jailbreaking attacks. $R^2$-Guard significantly surpasses SOTA method LlamaGuard by 30.2% on ToxicChat and by 59.5% against jailbreaking attacks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
471,027
2407.04638
Semi-Supervised Segmentation via Embedding Matching
Deep convolutional neural networks are widely used in medical image segmentation but require many labeled images for training. Annotating three-dimensional medical images is a time-consuming and costly process. To overcome this limitation, we propose a novel semi-supervised segmentation method that leverages mostly unlabeled images and a small set of labeled images in training. Our approach involves assessing prediction uncertainty to identify reliable predictions on unlabeled voxels from the teacher model. These voxels serve as pseudo-labels for training the student model. In voxels where the teacher model produces unreliable predictions, pseudo-labeling is carried out based on voxel-wise embedding correspondence using reference voxels from labeled images. We applied this method to automate hip bone segmentation in CT images, achieving notable results with just 4 CT scans. The proposed approach yielded a Hausdorff distance with 95th percentile (HD95) of 3.30 and IoU of 0.929, surpassing existing methods achieving HD95 (4.07) and IoU (0.927) at their best.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
470,641
2007.00622
A Multi-spectral Dataset for Evaluating Motion Estimation Systems
Visible images have been widely used for motion estimation. Thermal images, in contrast, are more challenging to use for motion estimation since they typically have lower resolution, less texture, and more noise. In this paper, a novel dataset for evaluating the performance of multi-spectral motion estimation systems is presented. All the sequences are recorded from a handheld multi-spectral device, which consists of a standard visible-light camera, a long-wave infrared camera, an RGB-D camera, and an inertial measurement unit (IMU). The multi-spectral images, including both color and thermal images in full sensor resolution (640 x 480), are obtained from the standard and long-wave infrared cameras at 32 Hz with hardware synchronization. The depth images are captured by a Microsoft Kinect2 and can benefit learning of cross-modality stereo matching. For trajectory evaluation, accurate ground-truth camera poses obtained from a motion capture system are provided. In addition to sequences with bright illumination, the dataset also contains dim, varying, and complex illumination scenes. The full dataset, including raw data and calibration data with detailed data format specifications, is publicly available.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
185,165
1703.05671
Upper bounds for the Holevo quantity and their use
We present a family of easily computable upper bounds for the Holevo quantity of ensemble of quantum states depending on a reference state as a free parameter. These upper bounds are obtained by combining probabilistic and metric characteristics of the ensemble. We show that appropriate choice of the reference state gives tight upper bounds for the Holevo quantity which in many cases improve existing estimates in the literature. We also present upper bound for the Holevo quantity of a generalized ensemble of quantum states with finite average energy depending on metric divergence of the ensemble. The specification of this upper bound for the multi-mode quantum oscillator is tight for large energy. The above results are used to obtain tight upper bounds for the Holevo capacity of finite-dimensional and infinite-dimensional energy-constrained quantum channels depending on metric characteristics of the channel output.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
70,119
2312.08650
PhyOT: Physics-informed object tracking in surveillance cameras
While deep learning has been very successful in computer vision, real world operating conditions such as lighting variation, background clutter, or occlusion hinder its accuracy across several tasks. Prior work has shown that hybrid models -- combining neural networks and heuristics/algorithms -- can outperform vanilla deep learning for several computer vision tasks, such as classification or tracking. We consider the case of object tracking, and evaluate a hybrid model (PhyOT) that conceptualizes deep neural networks as ``sensors'' in a Kalman filter setup, where prior knowledge, in the form of Newtonian laws of motion, is used to fuse sensor observations and to perform improved estimations. Our experiments combine three neural networks, performing position, indirect velocity and acceleration estimation, respectively, and evaluate such a formulation on two benchmark datasets: a warehouse security camera dataset that we collected and annotated and a traffic camera open dataset. Results suggest that our PhyOT can track objects in extreme conditions in which state-of-the-art deep neural networks fail, while its performance in general cases does not degrade significantly from that of existing deep learning approaches. Results also suggest that our PhyOT components are generalizable and transferable.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
415,384
2005.03180
Model Reduction and Neural Networks for Parametric PDEs
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. We also include numerical experiments which demonstrate the effectiveness of the method, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare it with existing algorithms from the literature; our examples include the mapping from coefficient to solution in a divergence form elliptic partial differential equation (PDE) problem, and the solution operator for viscous Burgers' equation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
176,072
1309.0781
An Exploratory Data Survey of Drug Name Incidence and Prevalence From the FDA's Adverse Event Reporting System, 2004 to 2012Q2
Purpose: To count and monitor the drug names reported in the publicly available version of the Federal Adverse Event Reporting System (FAERS) from 2004 to 2012Q2 in a maximized sensitivity relational model. Methods: Data mining and data modeling were conducted, and event based summary statistics with plots were created from over nine years of continuous FAERS data. Results: This FAERS model contains 344,452 individual drug names and 432,541,994 count references which occurred across 4,148,761 human subjects in the 34 quarter study period. Plots for the top 100 scoring drug name references are reported by year and quarter; the top 100 drug names contain 143,384,240 references, or 33% of all drug name references, over 34 quarters of continuous FAERS data. Conclusions: While FAERS contains many drugs and adverse event reports, its data pertains to very few of them. Drug name incidence lends itself to timely and effective surveillance of large populations of Adverse Event Reports and does not require the cause of the AE, nor its validity, to be known to detect a mass poisoning.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
26,806
2405.01761
Multivariate Bayesian Last Layer for Regression: Uncertainty Quantification and Disentanglement
We present new Bayesian Last Layer models in the setting of multivariate regression under heteroscedastic noise, and propose an optimization algorithm for parameter learning. Bayesian Last Layer combines Bayesian modelling of the predictive distribution with neural networks for parameterization of the prior, and has the attractive property of uncertainty quantification with a single forward pass. The proposed framework is capable of disentangling the aleatoric and epistemic uncertainty, and can be used to transfer a canonically trained deep neural network to new data domains with uncertainty-aware capability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
451,495
1910.01588
Probabilistic Robust Small-Signal Stability Framework using Gaussian Process Learning
While most power system small-signal stability assessments rely on the reduced Jacobian, which depends non-linearly on the states, uncertain operating points introduce nontrivial hurdles in certifying the system's stability. In this paper, a novel probabilistic robust small-signal stability (PRS) framework is developed for a power system based on Gaussian process (GP) learning. The proposed PRS assessment provides a robust stability certificate for a state subspace, such as that specified by the error bounds of the state estimation, with a given probability. With such a PRS certificate, all inner points of the concerned subspace will be stable with at least the corresponding confidence level. To this end, the behavior of the critical eigenvalue of the reduced Jacobian with state points in a state subspace is learned using GP. The proposed PRS certificate along with the Subspace-based Search and Confidence-based Search mechanisms constitute a holistic framework catering to all scenarios. The proposed framework is a powerful approach to assess the stability under uncertainty because it does not require input uncertainty distributions and other state-specific input-to-output approximations. Further, the critical eigenvalue behavior in a state subspace is analyzed using an upper bound of the eigenvalue variations and their inferences are discussed in detail. The results on three-machine nine-bus WSCC system show that the proposed certificate can find the robust stable state subspace with a given probability.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
147,977
2008.06319
OR-Gym: A Reinforcement Learning Library for Operations Research Problems
Reinforcement learning (RL) has been widely applied to game-playing and surpassed the best human-level performance in many domains, yet there are few use-cases in industrial or commercial settings. We introduce OR-Gym, an open-source library for developing reinforcement learning algorithms to address operations research problems. In this paper, we apply reinforcement learning to the knapsack, multi-dimensional bin packing, multi-echelon supply chain, and multi-period asset allocation model problems, as well as benchmark the RL solutions against MILP and heuristic models. These problems are used in logistics, finance, engineering, and are common in many business operation settings. We develop environments based on prototypical models in the literature and implement various optimization and heuristic models in order to benchmark the RL results. By re-framing a series of classic optimization problems as RL tasks, we seek to provide a new tool for the operations research community, while also opening those in the RL community to many of the problems and challenges in the OR field.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
191,775
1103.1255
A General Framework for Representing, Reasoning and Querying with Annotated Semantic Web Data
We describe a generic framework for representing and reasoning with annotated Semantic Web data, a task becoming more important with the recent increased amount of inconsistent and non-reliable meta-data on the web. We formalise the annotated language, the corresponding deductive system and address the query answering problem. Previous contributions on specific RDF annotation domains are encompassed by our unified reasoning formalism as we show by instantiating it on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we provide a generic method for combining multiple annotation domains allowing to represent, e.g. temporally-annotated fuzzy RDF. Furthermore, we address the development of a query language -- AnQL -- that is inspired by SPARQL, including several features of SPARQL 1.1 (subqueries, aggregates, assignment, solution modifiers) along with the formal definitions of their semantics.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
9,503
1703.02223
Opinion diversity and community formation in adaptive networks
It is interesting and of significant importance to investigate how network structures co-evolve with opinions. The existing models of such co-evolution typically lead to final states where network nodes either reach a global consensus or break into separated communities, each of which holds its own community consensus. Such results, however, can hardly explain the richness of real-life observations that opinions are always diversified with no global or even community consensus, and people seldom, if ever, totally cut themselves off from dissenters. In this article, we show that a simple model integrating consensus formation, link rewiring and opinion change allows complex system dynamics to emerge, driving the system into a dynamic equilibrium with co-existence of diversified opinions. Specifically, similar opinion holders may form communities yet with no strict community consensus; and rather than being separated into disconnected communities, different communities remain interconnected by a non-trivial proportion of inter-community links. More importantly, we show that the complex dynamics may lead to different numbers of communities at steady state for a given tolerance between different opinion holders. We construct a framework for theoretically analyzing the co-evolution process. Theoretical analysis and extensive simulation results reveal some useful insights into the complex co-evolution process, including the formation of the dynamic equilibrium, the phase transition between different steady states with different numbers of communities, and the dynamics between opinion distribution and network modularity.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
69,519
1811.04076
AttS2S-VC: Sequence-to-Sequence Voice Conversion with Attention and Context Preservation Mechanisms
This paper describes a method based on a sequence-to-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
112,990
2210.03120
GBSVM: Granular-ball Support Vector Machine
GBSVM (Granular-ball Support Vector Machine) is a significant attempt to construct a classifier using the coarse-to-fine granularity of a granular-ball as input, rather than a single data point. It is the first classifier whose input contains no points. However, the existing model has some errors, and its dual model has not been derived. As a result, the current algorithm cannot be implemented or applied. To address these problems, this paper has fixed the errors of the original model of the existing GBSVM, and derived its dual model. Furthermore, a particle swarm optimization algorithm is designed to solve the dual model. The sequential minimal optimization algorithm is also carefully designed to solve the dual model. The solution is faster and more stable than the particle swarm optimization based version. The experimental results on the UCI benchmark datasets demonstrate that GBSVM has good robustness and efficiency. All codes have been released in the open source library at http://www.cquptshuyinxia.com/GBSVM.html or https://github.com/syxiaa/GBSVM.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
321,909
2312.16599
Relationship between auditory and semantic entrainment using Deep Neural Networks (DNN)
The tendency of people to engage in similar, matching, or synchronized behaviour when interacting is known as entrainment. Many studies examined linguistic (syntactic and lexical structures) and paralinguistic (pitch, intensity) entrainment, but less attention was given to finding the relationship between them. In this study, we utilized state-of-the-art DNN embeddings such as BERT and TRIpLet Loss network (TRILL) vectors to extract features for measuring semantic and auditory similarities of turns within dialogues in two comparable spoken corpora of two different languages. We found people's tendency to entrain on semantic features more when compared to auditory features. Additionally, we found that entrainment in semantic and auditory linguistic features are positively correlated. The findings of this study might assist in implementing the mechanism of entrainment in human-machine interaction (HMI).
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
418,443
2112.00202
3DVNet: Multi-View Depth Prediction and Volumetric Refinement
We present 3DVNet, a novel multi-view stereo (MVS) depth-prediction method that combines the advantages of previous depth-based and volumetric MVS approaches. Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions, resulting in highly accurate predictions which agree on the underlying scene geometry. Unlike existing depth-prediction techniques, our method uses a volumetric 3D convolutional neural network (CNN) that operates in world space on all depth maps jointly. The network can therefore learn meaningful scene-level priors. Furthermore, unlike existing volumetric MVS techniques, our 3D CNN operates on a feature-augmented point cloud, allowing for effective aggregation of multi-view information and flexible iterative refinement of depth maps. Experimental results show our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics on the ScanNet dataset, as well as a selection of scenes from the TUM-RGBD and ICL-NUIM datasets. This shows that our method is both effective and generalizes to new settings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,054
1903.01454
Making the Dynamic Time Warping Distance Warping-Invariant
The literature postulates that the dynamic time warping (dtw) distance can cope with temporal variations but stores and processes time series in a form as if the dtw-distance cannot cope with such variations. To address this inconsistency, we first show that the dtw-distance is not warping-invariant. The lack of warping-invariance contributes to the inconsistency mentioned above and to a strange behavior. To eliminate these peculiarities, we convert the dtw-distance to a warping-invariant semi-metric, called time-warp-invariant (twi) distance. Empirical results suggest that the error rates of the twi and dtw nearest-neighbor classifier are practically equivalent in a Bayesian sense. However, the twi-distance requires less storage and computation time than the dtw-distance for a broad range of problems. These results challenge the current practice of applying the dtw-distance in nearest-neighbor classification and suggest the proposed twi-distance as a more efficient and consistent option.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
123,257
2407.06109
PerLDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models
Controllable generation is considered a potentially vital approach to address the challenge of annotating 3D data, and the precision of such controllable generation becomes particularly imperative in the context of data production for autonomous driving. Existing methods focus on the integration of diverse generative information into controlling inputs, utilizing frameworks such as GLIGEN or ControlNet, to produce commendable outcomes in controllable generation. However, such approaches intrinsically restrict generation performance to the learning capacities of predefined network architectures. In this paper, we explore the integration of controlling information and introduce PerLDiff (\textbf{Per}spective-\textbf{L}ayout \textbf{Diff}usion Models), a method for effective street view image generation that fully leverages perspective 3D geometric information. Our PerLDiff employs 3D geometric priors to guide the generation of street view images with precise object-level control within the network learning process, resulting in a more robust and controllable output. Moreover, it demonstrates superior controllability compared to alternative layout control methods. Empirical results justify that our PerLDiff markedly enhances the precision of generation on the NuScenes and KITTI datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
471,256
2405.05049
Seeds of Stereotypes: A Large-Scale Textual Analysis of Race and Gender Associations with Diseases in Online Sources
Background: Advancements in Large Language Models (LLMs) hold transformative potential in healthcare; however, recent work has raised concern about the tendency of these models to produce outputs that display racial or gender biases. Although training data is a likely source of such biases, exploration of disease and demographic associations in text data at scale has been limited. Methods: We conducted a large-scale textual analysis using a dataset comprising diverse web sources, including Arxiv, Wikipedia, and Common Crawl. The study analyzed the context in which various diseases are discussed alongside markers of race and gender. Given that LLMs are pre-trained on similar datasets, this approach allowed us to examine the potential biases that LLMs may learn and internalize. We compared these findings with actual demographic disease prevalence as well as GPT-4 outputs in order to evaluate the extent of bias representation. Results: Our findings indicate that demographic terms are disproportionately associated with specific disease concepts in online texts. Gender terms are prominently associated with disease concepts, while racial terms are much less frequently associated. We find widespread disparities in the associations of specific racial and gender terms with the 18 diseases analyzed. Most prominently, we see an overall significant overrepresentation of Black race mentions in comparison to population proportions. Conclusions: Our results highlight the need for critical examination and transparent reporting of biases in LLM pretraining datasets. Our study suggests the need to develop mitigation strategies to counteract the influence of biased training data in LLMs, particularly in sensitive domains such as healthcare.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
452,776
1805.06660
Single Shot Active Learning using Pseudo Annotators
Standard myopic active learning assumes that human annotations are always obtainable whenever new samples are selected. This, however, is unrealistic in many real-world applications where human experts are not readily available at all times. In this paper, we consider the single shot setting: all the required samples should be chosen in a single shot and no human annotation can be exploited during the selection process. We propose a new method, Active Learning through Random Labeling (ALRL), which substitutes single human annotator for multiple, what we will refer to as, pseudo annotators. These pseudo annotators always provide uniform and random labels whenever new unlabeled samples are queried. This random labeling enables standard active learning algorithms to also exhibit the exploratory behavior needed for single shot active learning. The exploratory behavior is further enhanced by selecting the most representative sample via minimizing nearest neighbor distance between unlabeled samples and queried samples. Experiments on real-world datasets demonstrate that the proposed method outperforms several state-of-the-art approaches.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
97,661
2006.08052
Autofocused oracles for model-based design
Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a protein that binds to a therapeutic target, or a superconducting material with a higher critical temperature than previously observed. To that end, costly experimental measurements are being replaced with calls to high-capacity regression models trained on labeled data, which can be leveraged in an in silico search for design candidates. However, the design goal necessitates moving into regions of the design space beyond where such models were trained. Therefore, one can ask: should the regression model be altered as the design algorithm explores the design space, in the absence of new data? Herein, we answer this question in the affirmative. In particular, we (i) formalize the data-driven design problem as a non-zero-sum game, (ii) develop a principled strategy for retraining the regression model as the design algorithm proceeds---what we refer to as autofocusing, and (iii) demonstrate the promise of autofocusing empirically.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
182,051
2011.13284
A question-answering system for aircraft pilots' documentation
The aerospace industry relies on massive collections of complex and technical documents covering system descriptions, manuals or procedures. This paper presents a question answering (QA) system that would help aircraft pilots access information in this documentation by naturally interacting with the system and asking questions in natural language. After describing each module of the dialog system, we present a multi-task based approach for the QA module which enables performance improvement on a Flight Crew Operating Manual (FCOM) dataset. A method to combine scores from the retriever and the QA modules is also presented.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
208,432
2412.15386
Systematic Evaluation of Long-Context LLMs on Financial Concepts
Long-context large language models (LC LLMs) promise to increase reliability of LLMs in real-world tasks requiring processing and understanding of long input documents. However, this ability of LC LLMs to reliably utilize their growing context windows remains under investigation. In this work, we evaluate the performance of state-of-the-art GPT-4 suite of LC LLMs in solving a series of progressively challenging tasks, as a function of factors such as context length, task difficulty, and position of key information by creating a real world financial news dataset. Our findings indicate that LC LLMs exhibit brittleness at longer context lengths even for simple tasks, with performance deteriorating sharply as task complexity increases. At longer context lengths, these state-of-the-art models experience catastrophic failures in instruction following resulting in degenerate outputs. Our prompt ablations also reveal unfortunate continued sensitivity to both the placement of the task instruction in the context window as well as minor markdown formatting. Finally, we advocate for more rigorous evaluation of LC LLMs by employing holistic metrics such as F1 (rather than recall) and reporting confidence intervals, thereby ensuring robust and conclusive findings.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
519,089
2409.13305
Model Predictive Control For Multiple Castaway Tracking with an Autonomous Aerial Agent
Over the past few years, a plethora of advancements in Unmanned Aerial Vehicle (UAV) technology has paved the way for UAV-based search and rescue operations with transformative impact on the outcome of critical life-saving missions. This paper dives into the challenging task of multiple castaway tracking using an autonomous UAV agent. Leveraging the computing power of modern embedded devices, we propose a Model Predictive Control (MPC) framework for tracking multiple castaways assumed to drift afloat in the aftermath of a maritime accident. We consider a stationary radar sensor that is responsible for signaling the search mission by providing noisy measurements of each castaway's initial state. The UAV agent aims at detecting and tracking the moving targets with its equipped onboard camera sensor that has limited sensing range. In this work, we also experimentally determine the probability of target detection from real-world data by training and evaluating various Convolutional Neural Networks (CNNs). Extensive qualitative and quantitative evaluations demonstrate the performance of the proposed approach.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
489,934
2404.06220
Zero-Shot Relational Learning for Multimodal Knowledge Graphs
Relational learning is an essential task in the domain of knowledge representation, particularly in knowledge graph completion (KGC). While relational learning in traditional single-modal settings has been extensively studied, exploring it within a multimodal KGC context presents distinct challenges and opportunities. One of the major challenges is inference on newly discovered relations without any associated training data. This zero-shot relational learning scenario poses unique requirements for multimodal KGC, i.e., utilizing multimodality to facilitate relational learning. However, existing works fail to support the leverage of multimodal information and leave the problem unexplored. In this paper, we propose a novel end-to-end framework, consisting of three components, i.e., multimodal learner, structure consolidator, and relation embedding generator, to integrate diverse multimodal information and knowledge graph structures to facilitate the zero-shot relational learning. Evaluation results on three multimodal knowledge graphs demonstrate the superior performance of our proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
445,377
2109.04200
Double-Scale Self-Supervised Hypergraph Learning for Group Recommendation
With the prevalence of social media, there has recently been a proliferation of recommenders that shift their focus from individual modeling to group recommendation. Since the group preference is a mixture of various predilections from group members, the fundamental challenge of group recommendation is to model the correlations among members. Existing methods mostly adopt heuristic or attention-based preference aggregation strategies to synthesize group preferences. However, these models mainly focus on the pairwise connections of users and ignore the complex high-order interactions within and beyond groups. Besides, group recommendation suffers seriously from the problem of data sparsity due to severely sparse group-item interactions. In this paper, we propose a self-supervised hypergraph learning framework for group recommendation to achieve two goals: (1) capturing the intra- and inter-group interactions among users; (2) alleviating the data sparsity issue with the raw data itself. Technically, for (1), a hierarchical hypergraph convolutional network based on the user- and group-level hypergraphs is developed to model the complex tuplewise correlations among users within and beyond groups. For (2), we design a double-scale node dropout strategy to create self-supervision signals that can regularize user representations with different granularities against the sparsity issue. The experimental analysis on multiple benchmark datasets demonstrates the superiority of the proposed model and also elucidates the rationality of the hypergraph modeling and the double-scale self-supervision.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
254,311
1609.05043
Linear representations of convolutional codes over rings
In this paper we extend the relation between convolutional codes and linear systems over finite fields to certain commutative rings through first order representations. We introduce the definition of rings with representations as those for which these representations always exist, and we show that finite products of finite fields belong to this class. We develop the input/state/output representations for convolutional codes over these rings, and we show how to use them to construct observable convolutional codes as in the classical case.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
61,063
1910.04210
Perturbation Sensitivity Analysis to Detect Unintended Model Biases
Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language. However, most language data reflect the public discourse at the time the data was produced, and hence NLP models are susceptible to learning incidental associations around named referents at a particular point in time, in addition to general linguistic meaning. An NLP system designed to model notions such as sentiment and toxicity should ideally produce scores that are independent of the identity of such entities mentioned in text and their social associations. For example, in a general purpose sentiment analysis system, a phrase such as I hate Katy Perry should be interpreted as having the same sentiment as I hate Taylor Swift. Based on this idea, we propose a generic evaluation framework, Perturbation Sensitivity Analysis, which detects unintended model biases related to named entities, and requires no new annotations or corpora. We demonstrate the utility of this analysis by employing it on two different NLP models --- a sentiment model and a toxicity model --- applied on online comments in English language from four different genres.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
148,702
2306.04849
ScaleDet: A Scalable Multi-Dataset Object Detector
Multi-dataset training provides a viable solution for exploiting heterogeneous large-scale datasets without extra annotation cost. In this work, we propose a scalable multi-dataset detector (ScaleDet) that can scale up its generalization across datasets when increasing the number of training datasets. Unlike existing multi-dataset learners that mostly rely on manual relabelling efforts or sophisticated optimizations to unify labels across datasets, we introduce a simple yet scalable formulation to derive a unified semantic label space for multi-dataset training. ScaleDet is trained by visual-textual alignment to learn the label assignment with label semantic similarities across datasets. Once trained, ScaleDet can generalize well on any given upstream and downstream datasets with seen and unseen classes. We conduct extensive experiments using LVIS, COCO, Objects365, OpenImages as upstream datasets, and 13 datasets from Object Detection in the Wild (ODinW) as downstream datasets. Our results show that ScaleDet achieves compelling strong model performance with an mAP of 50.7 on LVIS, 58.8 on COCO, 46.8 on Objects365, 76.2 on OpenImages, and 71.8 on ODinW, surpassing state-of-the-art detectors with the same backbone.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
371,931
1501.03569
On the Capacity of Symmetric Gaussian Interference Channels with Feedback
In this paper, we propose a new coding scheme for symmetric Gaussian interference channels with feedback based on the ideas of time-varying coding schemes. The proposed scheme improves the Suh-Tse and Kramer inner bounds of the channel capacity for the cases of weak and not very strong interference. This improvement is more significant when the signal-to-noise ratio (SNR) is not very high. It is shown theoretically and numerically that our coding scheme can outperform the Kramer code. In addition, the generalized degrees-of-freedom of our proposed coding scheme is equal to the Suh-Tse scheme in the strong interference case. The numerical results show that our coding scheme can attain better performance than the Suh-Tse coding scheme for all channel parameters. Furthermore, the simplicity of the encoding/decoding algorithms is another strong point of our proposed coding scheme compared with the Suh-Tse coding scheme. More importantly, our results show that an optimal coding scheme for the symmetric Gaussian interference channels with feedback can be achieved by using only marginal posterior distributions under a better cooperation strategy between transmitters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,279