Schema:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
1903.06814
Generate What You Can't See - a View-dependent Image Generation
In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images.
Labels: cs.RO, cs.CV · __index_level_0__: 124,468
2403.18104
Mathematical Foundation and Corrections for Full Range Head Pose Estimation
Numerous works concerning head pose estimation (HPE) offer algorithms or propose neural network-based approaches for extracting Euler angles from either facial key points or directly from images of the head region. However, many works fail to provide clear definitions of the coordinate systems and the Euler or Tait-Bryan angle orders in use. It is a well-known fact that rotation matrices depend on coordinate systems, and that yaw, roll, and pitch angles are sensitive to their application order. Without precise definitions, it becomes challenging to validate the correctness of the output head pose and of the drawing routines employed in prior works. In this paper, we thoroughly examine the Euler angles defined in the 300W-LP dataset, head pose estimation methods such as 3DDFA-v2, 6D-RepNet, and WHENet, and the validity of their drawing routines for the Euler angles. Where necessary, we infer the coordinate system and the yaw, roll, pitch sequence from the provided code. This paper presents (1) code and algorithms for inferring the coordinate system from provided source code, for determining the Euler angle application order, and for extracting precise rotation matrices and Euler angles, (2) code and algorithms for converting poses from one rotation system to another, (3) novel formulae for 2D augmentations of the rotation matrices, and (4) derivations and code for correct drawing routines for rotation matrices and poses. This paper also addresses the feasibility of defining rotations with the right-handed coordinate system used in Wikipedia and SciPy, which makes Euler angle extraction much easier for full-range head pose research.
Labels: cs.LG, cs.CV · __index_level_0__: 441,763
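The abstract above stresses that yaw, roll, and pitch are sensitive to their application order. A minimal pure-Python sketch (function names are illustrative, not taken from the paper's code) builds elementary rotations and shows that composing them in two different orders yields different rotation matrices:

```python
import math

def rot_y(a):
    # elementary rotation about the y-axis (pitch)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    # elementary rotation about the z-axis (yaw)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

yaw, pitch = math.radians(30.0), math.radians(20.0)
R_zy = matmul(rot_z(yaw), rot_y(pitch))  # pitch applied first, then yaw
R_yz = matmul(rot_y(pitch), rot_z(yaw))  # yaw applied first, then pitch

# the two orders disagree, so a reported (yaw, pitch, roll) triple is
# ambiguous unless the convention is stated explicitly
order_matters = max(abs(R_zy[i][j] - R_yz[i][j])
                    for i in range(3) for j in range(3)) > 1e-6
```

Both products are valid rotation matrices; they simply describe different orientations, which is exactly why the paper insists on precise convention definitions.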
2004.11517
Computational Experiment Design for Operations Model Simulation
Computer simulations that demonstrate the value of novel approaches are crucial to developing more flexible and robust power systems operations with high penetrations of renewable energy at multiple geographic and temporal scales. However, optimization-based simulations that depend on forecast data often face challenges in evaluating performance, reproducing results, and testing under realistic simulation scenarios. In this paper, we develop scientific computing best practices for the validation and reproduction of power systems operational models. We then employ two case studies to demonstrate the proposed validation and reproduction framework.
Labels: cs.SY · __index_level_0__: 173,935
2208.11827
SSD -- Software for Systems with Delays: Reproducible Examples and Benchmarks on Model Reduction and H2 Norm Computation
We present SSD, Software for Systems with Delays, a de novo MATLAB package for the analysis and model reduction of retarded time delay systems (RTDS). Underneath, our delay system object bridges the RTDS representation and the Linear Fractional Transformation (LFT) representation of MATLAB, which allows seamless use of many of MATLAB's available visualizations. In addition, we implement a set of key functionalities such as H2 norm and system gramian computations, as well as balanced realization and reduction via direct integral definitions utilizing sparse computation. As a theoretical contribution, we extend frequency-limited balanced reduction to delay systems for the first time, propose a computational algorithm, and give its implementation. We collect two sets of benchmark problems on H2 norm computation and model reduction. SSD is publicly available on GitHub at https://github.com/gumussoysuat/ssd. Our reproducible paper and the two benchmark collections are shared as executable notebooks.
Labels: cs.SY · __index_level_0__: 314,546
2405.13699
Uncertainty-aware Evaluation of Auxiliary Anomalies with the Expected Anomaly Posterior
Anomaly detection is the task of identifying examples that do not behave as expected. Because anomalies are rare and unexpected events, collecting real anomalous examples is often challenging in several applications. In addition, learning an anomaly detector with limited (or no) anomalies often yields poor prediction performance. One option is to employ auxiliary synthetic anomalies to improve the model training. However, synthetic anomalies may be of poor quality: anomalies that are unrealistic or indistinguishable from normal samples may deteriorate the detector's performance. Unfortunately, no existing methods quantify the quality of auxiliary anomalies. We fill in this gap and propose the expected anomaly posterior (EAP), an uncertainty-based score function that measures the quality of auxiliary anomalies by quantifying the total uncertainty of an anomaly detector. Experimentally on 40 benchmark datasets of images and tabular data, we show that EAP outperforms 12 adapted data quality estimators in the majority of cases.
Labels: cs.AI, cs.LG · __index_level_0__: 456,041
2011.05988
Maximum sampled conditional likelihood for informative subsampling
Subsampling is a computationally effective approach to extract information from massive data sets when computing resources are limited. After a subsample is taken from the full data, most available methods use an inverse probability weighted (IPW) objective function to estimate the model parameters. The IPW estimator does not fully utilize the information in the selected subsample. In this paper, we propose to use the maximum sampled conditional likelihood estimator (MSCLE) based on the sampled data. We establish the asymptotic normality of the MSCLE and prove that its asymptotic variance-covariance matrix is the smallest among a class of asymptotically unbiased estimators, including the IPW estimator. We further discuss the asymptotic results with the L-optimal subsampling probabilities and illustrate the estimation procedure with generalized linear models. Numerical experiments are provided to evaluate the practical performance of the proposed method.
Labels: cs.LG, cs.IT · __index_level_0__: 206,103
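The IPW estimator contrasted in the abstract above is the textbook Horvitz-Thompson construction: each sampled response is weighted by the inverse of its inclusion probability, which makes the estimator unbiased. A small sketch with illustrative data (not the paper's code) checks this both exactly and by Monte Carlo:

```python
import random

y = list(range(1, 11))         # full-data responses, total = 55
p = [0.3] * 5 + [0.7] * 5      # known, unit-specific sampling probabilities

# exact unbiasedness: E[sum(I_i * y_i / p_i)] = sum(y_i), because each
# unit contributes p_i * (y_i / p_i) = y_i to the expectation
expected_total = sum(pi * (yi / pi) for yi, pi in zip(y, p))

# Monte Carlo check under independent Bernoulli (Poisson) subsampling
rng = random.Random(0)
draws = 20_000
mc = sum(
    sum(yi / pi for yi, pi in zip(y, p) if rng.random() < pi)
    for _ in range(draws)
) / draws
# mc is close to the true total sum(y) == 55
```

The weighting removes the selection bias, but, as the abstract notes, the IPW objective does not use all the information in the realized subsample, which is the gap the MSCLE targets.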
2207.04575
A Waste Copper Granules Rating System Based on Machine Vision
In the field of waste copper granules recycling, engineers must identify the different sorts of impurities in waste copper granules and estimate their mass proportion, relying on experience, before rating. This manual rating method is costly and lacks objectivity and comprehensiveness. To tackle this problem, we propose a waste copper granules rating system based on machine vision and deep learning. We first formulate the rating task as a 2D image recognition and purity regression task. Then we design a two-stage convolutional rating network to compute the mass purity and rating level of waste copper granules. Our rating network includes a segmentation network and a purity regression network, which respectively calculate the semantic segmentation heatmaps and purity results of the waste copper granules. After training the rating network on the augmented datasets, experiments on real waste copper granules demonstrate the effectiveness and superiority of the proposed network. Specifically, our system is superior to the manual method in terms of accuracy, effectiveness, robustness, and objectivity.
Labels: cs.CV · __index_level_0__: 307,242
2003.05856
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones. Two recent continual-learning scenarios have opened new avenues of research. In meta-continual learning, the model is pre-trained to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario. We empirically show that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, as well as standard continual learning and meta-learning approaches.
Labels: cs.AI, cs.LG · __index_level_0__: 167,966
2205.11957
Comparison of Fractional-Order and Integer-Order H-infinity Control of a Non-Collocated Two-Mass Oscillator
We consider the robust control of a two-mass oscillator with a dominant input delay. Our aim is to compare a fractional-order tuning approach, including the partial compensation of non-minimum phase zeros, with a classical H-infinity loop-shaping design, since both designs lead to a relatively high controller order. First of all, a detailed physical model is derived and validated using measurement data. Based on the linearized model, both controllers are designed to be comparable, i.e. they show a similar crossover frequency in the open loop, and the final controller order is reduced to the same range for both designs. The major difference between the two is how the feed-forward action is included. The loop-shaping approach with fractional-order elements relies on the plant inverse using a flat output, whereas the H-infinity design incorporates a two-degree-of-freedom control, i.e. the reference signal is included in the known inputs of the generalized plant. Each controller is tested in simulation and experiment. As both open loops are nearly identical in the frequency range of interest, the results from an input disturbance experiment show no major difference. The different design approaches for the feed-forward path are clearly visible in the tracking experiment.
Labels: cs.SY · __index_level_0__: 298,349
2305.13948
Decoupled Kullback-Leibler Divergence Loss
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss that consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels. Thanks to the decomposed formulation of DKL loss, we have identified two areas for improvement. Firstly, we address the limitation of KL/DKL in scenarios like knowledge distillation by breaking its asymmetric optimization property. This modification ensures that the $\mathbf{w}$MSE component is always effective during training, providing extra constructive cues. Secondly, we introduce class-wise global information into KL/DKL to mitigate bias from individual samples. With these two enhancements, we derive the Improved Kullback-Leibler (IKL) Divergence loss and evaluate its effectiveness by conducting experiments on CIFAR-10/100 and ImageNet datasets, focusing on adversarial training, and knowledge distillation tasks. The proposed approach achieves new state-of-the-art adversarial robustness on the public leaderboard -- RobustBench and competitive performance on knowledge distillation, demonstrating the substantial practical merits. Our code is available at https://github.com/jiequancui/DKL.
Labels: cs.LG, cs.CV · __index_level_0__: 366,781
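The wMSE decomposition above is specific to the paper, but it builds on the classical identity KL(p‖q) = H(p, q) − H(p): the KL loss equals a cross-entropy term with soft labels minus an entropy term that is constant in q. A quick numerical check with illustrative distributions:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]   # soft "teacher" distribution
q = [0.5, 0.3, 0.2]   # "student" distribution
# KL(p||q) = H(p, q) - H(p); gap should be zero up to rounding
gap = kl_divergence(p, q) - (cross_entropy(p, q) - entropy(p))
```

This is why, in knowledge distillation, minimizing the KL loss in q is equivalent to minimizing a soft-label cross-entropy; the paper's contribution is the further split of that objective into wMSE and cross-entropy parts.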
1909.12483
Mapping with Reflection -- Detection and Utilization of Reflection in 3D Lidar Scans
This paper presents a method that detects reflections in 3D light detection and ranging (Lidar) scans and uses them to classify points and to map objects outside the line of sight. Our software uses several approaches to analyze the point cloud, including intensity peak detection, dual return detection, plane fitting, and finding the boundaries. These approaches can classify the point cloud and detect the reflections in it. By mirroring the reflection points on the detected window pane and adding classification labels to the points, we can improve the map quality in a Simultaneous Localization and Mapping (SLAM) framework. Experiments using real scan data and ground truth data showcase the effectiveness of our method.
Labels: cs.RO · __index_level_0__: 147,144
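Mirroring reflection points on a detected window pane, as described above, amounts to reflecting each 3D point across the fitted plane. A generic sketch of that single step (the plane-fitting part is omitted, and all names are illustrative):

```python
def reflect_across_plane(point, normal, d):
    # reflect a 3D point across the plane n.x = d, where n is a unit normal:
    # p' = p - 2 * (n.p - d) * n
    dist = sum(pi * ni for pi, ni in zip(point, normal)) - d
    return [pi - 2.0 * dist * ni for pi, ni in zip(point, normal)]

# window pane lying in the plane x = 2 (unit normal along x)
normal, d = [1.0, 0.0, 0.0], 2.0

ghost = [5.0, 1.0, 0.5]                        # apparent "ghost" point seen
real = reflect_across_plane(ghost, normal, d)  # through the reflection
back = reflect_across_plane(real, normal, d)   # reflecting twice = identity
```

The ghost point at x = 5 maps to x = −1, i.e. the same distance on the other side of the pane, which is how points seen "through" a reflection can be placed at their true positions in the map.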
1508.02626
Answering Fuzzy Conjunctive Queries over Finitely Valued Fuzzy Ontologies
Fuzzy Description Logics (DLs) provide a means for representing vague knowledge about an application domain. In this paper, we study fuzzy extensions of conjunctive queries (CQs) over the DL $\mathcal{SROIQ}$ based on finite chains of degrees of truth. To answer such queries, we extend a well-known technique that reduces the fuzzy ontology to a classical one, and use classical DL reasoners as a black box. We improve the complexity of previous reduction techniques for finitely valued fuzzy DLs, which allows us to prove tight complexity results for answering certain kinds of fuzzy CQs. We conclude with an experimental evaluation of a prototype implementation, showing the feasibility of our approach.
Labels: cs.AI, Other · __index_level_0__: 45,925
1603.05361
Adaptive Rejection of Periodic Disturbances Acting on Linear Systems with Unknown Dynamics
This paper proposes a novel direct adaptive control method for rejecting unknown deterministic disturbances and tracking unknown trajectories in systems with uncertain dynamics when the disturbances or trajectories are the summation of multiple sinusoids with known frequencies, such as periodic profiles or disturbances. The proposed algorithm does not require a model of the plant dynamics and does not use batches of measurements in the adaptation process. Moreover, it is applicable to both minimum and non-minimum phase plants. The algorithm is a "direct" adaptive method, in the sense that the identification of system parameters and the control design are performed simultaneously. In order to verify the effectiveness of the proposed method, an add-on controller is designed and implemented in the servo system of a hard disk drive to track unknown nano-scale periodic trajectories.
Labels: cs.SY · __index_level_0__: 53,355
1906.01873
Towards conceptual generalization in the embedding space
Humans are able to conceive physical reality by jointly learning different facets thereof. To every pair of notions related to a perceived reality may correspond a mutual relation, which is a notion on its own, but one level higher. Thus, we may have a description of perceived reality on at least two levels, and the translation map between them is in general, due to their different content corpora, one-to-many. Following the success of unsupervised neural machine translation models, which are essentially one-to-one mappings trained separately on monolingual corpora, we examine further capabilities of the unsupervised deep learning methods used there and apply some of these methods to sets of notions of different level and measure. Using graph and word embedding-like techniques, we build a one-to-many map without parallel data in order to establish a unified vector representation of the outer world by combining notions of different kinds into a unique conceptual framework. Due to their latent similarity, by aligning the two embedding spaces in a purely unsupervised way, one obtains a geometric relation between objects of cognition on the two levels, making it possible to express natural knowledge using one description in the context of the other.
Labels: cs.AI, cs.CL · __index_level_0__: 133,867
2205.03530
Gigs with Guarantees: Achieving Fair Wage for Food Delivery Workers
With the increasing popularity of food delivery platforms, it has become pertinent to look into the working conditions of the 'gig' workers in these platforms, especially providing them fair wages, reasonable working hours, and transparency on work availability. However, any solution to these problems must not degrade customer experience and must be cost-effective to ensure that platforms are willing to adopt them. We propose WORK4FOOD, which provides income guarantees to delivery agents, while minimizing platform costs and ensuring customer satisfaction. WORK4FOOD ensures that the income guarantees are met in such a way that they do not lead to increased working hours or a worse environmental impact. To incorporate these objectives, WORK4FOOD balances supply and demand by controlling the number of agents in the system and providing dynamic payment guarantees to agents based on factors such as agent location, ratings, etc. We evaluate WORK4FOOD on a real-world dataset from a leading food delivery platform and establish its advantages over the state of the art in terms of the multi-dimensional objectives at hand.
Labels: cs.SI, cs.AI, cs.CY · __index_level_0__: 295,311
1307.8104
Neural Network Capacity for Multilevel Inputs
This paper examines the memory capacity of generalized neural networks. Hopfield networks trained with a variety of learning techniques are investigated for their capacity both for binary and non-binary alphabets. It is shown that the capacity can be much increased when multilevel inputs are used. New learning strategies are proposed to increase Hopfield network capacity, and the scalability of these methods is also examined in respect to size of the network. The ability to recall entire patterns from stimulation of a single neuron is examined for the increased capacity networks.
Labels: cs.NE · __index_level_0__: 26,168
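The pattern recall investigated above can be sketched for the classical binary (±1) Hopfield network with Hebbian learning; the patterns and network size below are illustrative, not from the paper:

```python
n = 16
p1 = [1] * 8 + [-1] * 8   # stored pattern 1
p2 = [1, -1] * 8          # stored pattern 2, orthogonal to p1

# Hebbian weights: W = (1/n) * sum_k outer(p_k, p_k), with zero diagonal
W = [[0.0 if i == j else (p1[i] * p1[j] + p2[i] * p2[j]) / n
      for j in range(n)] for i in range(n)]

def sign(v):
    return 1 if v >= 0 else -1

def recall(state, steps=5):
    # synchronous updates: s <- sign(W s), iterated to a fixed point
    for _ in range(steps):
        state = [sign(sum(W[i][j] * state[j] for j in range(n)))
                 for i in range(n)]
    return state

probe = list(p1)
probe[0], probe[1] = -probe[0], -probe[1]   # corrupt two bits of p1
restored = recall(probe)                    # converges back to p1
```

With binary units the capacity scales as roughly 0.138n patterns; the abstract's point is that multilevel (non-binary) alphabets and better learning rules can push well beyond this.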
2004.06843
Bayesian differential programming for robust systems identification under uncertainty
This paper presents a machine learning framework for Bayesian systems identification from noisy, sparse and irregular observations of nonlinear dynamical systems. The proposed method takes advantage of recent developments in differentiable programming to propagate gradient information through ordinary differential equation solvers and perform Bayesian inference with respect to unknown model parameters using Hamiltonian Monte Carlo. This allows us to efficiently infer posterior distributions over plausible models with quantified uncertainty, while the use of sparsity-promoting priors enables the discovery of interpretable and parsimonious representations for the underlying latent dynamics. A series of numerical studies is presented to demonstrate the effectiveness of the proposed methods including nonlinear oscillators, predator-prey systems, chaotic dynamics and systems biology. Taken all together, our findings put forth a novel, flexible and robust workflow for data-driven model discovery under uncertainty.
Labels: cs.LG · __index_level_0__: 172,617
1810.00740
Improving the Generalization of Adversarial Training with Domain Adaptation
By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models. However, most existing adversarial training approaches are based on a specific type of adversarial attack. It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks. Moreover, during adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets. This work mainly focuses on adversarial training with the fast yet efficient FGSM adversary. In this scenario, it is difficult to train a model with strong generalization due to the lack of representative adversarial samples, i.e., the samples are unable to accurately reflect the adversarial domain. To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation (ATDA) method. Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target domain samples. The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain. Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets. To show the transfer ability of our method, we also extend ATDA to adversarial training on iterative attacks such as PGD-Adversarial Training (PAT), and the defense performance is improved considerably.
Labels: cs.LG, cs.CV · __index_level_0__: 109,250
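The single-step FGSM adversary referenced above perturbs the input along the sign of the input gradient of the loss. A self-contained sketch on a toy logistic model (weights and data are made up for illustration, not from the paper):

```python
import math

def logistic_loss(x, w, y):
    # y in {-1, +1}; loss = log(1 + exp(-y * w.x))
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    return math.log(1.0 + math.exp(-margin))

def input_gradient(x, w, y):
    # d loss / d x = -y * sigmoid(-y * w.x) * w
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y / (1.0 + math.exp(margin))
    return [coef * wi for wi in w]

def fgsm(x, w, y, eps):
    # one step of size eps along the sign of the loss gradient
    g = input_gradient(x, w, y)
    return [xi + eps * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]

w = [2.0, -1.0, 0.5]
x = [0.3, -0.2, 0.8]
y = 1
x_adv = fgsm(x, w, y, eps=0.1)
# the single-step perturbation increases the loss on this example
loss_gap = logistic_loss(x_adv, w, y) - logistic_loss(x, w, y)
```

Because the perturbation only uses the gradient's sign, it is cheap enough to run inside every training step, which is exactly why FGSM-based adversarial training scales to large datasets.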
2110.08191
Why don't people use character-level machine translation?
We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT). Despite evidence in the literature that character-level systems are comparable with subword systems, they are virtually never used in competitive setups in WMT competitions. We empirically show that even with recent modeling innovations in character-level natural language processing, character-level MT systems still struggle to match their subword-based counterparts. Character-level MT systems show neither better domain robustness, nor better morphological generalization, despite being often so motivated. However, we are able to show robustness towards source side noise and that translation quality does not degrade with increasing beam size at decoding time.
Labels: cs.CL · __index_level_0__: 261,286
2010.11525
Quiver Signal Processing (QSP)
In this paper we state the basics for a signal processing framework on quiver representations. A quiver is a directed graph, and a quiver representation is an assignment of vector spaces to the nodes of the graph and of linear maps between the vector spaces associated to the nodes. Leveraging tools from representation theory, we propose a signal processing framework that allows us to handle heterogeneous multidimensional information in networks. We provide a set of examples where this framework provides a natural set of tools to understand apparently hidden structure in information. We remark that the proposed framework lays the groundwork for building graph neural networks where information can be processed and handled in alternative ways.
Labels: cs.LG · __index_level_0__: 202,291
2007.15869
Behavioral Economics for Human-in-the-loop Control Systems Design: Overconfidence and the hot hand fallacy
Successful design of human-in-the-loop control systems requires appropriate models for human decision makers. Whilst most paradigms adopted in the control systems literature hide the (limited) decision capability of humans, in behavioral economics individual decision making and optimization processes are well known to be affected by perceptual and behavioral biases. Our goal is to enrich control engineering with some insights from behavioral economics research by exposing such biases in control-relevant settings. This paper addresses the following two key questions: 1) How do behavioral biases affect decision making? 2) What is the role played by feedback in human-in-the-loop control systems? Our experimental framework shows how individuals behave when faced with the task of piloting a UAV under risk and uncertainty, paralleling a real-world decision-making scenario. Our findings support the notion that humans in cyber-physical systems are subject to behavioral biases regardless of -- or even because of -- receiving immediate outcome feedback. We observe substantial shares of drone controllers acting inefficiently by either flying excessively (overconfident) or overly conservatively (underconfident). Furthermore, we observe human controllers self-servingly misinterpreting random sequences, being subject to a "hot hand fallacy". We advise control engineers to mind the human component in order not to compromise technological accomplishments through human issues.
Labels: cs.SY · __index_level_0__: 189,793
1907.00825
Time-to-Event Prediction with Neural Networks and Cox Regression
New methods for time-to-event prediction are proposed by extending the Cox proportional hazards model with neural networks. Building on methodology from nested case-control studies, we propose a loss function that scales well to large data sets and enables fitting of both proportional and non-proportional extensions of the Cox model. Through simulation studies, the proposed loss function is verified to be a good approximation for the Cox partial log-likelihood. The proposed methodology is compared to existing methodologies on real-world data sets and is found to be highly competitive, typically yielding the best performance in terms of Brier score and binomial log-likelihood. A Python package for the proposed methods is available at https://github.com/havakv/pycox.
Labels: cs.LG · __index_level_0__: 137,153
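The loss the abstract builds on approximates the Cox partial log-likelihood, which, for untied event times, sums over events the subject's risk score minus the log-sum-exp of scores over the risk set. A compact sketch with made-up data (this is the classical definition, not the pycox implementation):

```python
import math

def cox_partial_loglik(times, events, scores):
    # scores[i] = x_i . beta; events[i] = 1 if subject i had an event,
    # 0 if right-censored; assumes no tied event times
    ll = 0.0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:
            continue  # censored subjects contribute only via risk sets
        # risk set: everyone still under observation at time t_i
        risk = [scores[j] for j, t_j in enumerate(times) if t_j >= t_i]
        ll += scores[i] - math.log(sum(math.exp(s) for s in risk))
    return ll

times = [2.0, 3.0, 5.0, 7.0]
events = [1, 0, 1, 1]            # subject 2 is right-censored
scores = [0.5, -0.2, 0.1, 0.3]   # linear predictors x_i . beta
pll = cox_partial_loglik(times, events, scores)
# each term is the log of a probability, so pll <= 0
```

The quadratic cost of the nested risk sets is what the paper's nested case-control sampling avoids, by approximating each risk set with a small sampled subset.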
1909.02603
Additive function approximation in the brain
Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons. For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned. A network with $d$ inputs per neuron is found to be equivalent to an additive model of order $d$, whereas with a degree distribution the network combines additive terms of different orders. We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable. Thus, even simple brain architectures can be powerful function approximators. Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists.
Labels: cs.LG, cs.NE · __index_level_0__: 144,237
1907.10526
A Convolutional Forward and Back-Projection Model for Fan-Beam Geometry
Iterative methods for tomographic image reconstruction have great potential for enabling high quality imaging from low-dose projection data. The computational burden of iterative reconstruction algorithms, however, has been an impediment in their adoption in practical CT reconstruction problems. We present an approach for highly efficient and accurate computation of forward model for image reconstruction in fan-beam geometry in X-ray CT. The efficiency of computations makes this approach suitable for large-scale optimization algorithms with on-the-fly, memory-less, computations of the forward and back-projection. Our experiments demonstrate the improvements in accuracy as well as efficiency of our model, specifically for first-order box splines (i.e., pixel-basis) compared to recently developed methods for this purpose, namely Look-up Table-based Ray Integration (LTRI) and Separable Footprints (SF) in 2-D.
Labels: cs.CV, Other · __index_level_0__: 139,647
2411.19593
Self-Supervised Denoiser Framework
Reconstructing images using Computed Tomography (CT) in an industrial context leads to specific challenges that differ from those encountered in other areas, such as clinical CT. Indeed, non-destructive testing with industrial CT will often involve scanning multiple similar objects while maintaining high throughput, requiring short scanning times, which is not a relevant concern in clinical CT. Under-sampling the tomographic data (sinograms) is a natural way to reduce the scanning time at the cost of image quality, since the latter depends on the number of measurements. In such a scenario, post-processing techniques are required to compensate for the image artifacts induced by the sinogram sparsity. We introduce the Self-supervised Denoiser Framework (SDF), a self-supervised training method that leverages pre-training on highly sampled sinogram data to enhance the quality of images reconstructed from undersampled sinogram data. The main contribution of SDF is that it proposes to train an image denoiser in the sinogram space by setting the learning task as the prediction of one sinogram subset from another. As such, it does not require ground-truth image data, leverages the abundant data modality in CT, the sinogram, and can drastically enhance the quality of images reconstructed from a fraction of the measurements. We demonstrate that SDF produces better image quality, in terms of peak signal-to-noise ratio, than other analytical and self-supervised frameworks in both 2D fan-beam and 3D cone-beam CT settings. Moreover, we show that the enhancement provided by SDF carries over when fine-tuning the image denoiser on a few examples, making it a suitable pre-training technique in a context where there is little high-quality image data. Our results are established on experimental datasets, making SDF a strong candidate for being the building block of foundational image-enhancement models in CT.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
512,320
1911.03962
Classical linear logic, cobordisms and categorial grammars
We propose a categorial grammar based on classical multiplicative linear logic. This can be seen as an extension of abstract categorial grammars (ACG) and is at least as expressive. However, constituents of {\it linear logic grammars (LLG)} are not abstract ${\lambda}$-terms, but simply tuples of words with labeled endpoints and supplied with specific {\it plugging instructions}: the sets of endpoints are subdivided into the {\it incoming} and the {\it outgoing} parts. We call such objects {\it word cobordisms}. A key observation is that word cobordisms can be organized in a category, very similar to the familiar category of topological cobordisms. This category is symmetric monoidal closed and compact closed and thus is a model of linear $\lambda$-calculus and classical, as well as intuitionistic linear logic. This allows us to use linear logic as a typing system for word cobordisms. At least, this gives a concrete and intuitive representation of ACG. We think, however, that the category of word cobordisms, which has a rich structure and is independent of any grammar, might be interesting in its own right.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
152,844
2010.15437
Memory Attentive Fusion: External Language Model Integration for Transformer-based Sequence-to-Sequence Model
This paper presents a novel fusion method for integrating an external language model (LM) into the Transformer-based sequence-to-sequence (seq2seq) model. While paired data are basically required to train the seq2seq model, the external LM can be trained with only unpaired data. Thus, it is important to leverage memorized knowledge in the external LM for building the seq2seq model, since it is hard to prepare a large amount of paired data. However, the existing fusion methods assume that the LM is integrated with recurrent neural network-based seq2seq models instead of the Transformer. Therefore, this paper proposes a fusion method that can explicitly utilize network structures in the Transformer. The proposed method, called {\bf memory attentive fusion}, leverages the Transformer-style attention mechanism that repeats source-target attention in a multi-hop manner for reading the memorized knowledge in the LM. Our experiments on two text-style conversion tasks demonstrate that the proposed method performs better than conventional fusion methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
203,772
2404.00354
Follow me: an architecture for user identification and social navigation with a mobile robot
Over the past decade, a multitude of service robots have been developed to fulfill a wide range of practical purposes. Notably, roles such as reception and robotic guidance have garnered extensive popularity. In these positions, robots are progressively assuming the responsibilities traditionally held by human staff in assisting customers. Ensuring the safe and socially acceptable operation of robots in such environments poses a fundamental challenge within the context of Socially Responsible Navigation (SRN). This article presents an architecture for user identification and social navigation with a mobile robot that employs computer vision, machine learning, and artificial intelligence algorithms to identify and guide users in a social navigation context, thereby providing an intuitive and user-friendly experience with the robot.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
442,879
2007.12603
IR-BERT: Leveraging BERT for Semantic Search in Background Linking for News Articles
This work describes our two approaches for the background linking task of TREC 2020 News Track. The main objective of this task is to recommend a list of relevant articles that the reader should refer to in order to understand the context and gain background information of the query article. Our first approach focuses on building an effective search query by combining weighted keywords extracted from the query document and uses BM25 for retrieval. The second approach leverages the capability of SBERT (Nils Reimers et al.) to learn contextual representations of the query in order to perform semantic search over the corpus. We empirically show that employing a language model benefits our approach in understanding the context as well as the background of the query article. The proposed approaches are evaluated on the TREC 2018 Washington Post dataset and our best model outperforms the TREC median as well as the highest scoring model of 2018 in terms of the nDCG@5 metric. We further propose a diversity measure to evaluate the effectiveness of the various approaches in retrieving a diverse set of documents. This would potentially motivate researchers to work on introducing diversity in their recommended list. We have open sourced our implementation on Github and plan to submit our runs for the background linking task in TREC 2020.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
188,870
2402.16006
ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings
The safety defense methods of Large Language Models (LLMs) remain limited because the dangerous prompts are manually curated to cover just a few known attack types, which fails to keep pace with emerging varieties. Recent studies found that attaching suffixes to harmful instructions can bypass the defenses of LLMs and lead to dangerous outputs. However, similar to traditional text adversarial attacks, this approach, while effective, is limited by the challenge of discrete tokens. This gradient-based discrete optimization attack requires over 100,000 LLM calls, and due to the unreadability of the adversarial suffixes, it can be relatively easily detected by common defense methods such as perplexity filters. To cope with this challenge, in this paper we propose an Adversarial Suffix Embedding Translation Framework (ASETF), aimed at transforming continuous adversarial suffix embeddings into coherent and understandable text. This method greatly reduces the computational overhead during the attack process and helps to automatically generate multiple adversarial samples, which can be used as data to strengthen LLM security defenses. Experimental evaluations were conducted on Llama2, Vicuna, and other prominent LLMs, employing harmful directives sourced from the Advbench dataset. The results indicate that our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate than existing techniques, while significantly enhancing the textual fluency of the prompts. In addition, our approach can be generalized into a broader method for generating transferable adversarial suffixes that can successfully attack multiple LLMs, even black-box LLMs, such as ChatGPT and Gemini.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
432,379
2311.11948
Mapping Pipelines and Simultaneous Localization for Petrochemical Industry Robots
Inspecting petrochemical pipelines is challenging due to hazardous materials, narrow diameters, and inaccessible locations. Mobile robots are promising for autonomous pipeline inspection and mapping. This project aimed to simulate and implement a robot capable of simultaneous localization and mapping (SLAM) in an indoor maze-like environment representing simplified pipelines. The approach involved simulating a differential drive robot in Gazebo/ROS, equipping it with sensors, implementing SLAM using mapping, and path planning with move_base. A physical robot was then built and tested by manually driving it in a constructed maze while collecting sensor data and mapping. Sensor fusion of wheel encoders, Kinect camera, and inertial measurement unit (IMU) data was explored to improve odometry and mapping accuracy without encoders. The final map had reasonable correspondence to the true maze despite lacking wheel encoders. In summary, results show the feasibility of using ROS-based SLAM for pipeline inspection if accounting for real-world complexities.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
409,138
2306.08671
Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data
We introduce a method that can learn to predict scene-level implicit functions for 3D reconstruction from posed RGBD data. At test time, our system maps a previously unseen RGB image to a 3D reconstruction of a scene via implicit functions. While implicit functions for 3D reconstruction have often been tied to meshes, we show that we can train one using only a set of posed RGBD images. This setting may help 3D reconstruction unlock the sea of accelerometer+RGBD data that is coming with new phones. Our system, D2-DRDF, can match and sometimes outperform current methods that use mesh supervision and shows better robustness to sparse data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
373,493
cs/0303015
Statistical efficiency of curve fitting algorithms
We study the problem of fitting parametrized curves to noisy data. Under certain assumptions (known as Cartesian and radial functional models), we derive asymptotic expressions for the bias and the covariance matrix of the parameter estimates. We also extend Kanatani's version of the Cramer-Rao lower bound, which he proved for unbiased estimates only, to more general estimates that include many popular algorithms (most notably, the orthogonal least squares and algebraic fits). We then show that the gradient-weighted algebraic fit is statistically efficient and describe all other statistically efficient algebraic fits.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
537,807
1007.0484
Query Strategies for Evading Convex-Inducing Classifiers
Classifiers are often used to detect miscreant activities. We study how an adversary can systematically query a classifier to elicit information that allows the adversary to evade detection while incurring a near-minimal cost of modifying their intended malfeasance. We generalize the theory of Lowd and Meek (2005) to the family of convex-inducing classifiers that partition input space into two sets one of which is convex. We present query algorithms for this family that construct undetected instances of approximately minimal cost using only polynomially-many queries in the dimension of the space and in the level of approximation. Our results demonstrate that near-optimal evasion can be accomplished without reverse-engineering the classifier's decision boundary. We also consider general lp costs and show that near-optimal evasion on the family of convex-inducing classifiers is generally efficient for both positive and negative convexity for all levels of approximation if p=1.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
6,964
2407.15187
HoloDreamer: Holistic 3D Panoramic World Generation from Text Descriptions
3D scene generation is in high demand across various domains, including virtual reality, gaming, and the film industry. Owing to the powerful generative capabilities of text-to-image diffusion models that provide reliable priors, the creation of 3D scenes using only text prompts has become viable, thereby significantly advancing research in text-driven 3D scene generation. In order to obtain multiple-view supervision from 2D diffusion models, prevailing methods typically employ the diffusion model to generate an initial local image, followed by iteratively outpainting the local image using diffusion models to gradually generate scenes. Nevertheless, these outpainting-based approaches are prone to producing globally inconsistent scene generation results with a low degree of completeness, restricting their broader applications. To tackle these problems, we introduce HoloDreamer, a framework that first generates a high-definition panorama as a holistic initialization of the full 3D scene and then leverages 3D Gaussian Splatting (3D-GS) to quickly reconstruct the 3D scene, thereby facilitating the creation of view-consistent and fully enclosed 3D scenes. Specifically, we propose Stylized Equirectangular Panorama Generation, a pipeline that combines multiple diffusion models to enable stylized and detailed equirectangular panorama generation from complex text prompts. Subsequently, Enhanced Two-Stage Panorama Reconstruction is introduced, conducting a two-stage optimization of 3D-GS to inpaint the missing region and enhance the integrity of the scene. Comprehensive experiments demonstrate that our method outperforms prior works in terms of overall visual consistency and harmony as well as reconstruction quality and rendering robustness when generating fully enclosed scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
475,069
1906.06442
Tagged Back-Translation
Recent work in Neural Machine Translation (NMT) has shown significant quality gains from noised-beam decoding during back-translation, a method to generate synthetic parallel data. We show that the main role of such synthetic noise is not to diversify the source side, as previously suggested, but simply to indicate to the model that the given source is synthetic. We propose a simpler alternative to noising techniques, consisting of tagging back-translated source sentences with an extra token. Our results on WMT outperform noised back-translation in English-Romanian and match performance on English-German, re-defining state-of-the-art in the former.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
135,302
1902.09686
Maximum Marginal Likelihood Estimation of Phase Connections in Power Distribution Systems
Accurate phase connectivity information is essential for advanced monitoring and control applications in power distribution systems. The existing data-driven approaches for phase identification lack precise physical interpretation and theoretical performance guarantee. Their performance generally deteriorates as the complexity of the network, the number of phase connections, and the level of load balance increase. In this paper, by linearizing the three-phase power flow manifold, we develop a physical model, which links the phase connections to the smart meter measurements. The phase identification problem is first formulated as a maximum likelihood estimation problem and then reformulated as a maximum marginal likelihood estimation problem. We prove that the correct phase connection achieves the highest log likelihood values for both problems. An efficient solution method is proposed by decomposing the original problem into subproblems with a binary least-squares formulation. The numerical tests on a comprehensive set of distribution circuits show that our proposed method yields very high accuracy on both radial and meshed distribution circuits with a combination of single-phase, two-phase, and three-phase loads. The proposed algorithm is robust with respect to inaccurate feeder models and incomplete measurements. It also outperforms the existing methods on complex circuits.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
122,468
1903.02958
Reparameterizing Distributions on Lie Groups
Reparameterizable densities are an important way to learn probability distributions in a deep learning setting. For many distributions it is possible to create low-variance gradient estimators by utilizing a `reparameterization trick'. Due to the absence of a general reparameterization trick, much research has recently been devoted to extend the number of reparameterizable distributional families. Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential classes of spaces with non-trivial topologies: Lie groups. In this work we define a general framework to create reparameterizable densities on arbitrary Lie groups, and provide a detailed practitioner's guide to further the ease of usage. We demonstrate how to create complex and multimodal distributions on the well known oriented group of 3D rotations, $\operatorname{SO}(3)$, using normalizing flows. Our experiments on applying such distributions in a Bayesian setting for pose estimation on objects with discrete and continuous symmetries, showcase their necessity in achieving realistic uncertainty estimates.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
123,608
1610.05556
Identifiability and Transportability in Dynamic Causal Networks
In this paper we propose a causal analog to the purely observational Dynamic Bayesian Networks, which we call Dynamic Causal Networks. We provide a sound and complete algorithm for identification of Dynamic Causal Networks, namely, for computing the effect of an intervention or experiment, based on passive observations only, whenever possible. We note the existence of two types of confounder variables that affect in substantially different ways the identification procedures, a distinction with no analog in either Dynamic Bayesian Networks or standard causal graphs. We further propose a procedure for the transportability of causal effects in Dynamic Causal Network settings, where the result of causal experiments in a source domain may be used for the identification of causal effects in a target domain.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
62,533
1711.11473
Spatially-Adaptive Filter Units for Deep Neural Networks
Classical deep convolutional networks increase receptive field size by either gradual resolution reduction or application of hand-crafted dilated convolutions to prevent increase in the number of parameters. In this paper we propose a novel displaced aggregation unit (DAU) that does not require hand-crafting. In contrast to classical filters with units (pixels) placed on a fixed regular grid, the displacements of the DAUs are learned, which enables filters to spatially adapt their receptive field to a given problem. We extensively demonstrate the strength of DAUs on classification and semantic segmentation tasks. Compared to ConvNets with regular filters, ConvNets with DAUs achieve comparable performance at faster convergence and up to 3-times reduction in parameters. Furthermore, DAUs allow us to study deep networks from novel perspectives. We study spatial distributions of DAU filters and analyze the number of parameters allocated for spatial coverage in a filter.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
85,785
1511.03198
Sliced Wasserstein Kernels for Probability Distributions
Optimal transport distances, otherwise known as Wasserstein distances, have recently drawn ample attention in computer vision and machine learning as a powerful discrepancy measure for probability distributions. The recent developments on alternative formulations of the optimal transport have allowed for faster solutions to the problem and have revamped its practical applications in machine learning. In this paper, we exploit the widely used kernel methods and provide a family of provably positive definite kernels based on the Sliced Wasserstein distance and demonstrate the benefits of these kernels in a variety of learning tasks. Our work provides a new perspective on the application of optimal transport flavored distances through kernel methods in machine learning tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
48,728
2007.08005
Xiaomingbot: A Multilingual Robot News Reporter
This paper proposes the building of Xiaomingbot, an intelligent, multilingual and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading and avatar animation. Its system summarizes Chinese news that it automatically generates from data tables. Next, it translates the summary or the full article into multiple languages, and reads the multilingual rendition through synthesized speech. Notably, Xiaomingbot utilizes a voice cloning technology to synthesize the speech trained from a real person's voice data in one input language. The proposed system enjoys several merits: it has an animated avatar, and is able to generate and read multilingual news. Since it was put into practice, Xiaomingbot has written over 600,000 articles, and gained over 150,000 followers on social media platforms.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
187,488
1703.03453
Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation
Evaluating a policy by deploying it in the real world can be risky and costly. Off-policy policy evaluation (OPE) algorithms use historical data collected from running a previous policy to evaluate a new policy, which provides a means for evaluating a policy without requiring it to ever be deployed. Importance sampling is a popular OPE method because it is robust to partial observability and works with continuous states and actions. However, the amount of historical data required by importance sampling can scale exponentially with the horizon of the problem: the number of sequential decisions that are made. We propose using policies over temporally extended actions, called options, and show that combining these policies with importance sampling can significantly improve performance for long-horizon problems. In addition, we can take advantage of special cases that arise due to options-based policies to further improve the performance of importance sampling. We further generalize these special cases to a general covariance testing rule that can be used to decide which weights to drop in an IS estimate, and derive a new IS algorithm called Incremental Importance Sampling that can provide significantly more accurate estimates for a broad class of domains.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
69,732
cs/0405014
Estimating Genome Reversal Distance by Genetic Algorithm
Sorting by reversals is an important problem in inferring the evolutionary relationship between two genomes. The problem of sorting unsigned permutations has been proven to be NP-hard. The best guaranteed error bound is the 3/2-approximation algorithm. However, the problem of sorting signed permutations can be solved easily. Fast algorithms have been developed both for finding the sorting sequence and finding the reversal distance of signed permutations. In this paper, we present a way to view the problem of sorting an unsigned permutation as sorting a signed permutation. The problem can then be seen as searching for an optimal signed permutation among all $2^n$ corresponding signed permutations. We use a genetic algorithm to conduct the search. Our experimental results show that the proposed method outperforms the 3/2-approximation algorithm.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
538,177
2211.13902
TAOTF: A Two-stage Approximately Orthogonal Training Framework in Deep Neural Networks
The orthogonality constraints, including the hard and soft ones, have been used to normalize the weight matrices of Deep Neural Network (DNN) models, especially the Convolutional Neural Network (CNN) and Vision Transformer (ViT), to reduce model parameter redundancy and improve training stability. However, the robustness to noisy data of these models with constraints is not always satisfactory. In this work, we propose a novel two-stage approximately orthogonal training framework (TAOTF) to find a trade-off between the orthogonal solution space and the main task solution space to solve this problem in noisy data scenarios. In the first stage, we propose a novel algorithm called polar decomposition-based orthogonal initialization (PDOI) to find a good initialization for the orthogonal optimization. In the second stage, unlike other existing methods, we apply soft orthogonal constraints to all layers of the DNN model. We evaluate the proposed model-agnostic framework on both natural image and medical image datasets, showing that our method achieves stable and superior performance compared to existing methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
332,642
2309.11968
Quantum complementarity: A novel resource for unambiguous exclusion and encryption
Complementarity is a phenomenon explaining several core features of quantum theory, such as the well-known uncertainty principle. Roughly speaking, two objects are said to be complementary if being certain about one of them necessarily forbids useful knowledge about the other. Two quantum measurements that do not commute form an example of complementary measurements, and this phenomenon can also be defined for ensembles of states. Although a key quantum feature, it is unclear whether complementarity can be understood more operationally, as a necessary resource in some quantum information task. Here we show this is the case, and that it relates to a novel task which we term $\eta$-unambiguous exclusion. As well as giving complementarity a clear operational definition, this also uncovers the foundational underpinning of unambiguous exclusion tasks for the first time. We further show that a special type of measurement complementarity is equivalent to advantages in certain encryption tasks. Finally, our analysis suggests that complementarity of measurements and state ensembles can be interpreted as strong forms of measurement incompatibility and quantum steering, respectively.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
393,610
2303.03348
Thompson Sampling for Linear Bandit Problems with Normal-Gamma Priors
We consider Thompson sampling for linear bandit problems with finitely many independent arms, where rewards are sampled from normal distributions that are linearly dependent on unknown parameter vectors and with unknown variance. Specifically, with a Bayesian formulation we consider multivariate normal-gamma priors to represent environment uncertainty for all involved parameters. We show that our chosen sampling prior is a conjugate prior to the reward model and derive a Bayesian regret bound for Thompson sampling under the condition that the 5/2-moment of the variance distribution exists.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
349,692
1403.5715
Mining Attribute-Based Access Control Policies from Logs
Attribute-based access control (ABAC) provides a high level of flexibility that promotes security and information sharing. ABAC policy mining algorithms have potential to significantly reduce the cost of migration to ABAC, by partially automating the development of an ABAC policy from information about the existing access-control policy and attribute data. This paper presents an algorithm for mining ABAC policies from operation logs and attribute data. To the best of our knowledge, it is the first algorithm for this problem.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
31,757
2311.08870
One-Shot Federated Learning with Classifier-Guided Diffusion Models
One-shot federated learning (OSFL) has gained attention in recent years due to its low communication cost. However, most of the existing methods require auxiliary datasets or training generators, which hinders their practicality in real-world scenarios. In this paper, we explore the novel opportunities that diffusion models bring to OSFL and propose FedCADO, utilizing guidance from client classifiers to generate data that complies with clients' distributions and subsequently training the aggregated model on the server. Specifically, our method involves targeted optimizations in two aspects. On one hand, we conditionally edit the randomly sampled initial noises, embedding them with specified semantics and distributions, resulting in a significant improvement in both the quality and stability of generation. On the other hand, we employ the BN statistics from the classifiers to provide detailed guidance during generation. These tailored optimizations enable us to limitlessly generate datasets, which closely resemble the distribution and quality of the original client dataset. Our method effectively handles the heterogeneous client models and the problems of non-IID features or labels. In terms of privacy protection, our method avoids training any generator or transferring any auxiliary information on clients, eliminating any additional privacy leakage risks. Leveraging the extensive knowledge stored in the pre-trained diffusion model, the synthetic datasets can assist us in surpassing the knowledge limitations of the client samples, resulting in aggregation models that even outperform the performance ceiling of centralized training in some cases, which is convincingly demonstrated in the sufficient quantification and visualization experiments conducted on three large-scale multi-domain image datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
407,899
1505.05007
Modelling-based experiment retrieval: A case study with gene expression clustering
Motivation: Public and private repositories of experimental data are growing to sizes that require dedicated methods for finding relevant data. To improve on the state of the art of keyword searches from annotations, methods for content-based retrieval have been proposed. In the context of gene expression experiments, most methods retrieve gene expression profiles, requiring each experiment to be expressed as a single profile, typically of case vs. control. A more general, recently suggested alternative is to retrieve experiments whose models are good for modelling the query dataset. However, for very noisy and high-dimensional query data, this retrieval criterion turns out to be very noisy as well. Results: We propose doing retrieval using a denoised model of the query dataset, instead of the original noisy dataset itself. To this end, we introduce a general probabilistic framework, where each experiment is modelled separately and the retrieval is done by finding related models. For retrieval of gene expression experiments, we use a probabilistic model called product partition model, which induces a clustering of genes that show similar expression patterns across a number of samples. The suggested metric for retrieval using clusterings is the normalized information distance. Empirical results finally suggest that inference for the full probabilistic model can be approximated with good performance using computationally faster heuristic clustering approaches (e.g. $k$-means). The method is highly scalable and straightforward to apply to construct a general-purpose gene expression experiment retrieval method. Availability: The method can be implemented using standard clustering algorithms and normalized information distance, available in many statistical software packages.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
43,256
2408.08583
GrassNet: State Space Model Meets Graph Neural Network
Designing spectral convolutional networks is a formidable task in graph learning. In traditional spectral graph neural networks (GNNs), polynomial-based methods are commonly used to design filters via the Laplacian matrix. In practical applications, however, these polynomial methods encounter inherent limitations, which primarily arise from the low-order truncation of polynomial filters and the lack of overall modeling of the graph spectrum. This leads to poor performance of existing spectral approaches on real-world graph data, especially when the spectrum is highly concentrated or contains many numerically identical values, as they tend to apply the exact same modulation to signals with the same frequencies. To overcome these issues, in this paper, we propose Graph State Space Network (GrassNet), a novel graph neural network with theoretical support that provides a simple yet effective scheme for designing and learning arbitrary graph spectral filters. In particular, our GrassNet introduces structured state space models (SSMs) to model the correlations of graph signals at different frequencies and derives a unique rectification for each frequency in the graph spectrum. To the best of our knowledge, our work is the first to employ SSMs for the design of GNN spectral filters, and it theoretically offers greater expressive power compared with polynomial filters. Extensive experiments on nine public benchmarks reveal that GrassNet achieves superior performance in real-world graph modeling tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
481,054
1503.00090
Improved Image Deblurring based on Salient-region Segmentation
Image deblurring techniques play important roles in many image processing applications. As the blur varies spatially across the image plane, it calls for robust and effective methods to deal with the spatially-variant blur problem. In this paper, a Saliency-based Deblurring (SD) approach is proposed based on the saliency detection for salient-region segmentation and a corresponding compensate method for image deblurring. We also propose a PDE-based deblurring method which introduces an anisotropic Partial Differential Equation (PDE) model for latent image prediction and employs an adaptive optimization model in the kernel estimation and deconvolution steps. Experimental results demonstrate the effectiveness of the proposed algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
40,663
1909.10913
On Constant Distance Spacing Policies for Cooperative Adaptive Cruise Control
Cooperative Adaptive Cruise Control (CACC) systems are considered as key potential enablers to improve driving safety and traffic efficiency. They allow for automated vehicle following using wireless communication in addition to onboard sensors. To achieve string stability in CACC platoons, constant time headway (CTH) spacing policies have prevailed in research; namely, vehicle interspacing grows with the speed. While constant distance headway (CDH) spacing policies provide superior potential to increase traffic capacity than CTH, a major drawback is a smaller safety margin at high velocities and string stability cannot be achieved using a one-vehicle look-ahead communication. The hypothesis of this work is to apply CDH only in few driving situations, when traffic throughput is of highest importance and safety requirements can be met due to comparably low velocities. As the most relevant situations where CDH could be applied, we identify starting platoons at signalized intersections. In this paper, we illustrate this idea. Specifically, we compare CTH with CDH regarding its potential to increase the capacity of traffic lights. Starting with the elementary situation of single traffic lights we expand our scope to whole traffic networks including several thousand vehicles in simulation. Using real world data to calibrate and validate vehicle dynamics simulation and traffic simulation, the study discusses the most relevant working parameters of CDH, CTH, and the traffic system in which both are applied.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
146,667
1809.05285
Keypoint Based Weakly Supervised Human Parsing
Fully convolutional networks (FCN) have achieved great success in human parsing in recent years. In conventional human parsing tasks, pixel-level labeling is required for guiding the training, which usually involves enormous human labeling efforts. To ease the labeling efforts, we propose a novel weakly supervised human parsing method which only requires simple object keypoint annotations for learning. We develop an iterative learning method to generate pseudo part segmentation masks from keypoint labels. With these pseudo masks, we train an FCN network to output pixel-level human parsing predictions. Furthermore, we develop a correlation network to perform joint prediction of part and object segmentation masks and improve the segmentation performance. The experiment results show that our weakly supervised method is able to achieve very competitive human parsing results. Despite our method only uses simple keypoint annotations for learning, we are able to achieve comparable performance with fully supervised methods which use the expensive pixel-level annotations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
107,765
1708.02840
Speaker Diarization using Deep Recurrent Convolutional Neural Networks for Speaker Embeddings
In this paper we propose a new method of speaker diarization that employs a deep learning architecture to learn speaker embeddings. In contrast to the traditional approaches that build their speaker embeddings using manually hand-crafted spectral features, we propose to train for this purpose a recurrent convolutional neural network applied directly on magnitude spectrograms. To compare our approach with the state of the art, we collect and release for the public an additional dataset of over 6 hours of fully annotated broadcast material. The results of our evaluation on the new dataset and three other benchmark datasets show that our proposed method significantly outperforms the competitors and reduces diarization error rate by a large margin of over 30% with respect to the baseline.
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
78,667
2107.12657
Continual Learning with Neuron Activation Importance
Continual learning is a concept of online learning with multiple sequential tasks. One of the critical barriers of continual learning is that a network should learn a new task while keeping the knowledge of old tasks without access to any data of the old tasks. In this paper, we propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks. We conduct comprehensive experiments on existing benchmark data sets to evaluate not only the stability and plasticity of our method with improved classification accuracy but also the robustness of the performance along the changes of task order.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
247,970
2102.05542
On the Existence of Optimal Transport Gradient for Learning Generative Models
The use of optimal transport cost for learning generative models has become popular with Wasserstein Generative Adversarial Networks (WGAN). Training of WGAN relies on a theoretical background: the calculation of the gradient of the optimal transport cost with respect to the generative model parameters. We first demonstrate that such gradient may not be defined, which can result in numerical instabilities during gradient-based optimization. We address this issue by stating a valid differentiation theorem in the case of entropic regularized transport and specify conditions under which existence is ensured. By exploiting the discrete nature of empirical data, we formulate the gradient in a semi-discrete setting and propose an algorithm for the optimization of the generative model parameters. Finally, we illustrate numerically the advantage of the proposed framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,476
2306.02236
Detector Guidance for Multi-Object Text-to-Image Generation
Diffusion models have demonstrated impressive performance in text-to-image generation. They utilize a text encoder and cross-attention blocks to infuse textual information into images at a pixel level. However, their capability to generate images with text containing multiple objects is still restricted. Previous works identify the problem of information mixing in the CLIP text encoder and introduce the T5 text encoder or incorporate strong prior knowledge to assist with the alignment. We find that mixing problems also occur on the image side and in the cross-attention blocks. The noisy images can cause different objects to appear similar, and the cross-attention blocks inject information at a pixel level, leading to leakage of global object understanding and resulting in object mixing. In this paper, we introduce Detector Guidance (DG), which integrates a latent object detection model to separate different objects during the generation process. DG first performs latent object detection on cross-attention maps (CAMs) to obtain object information. Based on this information, DG then masks conflicting prompts and enhances related prompts by manipulating the following CAMs. We evaluate the effectiveness of DG using Stable Diffusion on COCO, CC, and a novel multi-related object benchmark, MRO. Human evaluations demonstrate that DG provides an 8-22\% advantage in preventing the amalgamation of conflicting concepts and ensuring that each object possesses its unique region without any human involvement and additional iterations. Our implementation is available at \url{https://github.com/luping-liu/Detector-Guidance}.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
370,810
2309.08480
PoseFix: Correcting 3D Human Poses with Natural Language
Automatically producing instructions to modify one's posture could open the door to endless applications, such as personalized coaching and in-home physical therapy. Tackling the reverse problem (i.e., refining a 3D pose based on some natural language feedback) could help for assisted 3D character animation or robot teaching, for instance. Although a few recent works explore the connections between natural language and 3D human pose, none focus on describing 3D body pose differences. In this paper, we tackle the problem of correcting 3D human poses with natural language. To this end, we introduce the PoseFix dataset, which consists of several thousand paired 3D poses and their corresponding text feedback, that describe how the source pose needs to be modified to obtain the target pose. We demonstrate the potential of this dataset on two tasks: (1) text-based pose editing, that aims at generating corrected 3D body poses given a query pose and a text modifier; and (2) correctional text generation, where instructions are generated based on the differences between two body poses.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,187
2309.11244
Integrating Visual Foundation Models for Enhanced Robot Manipulation and Motion Planning: A Layered Approach
This paper presents a novel layered framework that integrates visual foundation models to improve robot manipulation tasks and motion planning. The framework consists of five layers: Perception, Cognition, Planning, Execution, and Learning. Using visual foundation models, we enhance the robot's perception of its environment, enabling more efficient task understanding and accurate motion planning. This approach allows for real-time adjustments and continual learning, leading to significant improvements in task execution. Experimental results demonstrate the effectiveness of the proposed framework in various robot manipulation tasks and motion planning scenarios, highlighting its potential for practical deployment in dynamic environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
393,334
1708.03447
Unified Neural Architecture for Drug, Disease and Clinical Entity Recognition
Most existing methods for the biomedical entity recognition task rely on explicit feature engineering, where many features either are specific to a particular task or depend on the output of other existing NLP tools. It has been shown across various domains that neural architectures can reduce the effort needed for explicit feature design. In this work we propose a unified framework using a bi-directional long short term memory network (BLSTM) for named entity recognition (NER) tasks in biomedical and clinical domains. Three important characteristics of the framework are as follows - (1) the model learns contextual as well as morphological features using two different BLSTMs in hierarchy, (2) the model uses a first order linear conditional random field (CRF) in its output layer in cascade with the BLSTM to infer the label or tag sequence, (3) the model does not use any domain specific features or dictionary; in other words, the same set of features is used in the three NER tasks, namely, disease name recognition (Disease NER), drug name recognition (Drug NER) and clinical entity recognition (Clinical NER). We compare the performance of the proposed model with existing state-of-the-art models on the standard benchmark datasets of the three tasks. We show empirically that the proposed framework outperforms all existing models. Further, our analysis of the CRF layer and of word embeddings obtained using character-based embedding shows their importance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
78,778
2407.16115
Transformer-based Graph Neural Networks for Battery Range Prediction in AIoT Battery-Swap Services
The concept of the sharing economy has gained broad recognition, and within this context, Sharing E-Bike Battery (SEB) have emerged as a focal point of societal interest. Despite the popularity, a notable discrepancy remains between user expectations regarding the remaining battery range of SEBs and the reality, leading to a pronounced inclination among users to find an available SEB during emergency situations. In response to this challenge, the integration of Artificial Intelligence of Things (AIoT) and battery-swap services has surfaced as a viable solution. In this paper, we propose a novel structural Transformer-based model, referred to as the SEB-Transformer, designed specifically for predicting the battery range of SEBs. The scenario is conceptualized as a dynamic heterogeneous graph that encapsulates the interactions between users and bicycles, providing a comprehensive framework for analysis. Furthermore, we incorporate the graph structure into the SEB-Transformer to facilitate the estimation of the remaining e-bike battery range, in conjunction with mean structural similarity, enhancing the prediction accuracy. By employing the predictions made by our model, we are able to dynamically adjust the optimal cycling routes for users in real-time, while also considering the strategic locations of charging stations, thereby optimizing the user experience. Empirically our results on real-world datasets demonstrate the superiority of our model against nine competitive baselines. These innovations, powered by AIoT, not only bridge the gap between user expectations and the physical limitations of battery range but also significantly improve the operational efficiency and sustainability of SEB services. Through these advancements, the shared electric bicycle ecosystem is evolving, making strides towards a more reliable, user-friendly, and sustainable mode of transportation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
475,456
1707.07217
Deep Learning in Robotics: A Review of Recent Research
Advances in deep learning over the last decade have led to a flurry of research in the application of deep artificial neural networks to robotic systems, with at least thirty papers published on the subject between 2014 and the present. This review discusses the applications, benefits, and limitations of deep learning vis-\`a-vis physical robotic systems, using contemporary research as exemplars. It is intended to communicate recent advances to the wider robotics community and inspire additional interest in and application of deep learning in robotics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
77,575
2312.10802
GO-DICE: Goal-Conditioned Option-Aware Offline Imitation Learning via Stationary Distribution Correction Estimation
Offline imitation learning (IL) refers to learning expert behavior solely from demonstrations, without any additional interaction with the environment. Despite significant advances in offline IL, existing techniques find it challenging to learn policies for long-horizon tasks and require significant re-training when task specifications change. Towards addressing these limitations, we present GO-DICE an offline IL technique for goal-conditioned long-horizon sequential tasks. GO-DICE discerns a hierarchy of sub-tasks from demonstrations and uses these to learn separate policies for sub-task transitions and action execution, respectively; this hierarchical policy learning facilitates long-horizon reasoning. Inspired by the expansive DICE-family of techniques, policy learning at both the levels transpires within the space of stationary distributions. Further, both policies are learnt with goal conditioning to minimize need for retraining when task goals change. Experimental results substantiate that GO-DICE outperforms recent baselines, as evidenced by a marked improvement in the completion rate of increasingly challenging pick-and-place Mujoco robotic tasks. GO-DICE is also capable of leveraging imperfect demonstration and partial task segmentation when available, both of which boost task performance relative to learning from expert demonstrations alone.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
416,321
1810.10895
Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs
In linear stochastic bandits, it is commonly assumed that payoffs are with sub-Gaussian noises. In this paper, under a weaker assumption on noises, we study the problem of \underline{lin}ear stochastic {\underline b}andits with h{\underline e}avy-{\underline t}ailed payoffs (LinBET), where the distributions have finite moments of order $1+\epsilon$, for some $\epsilon\in (0,1]$. We rigorously analyze the regret lower bound of LinBET as $\Omega(T^{\frac{1}{1+\epsilon}})$, implying that finite moments of order 2 (i.e., finite variances) yield the bound of $\Omega(\sqrt{T})$, with $T$ being the total number of rounds to play bandits. The provided lower bound also indicates that the state-of-the-art algorithms for LinBET are far from optimal. By adopting median of means with a well-designed allocation of decisions and truncation based on historical information, we develop two novel bandit algorithms, where the regret upper bounds match the lower bound up to polylogarithmic factors. To the best of our knowledge, we are the first to solve LinBET optimally in the sense of the polynomial order on $T$. Our proposed algorithms are evaluated based on synthetic datasets, and outperform the state-of-the-art results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,388
1806.02954
Using Social Network Information in Bayesian Truth Discovery
We investigate the problem of truth discovery based on opinions from multiple agents who may be unreliable or biased. We consider the case where agents' reliabilities or biases are correlated if they belong to the same community, which defines a group of agents with similar opinions regarding a particular event. An agent can belong to different communities for different events, and these communities are unknown a priori. We incorporate knowledge of the agents' social network in our truth discovery framework and develop Laplace variational inference methods to estimate agents' reliabilities, communities, and the event states. We also develop a stochastic variational inference method to scale our model to large social networks. Simulations and experiments on real data suggest that when observations are sparse, our proposed methods perform better than several other inference methods, including majority voting, TruthFinder, AccuSim, the Confidence-Aware Truth Discovery method, the Bayesian Classifier Combination (BCC) method, and the Community BCC method.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
false
false
false
99,891
2101.00604
Privacy-sensitive Objects Pixelation for Live Video Streaming
With the prevailing of live video streaming, establishing an online pixelation method for privacy-sensitive objects is an urgency. Caused by the inaccurate detection of privacy-sensitive objects, simply migrating the tracking-by-detection structure into the online form will incur problems in target initialization, drifting, and over-pixelation. To cope with the inevitable but impacting detection issue, we propose a novel Privacy-sensitive Objects Pixelation (PsOP) framework for automatic personal privacy filtering during live video streaming. Leveraging pre-trained detection networks, our PsOP is extendable to any potential privacy-sensitive objects pixelation. Employing the embedding networks and the proposed Positioned Incremental Affinity Propagation (PIAP) clustering algorithm as the backbone, our PsOP unifies the pixelation of discriminating and indiscriminating pixelation objects through trajectories generation. In addition to the pixelation accuracy boosting, experiments on the streaming video data we built show that the proposed PsOP can significantly reduce the over-pixelation ratio in privacy-sensitive object pixelation.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
214,145
2310.11690
Deep learning based on Transformer architecture for power system short-term voltage stability assessment with class imbalance
Most existing data-driven power system short-term voltage stability assessment (STVSA) approaches presume class-balanced input data. However, in practical applications, the occurrence of short-term voltage instability following a disturbance is minimal, leading to a significant class imbalance problem and a consequent decline in classifier performance. This work proposes a Transformer-based STVSA method to address this challenge. By utilizing the basic Transformer architecture, a stability assessment Transformer (StaaT) is developed as a classification model to reflect the correlation between the operational states of the system and the resulting stability outcomes. To combat the negative impact of imbalanced datasets, this work employs a conditional Wasserstein generative adversarial network with gradient penalty (CWGAN-GP) for synthetic data generation, aiding in the creation of a balanced, representative training set for the classifier. Semi-supervised clustering learning is implemented to enhance clustering quality, addressing the lack of a unified quantitative criterion for short-term voltage stability. Numerical tests on the IEEE 39-bus test system extensively demonstrate that the proposed method exhibits robust performance under class imbalances up to 100:1 and noisy environments, and maintains consistent effectiveness even with an increased penetration of renewable energy. Comparative results reveal that the CWGAN-GP generates more balanced datasets than traditional oversampling methods and that the StaaT outperforms other deep learning algorithms. This study presents a compelling solution for real-world STVSA applications that often face class imbalance and data noise challenges.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
400,741
1309.3917
Strategic Planning in Air Traffic Control as a Multi-objective Stochastic Optimization Problem
With the objective of handling the airspace sector congestion subject to continuously growing air traffic, we suggest to create a collaborative working plan during the strategic phase of air traffic control. The plan obtained via a new decision support tool presented in this article consists in a schedule for controllers, which specifies time of overflight on the different waypoints of the flight plans. In order to do it, we believe that the decision-support tool shall model directly the uncertainty at a trajectory level in order to propagate the uncertainty to the sector level. Then, the probability of congestion for any sector in the airspace can be computed. Since air traffic regulations and sector congestion are antagonist, we designed and implemented a multi-objective optimization algorithm for determining the best trade-off between these two criteria. The solution comes up as a set of alternatives for the multi-sector planner where the severity of the congestion cost is adjustable. In this paper, the Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to solve an artificial benchmark problem involving 24 aircraft and 11 sectors, and is able to provide a good approximation of the Pareto front.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
27,059
1901.08901
Expanding Click and Buy rates: Exploration of evaluation metrics that measure the impact of personalized recommendation engines on e-commerce platforms
To identify the most appropriate recommendation model for an e-commerce business, a live evaluation should be performed on the shopping website to measure the influence of personalization in real-time. The aim of this paper is to introduce and justify two new metrics -- CTR NoRepeat and Click & Buy rate -- which stem from the standard metrics, Click-through(CTR) and Buy-through rate(BTR), respectively. The former variation tackles the issue of overestimation of clicks in the original CTR while the latter accounts for noting purchases of products that have been previously clicked, in order to validate that the buy included in the metric is a result of customer interactions. A significance test for independence of two means is conducted for multiple datasets, between each of the new metrics and its respective parent to determine the novelty and necessity of the variants. The Pearson-correlation coefficient is calculated to assess the strength of the linear relationships and conclude on the predictability factor amongst the aforementioned factors to investigate unknown connections between customer clicks and buys. Additionally, other metrics such as hits per customer, buyers per customer, clicks per customer etc. are introduced that help explain indicators of customer behavior on the e-commerce website in reference.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
119,596
2501.16374
SAFR: Neuron Redistribution for Interpretability
Superposition refers to encoding representations of multiple features within a single neuron, which is common in deep neural networks. This property allows neurons to combine and represent multiple features, enabling the model to capture intricate information and handle complex tasks. Despite promising performance, the model's interpretability has been diminished. This paper presents a novel approach to enhance model interpretability by regularizing feature superposition. We introduce SAFR, which simply applies regularizations to the loss function to promote monosemantic representations for important tokens while encouraging polysemanticity for correlated token pairs, where important tokens and correlated token pairs are identified via VMASK and attention weights respectively. We evaluate SAFR with a transformer model on two classification tasks. Experiments demonstrate the effectiveness of SAFR in improving model interpretability without compromising prediction performance. Besides, SAFR provides explanations by visualizing the neuron allocation within the intermediate layers.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
527,939
2406.07561
Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security
In the vast domain of cybersecurity, the transition from reactive defense to offensive has become critical in protecting digital infrastructures. This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity, particularly through the development of an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks. Leveraging the capabilities of Large Language Models (LLMs) such as GPT-4, ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously. This research outlines the core methodologies that can be utilized to increase consistency and performance, including task-driven penetration testing frameworks, AI-driven command generation, and advanced prompting techniques. The AI agent operates within a structured environment using Python, enhanced by Retrieval Augmented Generation (RAG) for contextual understanding and memory retention. ReaperAI was tested on platforms including Hack The Box, where it successfully exploited known vulnerabilities, demonstrating its potential power. However, the deployment of AI in offensive security presents significant ethical and operational challenges. The agent's development process revealed complexities in command execution, error handling, and maintaining ethical constraints, highlighting areas for future enhancement. This study contributes to the discussion on AI's role in cybersecurity by showcasing how AI can augment offensive security strategies. It also proposes future research directions, including the refinement of AI interactions with cybersecurity tools, enhancement of learning mechanisms, and the discussion of ethical guidelines for AI in offensive roles. The findings advocate for a unique approach to AI implementation in cybersecurity, emphasizing innovation.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
463,111
0709.1771
Variational local structure estimation for image super-resolution
Super-resolution is an important but difficult problem in image/video processing. If a video sequence or some training set other than the given low-resolution image is available, this kind of extra information can greatly aid in the reconstruction of the high-resolution image. The problem is substantially more difficult with only a single low-resolution image on hand. The image reconstruction methods designed primarily for denoising are insufficient for the super-resolution problem in the sense that they tend to oversmooth images with essentially no noise. We propose a new adaptive linear interpolation method based on a variational method and inspired by local linear embedding (LLE). The experimental result shows that our method avoids the problem of oversmoothing and preserves image structures well.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
656
2404.10976
Group-Aware Coordination Graph for Multi-Agent Reinforcement Learning
Cooperative Multi-Agent Reinforcement Learning (MARL) necessitates seamless collaboration among agents, often represented by an underlying relation graph. Existing methods for learning this graph primarily focus on agent-pair relations, neglecting higher-order relationships. While several approaches attempt to extend cooperation modelling to encompass behaviour similarities within groups, they commonly fall short in concurrently learning the latent graph, thereby constraining the information exchange among partially observed agents. To overcome these limitations, we present a novel approach to infer the Group-Aware Coordination Graph (GACG), which is designed to capture both the cooperation between agent pairs based on current observations and group-level dependencies from behaviour patterns observed across trajectories. This graph is further used in graph convolution for information exchange between agents during decision-making. To further ensure behavioural consistency among agents within the same group, we introduce a group distance loss, which promotes group cohesion and encourages specialization between groups. Our evaluations, conducted on StarCraft II micromanagement tasks, demonstrate GACG's superior performance. An ablation study further provides experimental evidence of the effectiveness of each component of our method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
447,327
2412.12798
ZoRI: Towards Discriminative Zero-Shot Remote Sensing Instance Segmentation
Instance segmentation algorithms in remote sensing are typically based on conventional methods, limiting their application to seen scenarios and closed-set predictions. In this work, we propose a novel task called zero-shot remote sensing instance segmentation, aimed at identifying aerial objects that are absent from training data. Challenges arise when classifying aerial categories with high inter-class similarity and intra-class variance. Besides, the domain gap between vision-language models' pretraining datasets and remote sensing datasets hinders the zero-shot capabilities of the pretrained model when it is directly applied to remote sensing images. To address these challenges, we propose a $\textbf{Z}$ero-Sh$\textbf{o}$t $\textbf{R}$emote Sensing $\textbf{I}$nstance Segmentation framework, dubbed $\textbf{ZoRI}$. Our approach features a discrimination-enhanced classifier that uses refined textual embeddings to increase the awareness of class disparities. Instead of direct fine-tuning, we propose a knowledge-maintained adaptation strategy that decouples semantic-related information to preserve the pretrained vision-language alignment while adjusting features to capture remote sensing domain-specific visual cues. Additionally, we introduce a prior-injected prediction with a cache bank of aerial visual prototypes to supplement the semantic richness of text embeddings and seamlessly integrate aerial representations, adapting to the remote sensing domain. We establish new experimental protocols and benchmarks, and extensive experiments convincingly demonstrate that ZoRI achieves state-of-the-art performance on the zero-shot remote sensing instance segmentation task. Our code is available at https://github.com/HuangShiqi128/ZoRI.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
518,035
2203.15723
FlexR: Few-shot Classification with Language Embeddings for Structured Reporting of Chest X-rays
The automation of chest X-ray reporting has garnered significant interest due to the time-consuming nature of the task. However, the clinical accuracy of free-text reports has proven challenging to quantify using natural language processing metrics, given the complexity of medical information, the variety of writing styles, and the potential for typos and inconsistencies. Structured reporting and standardized reports, on the other hand, can provide consistency and formalize the evaluation of clinical correctness. However, high-quality annotations for structured reporting are scarce. Therefore, we propose a method to predict clinical findings defined by sentences in structured reporting templates, which can be used to fill such templates. The approach involves training a contrastive language-image model using chest X-rays and related free-text radiological reports, then creating textual prompts for each structured finding and optimizing a classifier to predict clinical findings in the medical image. Results show that even with limited image-level annotations for training, the method can accomplish the structured reporting tasks of severity assessment of cardiomegaly and localizing pathologies in chest X-rays.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,512
2312.04565
MuRF: Multi-Baseline Radiance Fields
We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to solving sparse view synthesis under multiple different baseline settings (small and large baselines, and different numbers of input views). To render a target novel view, we discretize the 3D space into planes parallel to the target image plane, and accordingly construct a target view frustum volume. Such a target volume representation is spatially aligned with the target view, which effectively aggregates relevant information from the input views for high-quality rendering. It also facilitates subsequent radiance field regression with a convolutional network thanks to its axis-aligned nature. The 3D context modeled by the convolutional network enables our method to synthesize sharper scene structures than prior works. Our MuRF achieves state-of-the-art performance across multiple different baseline settings and diverse scenarios ranging from simple objects (DTU) to complex indoor and outdoor scenes (RealEstate10K and LLFF). We also show promising zero-shot generalization abilities on the Mip-NeRF 360 dataset, demonstrating the general applicability of MuRF.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
413,731
2108.03491
Approximate Last Iterate Convergence in Overparameterized GANs
In this work, we showed that the Implicit Update and Predictive Methods dynamics introduced in prior work satisfy last iterate convergence to a neighborhood around the optimum in overparameterized GANs, where the size of the neighborhood shrinks with the width of the neural network. This is in contrast to prior results, which only guaranteed average iterate convergence.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
249,676
1801.05534
Blind De-anonymization Attacks using Social Networks
It is important to study the risks of publishing privacy-sensitive data. Even if sensitive identities (e.g., name, social security number) were removed and advanced data perturbation techniques were applied, several de-anonymization attacks have been proposed to re-identify individuals. However, existing attacks have some limitations: 1) they are limited in de-anonymization accuracy; 2) they require prior seed knowledge and suffer from the imprecision of such seed information. We propose a novel structure-based de-anonymization attack, which does not require the attacker to have prior information (e.g., seeds). Our attack is based on two key insights: using multi-hop neighborhood information, and optimizing the process of de-anonymization by exploiting enhanced machine learning techniques. The experimental results demonstrate that our method is robust to data perturbations and significantly outperforms the state-of-the-art de-anonymization techniques by up to $10\times$ improvement.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
88,473
2003.05622
Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems
Neural networks of ads systems usually take input from multiple resources, e.g., query-ad relevance, ad features and user portraits. These inputs are encoded into one-hot or multi-hot binary features, with typically only a tiny fraction of nonzero feature values per example. Deep learning models in online advertising industries can have terabyte-scale parameters that do not fit in the GPU memory nor the CPU main memory on a computing node. For example, a sponsored online advertising system can contain more than $10^{11}$ sparse features, making the neural network a massive model with around 10 TB parameters. In this paper, we introduce a distributed GPU hierarchical parameter server for massive scale deep learning ads systems. We propose a hierarchical workflow that utilizes GPU High-Bandwidth Memory, CPU main memory and SSD as 3-layer hierarchical storage. All the neural network training computations are contained in GPUs. Extensive experiments on real-world data confirm the effectiveness and the scalability of the proposed system. A 4-node hierarchical GPU parameter server can train a model more than 2X faster than a 150-node in-memory distributed parameter server in an MPI cluster. In addition, the price-performance ratio of our proposed system is 4-9 times better than an MPI-cluster solution.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
167,902
2201.02727
Multi-Mode Spatial Signal Processor with Rainbow-like Fast Beam Training and Wideband Communications using True-Time-Delay Arrays
Initial access in millimeter-wave (mmW) wireless is critical toward successful realization of the fifth-generation (5G) wireless networks and beyond. Limited bandwidth in existing standards and use of phase-shifters in analog/hybrid phased-antenna arrays (PAA) are not suited for these emerging standards demanding low-latency direction finding. This work proposes a reconfigurable true-time-delay (TTD) based spatial signal processor (SSP) with frequency-division beam training methodology and wideband beam-squint-less data communications. Discrete-time delay compensated clocking technique is used to support 800~MHz bandwidth with a large unity-gain bandwidth ring-amplifier (RAMP)-based signal combiner. To extensively characterize the proposed SSP across different SSP modes and frequency-angle pairs, an automated testbed is developed using computer-vision techniques that significantly speeds up the testing process and minimizes possible human errors. Using seven levels of time-interleaving for each of the 4 antenna elements, the TTD SSP has a delay range of 3.8 ns over 800 MHz and achieves unique frequency-to-angle mapping in the beamtraining mode with nearly 12 dB frequency-independent gain in the beamforming mode. The SSP is prototyped in 65nm CMOS with an area of 1.98mm$^2$ consuming only 29 mW excluding buffers. Further, an error vector magnitude (EVM) of 9.8% is realized for 16-QAM modulation at a speed of 122.8 Mb/s.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
274,625
2502.07237
DrugImproverGPT: A Large Language Model for Drug Optimization with Fine-Tuning via Structured Policy Optimization
Finetuning a Large Language Model (LLM) is crucial for generating results towards specific objectives. This research delves into the realm of drug optimization and introduces a novel reinforcement learning algorithm to finetune a drug optimization LLM-based generative model, enhancing the original drug across target objectives while retaining the beneficial chemical properties of the original drug. This work is comprised of two primary components: (1) DrugImprover: A framework tailored for improving robustness and efficiency in drug optimization. It includes an LLM designed for drug optimization and a novel Structured Policy Optimization (SPO) algorithm, which is theoretically grounded. This algorithm offers a unique perspective for fine-tuning the LLM-based generative model by aligning the improvement of the generated molecule with the input molecule under desired objectives. (2) A dataset of 1 million compounds, each with OEDOCK docking scores on 5 human proteins associated with cancer cells and 24 binding sites from the SARS-CoV-2 virus. We conduct a comprehensive evaluation of SPO and demonstrate its effectiveness in improving the original drug across target properties. Our code and dataset will be publicly available at: https://github.com/xuefeng-cs/DrugImproverGPT.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
532,500
2307.14956
The Effect of Third Party Implementations on Reproducibility
Reproducibility of recommender systems research has come under scrutiny during recent years. Along with works focusing on repeating experiments with certain algorithms, the research community has also started discussing various aspects of evaluation and how these affect reproducibility. We add a novel angle to this discussion by examining how unofficial third-party implementations could benefit or hinder reproducibility. Besides giving a general overview, we thoroughly examine six third-party implementations of a popular recommender algorithm and compare them to the official version on five public datasets. In the light of our alarming findings we aim to draw the attention of the research community to this neglected aspect of reproducibility.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
382,107
2203.16470
Remember to correct the bias when using deep learning for regression!
When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points. We suggest adjusting the bias of the machine learning model after training as a default postprocessing step, which efficiently solves the problem. The severity of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
288,795
2111.14630
On computable learning of continuous features
We introduce definitions of computable PAC learning for binary classification over computable metric spaces. We provide sufficient conditions for learners that are empirical risk minimizers (ERM) to be computable, and bound the strong Weihrauch degree of an ERM learner under more general conditions. We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with computable sample function, despite the underlying class being PAC learnable.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
268,656
2208.03384
Amplitude Constrained Vector Gaussian Wiretap Channel: Properties of the Secrecy-Capacity-Achieving Input Distribution
This paper studies secrecy-capacity of an $n$-dimensional Gaussian wiretap channel under a peak-power constraint. This work determines the largest peak-power constraint $\bar{\mathsf{R}}_n$ such that an input distribution uniformly distributed on a single sphere is optimal; this regime is termed the low amplitude regime. The asymptotic of $\bar{\mathsf{R}}_n$ as $n$ goes to infinity is completely characterized as a function of noise variance at both receivers. Moreover, the secrecy-capacity is also characterized in a form amenable for computation. Several numerical examples are provided, such as the example of the secrecy-capacity-achieving distribution beyond the low amplitude regime. Furthermore, for the scalar case $(n=1)$ we show that the secrecy-capacity-achieving input distribution is discrete with finitely many points at most of the order of $\frac{\mathsf{R}^2}{\sigma_1^2}$, where $\sigma_1^2$ is the variance of the Gaussian noise over the legitimate channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
311,769
1401.0214
Band Allocation for Cognitive Radios with Buffered Primary and Secondary Users
In this paper, we study band allocation of $\mathcal{M}_s$ buffered secondary users (SUs) to $\mathcal{M}_p$ orthogonal primary licensed bands, where each primary band is assigned to one primary user (PU). Each SU is assigned to one of the available primary bands with a certain probability designed to satisfy some specified quality of service (QoS) requirements for the SUs. In the proposed system, only one SU is assigned to a particular band. The optimization problem used to obtain the stability region's envelope (closure) is shown to be a linear program. We compare the stability region of the proposed system with that of a system where each SU chooses a band randomly with some assignment probability. We also compare with a fixed (deterministic) assignment system, where only one SU is assigned to one of the primary bands all the time. We prove the advantage of the proposed system over the other systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
29,540
2404.06892
SparseAD: Sparse Query-Centric Paradigm for Efficient End-to-End Autonomous Driving
End-to-End paradigms use a unified framework to implement multi-tasks in an autonomous driving system. Despite simplicity and clarity, the performance of end-to-end autonomous driving methods on sub-tasks is still far behind the single-task methods. Meanwhile, the widely used dense BEV features in previous end-to-end methods make it costly to extend to more modalities or tasks. In this paper, we propose a Sparse query-centric paradigm for end-to-end Autonomous Driving (SparseAD), where the sparse queries completely represent the whole driving scenario across space, time and tasks without any dense BEV representation. Concretely, we design a unified sparse architecture for perception tasks including detection, tracking, and online mapping. Moreover, we revisit motion prediction and planning, and devise a more justifiable motion planner framework. On the challenging nuScenes dataset, SparseAD achieves SOTA full-task performance among end-to-end methods and significantly narrows the performance gap between end-to-end paradigms and single-task methods. Codes will be released soon.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
445,640
2106.09852
LSEC: Large-scale spectral ensemble clustering
Ensemble clustering is a fundamental problem in the machine learning field, combining multiple base clusterings into a better clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks due to the efficiency bottleneck. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to strike a good balance between efficiency and effectiveness. In LSEC, a large-scale spectral clustering based efficient ensemble generation framework is designed to generate various base clusterings within a low computational complexity. Then all base clusterings are combined through a bipartite graph partition based consensus function into a better consensus clustering result. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets show the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
241,809
2404.18311
Towards Incremental Learning in Large Language Models: A Critical Review
Incremental learning is the ability of systems to acquire knowledge over time, enabling their adaptation and generalization to novel tasks. It is a critical ability for intelligent, real-world systems, especially when data changes frequently or is limited. This review provides a comprehensive analysis of incremental learning in Large Language Models. It synthesizes the state-of-the-art incremental learning paradigms, including continual learning, meta-learning, parameter-efficient learning, and mixture-of-experts learning. We demonstrate their utility for incremental learning by describing specific achievements from these related topics and their critical factors. An important finding is that many of these approaches do not update the core model, and none of them update incrementally in real-time. The paper highlights current problems and challenges for future research in the field. By consolidating the latest relevant research developments, this review offers a comprehensive understanding of incremental learning and its implications for designing and developing LLM-based learning systems.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
450,213
1608.03907
Temporal Registration in In-Utero Volumetric MRI Time Series
We present a robust method to correct for motion and deformations for in-utero volumetric MRI time series. Spatio-temporal analysis of dynamic MRI requires robust alignment across time in the presence of substantial and unpredictable motion. We make a Markov assumption on the nature of deformations to take advantage of the temporal structure in the image data. Forward message passing in the corresponding hidden Markov model (HMM) yields an estimation algorithm that only has to account for relatively small motion between consecutive frames. We demonstrate the utility of the temporal model by showing that its use improves the accuracy of the segmentation propagation through temporal registration. Our results suggest that the proposed model captures accurately the temporal dynamics of deformations in in-utero MRI time series.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,745
2405.15471
Emergence of a High-Dimensional Abstraction Phase in Language Transformers
A language model (LM) is a mapping from a linguistic context to an output token. However, much remains to be known about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
456,951
2312.14185
Auto311: A Confidence-guided Automated System for Non-emergency Calls
Emergency and non-emergency response systems are essential services provided by local governments and critical to protecting lives, the environment, and property. The effective handling of (non-)emergency calls is critical for public safety and well-being. By reducing the burden imposed by non-emergency callers, residents in critical need of assistance through 911 will receive a fast and effective response. Collaborating with the Department of Emergency Communications (DEC) in Nashville, we analyzed 11,796 non-emergency call recordings and developed Auto311, the first automated system to handle 311 non-emergency calls, which (1) effectively and dynamically predicts ongoing non-emergency incident types to generate tailored case reports during the call; (2) itemizes essential information from dialogue contexts to complete the generated reports; and (3) strategically structures system-caller dialogues with optimized confidence. We used real-world data to evaluate the system's effectiveness and deployability. The experimental results indicate that the system effectively predicts incident type with an average F-1 score of 92.54%. Moreover, the system successfully itemizes critical information from relevant contexts to complete reports, evincing a 0.93 average consistency score compared to the ground truth. Additionally, emulations demonstrate that the system effectively decreases conversation turns as the utterance size gets more extensive and categorizes the ongoing call with 94.49% mean accuracy.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
417,524
2501.17896
Explainable Machine Learning: An Illustration of Kolmogorov-Arnold Network Model for Airfoil Lift Prediction
Data science has emerged as the fourth paradigm of scientific exploration. However, many machine learning models operate as black boxes, offering limited insight into the reasoning behind their predictions. This lack of transparency is one of the drawbacks to generating new knowledge from data. Recently, Kolmogorov-Arnold Network or KAN has been proposed as an alternative model which embeds explainable AI. This study demonstrates the potential of KAN for new scientific exploration. KAN along with five other popular supervised machine learning models are applied to the well-known problem of airfoil lift prediction in aerospace engineering. Standard data generated from an earlier study on 2900 different airfoils is used. KAN performed the best with an R2 score of 96.17 percent on the test data, surpassing both the baseline model and Multi Layer Perceptron. Explainability of KAN is shown by pruning and symbolizing the model, resulting in an equation for coefficient of lift in terms of input variables. The explainable information retrieved from the KAN model is found to be consistent with the known physics of lift generation by airfoil, thus demonstrating its potential to aid in scientific exploration.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
528,503
1911.03992
Stochastic DCA for minimizing a large sum of DC functions with application to Multi-class Logistic Regression
We consider the large sum of DC (Difference of Convex) functions minimization problem, which appears in several different areas, especially in stochastic optimization and machine learning. Two DCA (DC Algorithm) based algorithms are proposed: stochastic DCA and inexact stochastic DCA. We prove that the convergence of both algorithms to a critical point is guaranteed with probability one. Furthermore, we develop our stochastic DCA for solving an important problem in multi-task learning, namely group variables selection in multi-class logistic regression. The corresponding stochastic DCA is very inexpensive; all computations are explicit. Numerical experiments on several benchmark datasets and synthetic datasets illustrate the efficiency of our algorithms and their superiority over existing methods, with respect to classification accuracy, sparsity of solution as well as running time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
152,853
2405.02510
Low-cost sensors and circuits for plasma education: characterizing power and illuminance
Industrial applications of plasma have significantly increased beyond semiconductor manufacturing in recent years. This necessitates training a skilled workforce in plasma science and technology. However, an essential challenge to this end stems from the high cost of plasma devices and diagnostics. The limited access to plasma devices has hindered plasma education, particularly in the least developed countries. To this end, this paper demonstrates how low-cost sensors and circuits can be developed to enable inexpensive plasma experiments in laboratory environments. In particular, we show how to measure high voltage, current, and power from a cold-atmospheric plasma discharge. Additionally, we develop a low-cost illuminance sensor and demonstrate how it can be used to estimate the corresponding plasma power. The low-cost sensors and electronics presented in this paper can aid educators in characterizing plasma power versus plasma illuminance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
451,791
2405.13527
End-to-End Real-World Polyphonic Piano Audio-to-Score Transcription with Hierarchical Decoding
Piano audio-to-score transcription (A2S) is an important yet underexplored task with extensive applications for music composition, practice, and analysis. However, existing end-to-end piano A2S systems faced difficulties in retrieving bar-level information such as key and time signatures, and have been trained and evaluated with only synthetic data. To address these limitations, we propose a sequence-to-sequence (Seq2Seq) model with a hierarchical decoder that aligns with the hierarchical structure of musical scores, enabling the transcription of score information at both the bar and note levels by multi-task learning. To bridge the gap between synthetic data and recordings of human performance, we propose a two-stage training scheme, which involves pre-training the model using an expressive performance rendering (EPR) system on synthetic audio, followed by fine-tuning the model using recordings of human performance. To preserve the voicing structure for score reconstruction, we propose a pre-processing method for **Kern scores in scenarios with an unconstrained number of voices. Experimental results support the effectiveness of our proposed approaches, in terms of both transcription performance on synthetic audio data in comparison to the current state-of-the-art, and the first experiment on human recordings.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
455,971
2305.13873
Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models
State-of-the-art Text-to-Image models like Stable Diffusion and DALLE$\cdot$2 are revolutionizing how people generate visual content. At the same time, society has serious concerns about how adversaries can exploit such models to generate unsafe images. In this work, we focus on demystifying the generation of unsafe images and hateful memes from Text-to-Image models. We first construct a typology of unsafe images consisting of five categories (sexually explicit, violent, disturbing, hateful, and political). Then, we assess the proportion of unsafe images generated by four advanced Text-to-Image models using four prompt datasets. We find that these models can generate a substantial percentage of unsafe images; across four models and four prompt datasets, 14.56% of all generated images are unsafe. When comparing the four models, we find different risk levels, with Stable Diffusion being the most prone to generating unsafe content (18.92% of all generated images are unsafe). Given Stable Diffusion's tendency to generate more unsafe content, we evaluate its potential to generate hateful meme variants if exploited by an adversary to attack a specific individual or community. We employ three image editing methods, DreamBooth, Textual Inversion, and SDEdit, which are supported by Stable Diffusion. Our evaluation result shows that 24% of the generated images using DreamBooth are hateful meme variants that present the features of the original hateful meme and the target individual/community; these generated images are comparable to hateful meme variants collected from the real world. Overall, our results demonstrate that the danger of large-scale generation of unsafe images is imminent. We discuss several mitigating measures, such as curating training data, regulating prompts, and implementing safety filters, and encourage better safeguard tools to be developed to prevent unsafe generation.
false
false
false
true
false
false
true
false
false
false
false
true
true
true
false
false
false
false
366,735
2106.03893
Rethinking Graph Transformers with Spectral Attention
In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the $\textit{Spectral Attention Network}$ (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph. This LPE is then added to the node features of the graph and passed to a fully-connected Transformer. By leveraging the full spectrum of the Laplacian, our model is theoretically powerful in distinguishing graphs, and can better detect similar sub-structures from their resonance. Further, by fully connecting the graph, the Transformer does not suffer from over-squashing, an information bottleneck of most GNNs, and enables better modeling of physical phenomena such as heat transfer and electric interaction. When tested empirically on a set of 4 standard datasets, our model performs on par with or better than state-of-the-art GNNs, and outperforms any attention-based model by a wide margin, becoming the first fully-connected architecture to perform well on graph benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
239,494
2405.05626
An Uncertainty-aware, Mesh-free Numerical Method for Kolmogorov PDEs
This study introduces an uncertainty-aware, mesh-free numerical method for solving Kolmogorov PDEs. In the proposed method, we use Gaussian process regression (GPR) to smoothly interpolate pointwise solutions that are obtained by Monte Carlo methods based on the Feynman-Kac formula. The proposed method has two main advantages: 1. uncertainty assessment, which is facilitated by the probabilistic nature of GPR, and 2. mesh-free computation, which allows efficient handling of high-dimensional PDEs. The quality of the solution is improved by adjusting the kernel function and incorporating noise information from the Monte Carlo samples into the GPR noise model. The performance of the method is rigorously analyzed based on a theoretical lower bound on the posterior variance, which serves as a measure of the error between the numerical and true solutions. Extensive tests on three representative PDEs demonstrate the high accuracy and robustness of the method compared to existing methods.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
452,999