id: string (lengths 9-16)
title: string (lengths 4-278)
abstract: string (lengths 3-4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0 to 541k)
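The schema above is a multi-label layout: each record carries one boolean column per arXiv category. A minimal sketch of collapsing those flags into a label list (the column names follow the schema above; the sample record and the `labels` helper are illustrative, not part of the dataset):

```python
# Per-category boolean columns, in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def labels(record: dict) -> list[str]:
    """Return the names of the category columns that are True for this record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col, False)]

# Illustrative record shaped like the first row below.
sample = {"id": "2108.11004", "cs.AI": True, "Other": True}
print(labels(sample))  # ['cs.AI', 'Other']
```

Each row below is summarized the same way: the run of true/false flags is collapsed into its active category labels.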
2108.11004
Reasoning about Counterfactuals and Explanations: Problems, Results and Directions
There are some recent approaches and results about the use of answer-set programming for specifying counterfactual interventions on entities under classification, and reasoning about them. These approaches are flexible and modular in that they allow the seamless addition of domain knowledge. Reasoning is enabled by query answering from the answer-set program. The programs can be used to specify and compute responsibility-based numerical scores as attributive explanations for classification results.
labels: cs.AI, Other | __index_level_0__: 252,057
2502.13447
Enhancing Chest X-ray Classification through Knowledge Injection in Cross-Modality Learning
The integration of artificial intelligence in medical imaging has shown tremendous potential, yet the relationship between pre-trained knowledge and performance in cross-modality learning remains unclear. This study investigates how explicitly injecting medical knowledge into the learning process affects the performance of cross-modality classification, focusing on Chest X-ray (CXR) images. We introduce a novel Set Theory-based knowledge injection framework that generates captions for CXR images with controllable knowledge granularity. Using this framework, we fine-tune CLIP model on captions with varying levels of medical information. We evaluate the model's performance through zero-shot classification on the CheXpert dataset, a benchmark for CXR classification. Our results demonstrate that injecting fine-grained medical knowledge substantially improves classification accuracy, achieving 72.5\% compared to 49.9\% when using human-generated captions. This highlights the crucial role of domain-specific knowledge in medical cross-modality learning. Furthermore, we explore the influence of knowledge density and the use of domain-specific Large Language Models (LLMs) for caption generation, finding that denser knowledge and specialized LLMs contribute to enhanced performance. This research advances medical image analysis by demonstrating the effectiveness of knowledge injection for improving automated CXR classification, paving the way for more accurate and reliable diagnostic tools.
labels: cs.CL, cs.CV | __index_level_0__: 535,368
2405.19143
DeepOKAN: Deep Operator Network Based on Kolmogorov Arnold Networks for Mechanics Problems
Modern digital engineering design often requires costly repeated simulations for different scenarios. The prediction capability of neural networks (NNs) makes them suitable surrogates for providing design insights. However, only a few NNs can efficiently handle complex engineering scenario predictions. We introduce a new version of neural operators called DeepOKAN, which utilizes Kolmogorov Arnold networks (KANs) rather than conventional neural network architectures. Our DeepOKAN uses Gaussian radial basis functions (RBFs) rather than B-splines. RBFs offer good approximation properties and are typically computationally fast. The KAN architecture, combined with RBFs, allows DeepOKANs to better represent intricate relationships between input parameters and output fields, resulting in more accurate predictions across various mechanics problems. Specifically, we evaluate DeepOKAN's performance on several mechanics problems, including 1D sinusoidal waves, 2D orthotropic elasticity, and a transient Poisson problem, consistently achieving lower training losses and more accurate predictions compared to traditional DeepONets. This approach should pave the way for further improving the performance of neural operators.
labels: cs.CE | __index_level_0__: 458,764
1711.10453
Deep Predictive Models for Collision Risk Assessment in Autonomous Driving
In this paper, we investigate a predictive approach for collision risk assessment in autonomous and assisted driving. A deep predictive model is trained to anticipate imminent accidents from traditional video streams. In particular, the model learns to identify cues in RGB images that are predictive of hazardous upcoming situations. In contrast to previous work, our approach incorporates (a) temporal information during decision making, (b) multi-modal information about the environment, as well as the proprioceptive state and steering actions of the controlled vehicle, and (c) information about the uncertainty inherent to the task. To this end, we discuss Deep Predictive Models and present an implementation using a Bayesian Convolutional LSTM. Experiments in a simple simulation environment show that the approach can learn to predict impending accidents with reasonable accuracy, especially when multiple cameras are used as input sources.
labels: cs.RO | __index_level_0__: 85,606
1105.5306
On the Generalized Degrees of Freedom of the K-user Symmetric MIMO Gaussian Interference Channel
The K-user symmetric multiple input multiple output (MIMO) Gaussian interference channel (IC) where each transmitter has M antennas and each receiver has N antennas is studied from a generalized degrees of freedom (GDOF) perspective. An inner bound on the GDOF is derived using a combination of techniques such as treating interference as noise, zero forcing (ZF) at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, as a function of the number of antennas and the log (INR) / log (SNR) level. Three outer bounds are derived, under different assumptions of cooperation and providing side information to receivers. The novelty in the derivation lies in the careful selection of side information, which results in the cancellation of the negative differential entropy terms containing signal components, leading to a tractable outer bound. The overall outer bound is obtained by taking the minimum of the three outer bounds. The derived bounds are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on the generalized degrees of freedom (GDOF). Several interesting conclusions are drawn from the derived bounds. For example, when K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. When N/M < K <= N/M + 1, the HK-scheme outperforms other schemes and is shown to be GDOF optimal. In addition, when the SNR and INR are at the same level, ZF-receiving and the HK-scheme have the same GDOF performance. It is also shown that many of the existing results on the GDOF of the Gaussian IC can be obtained as special cases of the bounds, e.g., by setting K=2 or the number of antennas at each user to 1.
labels: cs.IT | __index_level_0__: 10,507
1007.1213
Composite Cyclotomic Fourier Transforms with Reduced Complexities
Discrete Fourier transforms~(DFTs) over finite fields have widespread applications in digital communication and storage systems. Hence, reducing the computational complexities of DFTs is of great significance. Recently proposed cyclotomic fast Fourier transforms (CFFTs) are promising due to their low multiplicative complexities. Unfortunately, there are two issues with CFFTs: (1) they rely on efficient short cyclic convolution algorithms, which have not been investigated thoroughly yet, and (2) they have very high additive complexities when directly implemented. In this paper, we address both issues. One of the main contributions of this paper is efficient bilinear 11-point cyclic convolution algorithms, which allow us to construct CFFTs over GF$(2^{11})$. The other main contribution of this paper is that we propose composite cyclotomic Fourier transforms (CCFTs). In comparison to previously proposed fast Fourier transforms, our CCFTs achieve lower overall complexities for moderate to long lengths, and the improvement significantly increases as the length grows. Our 2047-point and 4095-point CCFTs are also, to the best of our knowledge, the first efficient DFTs of such lengths. Finally, our CCFTs are also advantageous for hardware implementations due to their regular and modular structure.
labels: cs.IT | __index_level_0__: 7,016
2203.14485
Optimization of Directional Landmark Deployment for Visual Observer on SE(3)
An optimization method is proposed in this paper for the novel deployment of a given number of directional landmarks (location and pose) within a given region in the 3-D task space. This new deployment technique is built on the geometric models of both landmarks and the monocular camera. In particular, a new concept of Multiple Coverage Probability (MCP) is defined to characterize the probability of at least n landmarks being covered simultaneously by a camera at a fixed position. The optimization is conducted with respect to the position and pose of the given number of landmarks to maximize MCP through global exploration of the given 3-D space. By adopting the elimination genetic algorithm, globally optimal solutions can be obtained, which are then applied to improve the convergent performance of the visual observer on SE(3) as a demonstration example. Both simulation and experimental results are presented to validate the effectiveness of the proposed landmark deployment optimization method.
labels: cs.SY, cs.CV | __index_level_0__: 288,027
1606.00720
Differentially Private Gaussian Processes
A major challenge for machine learning is increasing the availability of data while respecting the privacy of individuals. Here we combine the provable privacy guarantees of the differential privacy framework with the flexibility of Gaussian processes (GPs). We propose a method using GPs to provide differentially private (DP) regression. We then improve this method by crafting the DP noise covariance structure to efficiently protect the training data, while minimising the scale of the added noise. We find that this cloaking method achieves the greatest accuracy, while still providing privacy guarantees, and offers practical DP for regression over multi-dimensional inputs. Together these methods provide a starter toolkit for combining differential privacy and GPs.
labels: cs.LG | __index_level_0__: 56,700
2105.05920
Attention-based Neural Beamforming Layers for Multi-channel Speech Recognition
Attention-based beamformers have recently been shown to be effective for multi-channel speech recognition. However, they are less capable of capturing local information. In this work, we propose a 2D Conv-Attention module which combines convolutional neural networks with attention for beamforming. We apply self- and cross-attention to explicitly model the correlations within and between the input channels. The end-to-end 2D Conv-Attention model is compared with multi-head self-attention and superdirective-based neural beamformers. We train and evaluate on an in-house multi-channel dataset. The results show a relative improvement of 3.8% in WER by the proposed model over the baseline neural beamformer.
labels: cs.SD, cs.LG | __index_level_0__: 234,958
1705.09914
Dilated Residual Networks
Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model's depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts (`degridding'), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.
labels: cs.CV | __index_level_0__: 74,301
2206.07571
Efficient decoding up to a constant fraction of the code length for asymptotically good quantum codes
We introduce and analyse an efficient decoder for quantum Tanner codes that can correct adversarial errors of linear weight. Previous decoders for quantum low-density parity-check codes could only handle adversarial errors of weight $O(\sqrt{n \log n})$. We also examine the link between quantum Tanner codes and the Lifted Product codes of Panteleev and Kalachev, and show that our decoder can be adapted to the latter. The decoding algorithm alternates between sequential and parallel procedures and converges in linear time.
labels: cs.IT, Other | __index_level_0__: 302,790
1803.08793
Exploring the Naturalness of Buggy Code with Recurrent Neural Networks
Statistical language models are powerful tools which have been used for many tasks within natural language processing. Recently, they have been used for other sequential data such as source code. Ray et al. (2015) showed that it is possible to train an n-gram source code language model, and use it to predict buggy lines in code by determining "unnatural" lines via entropy with respect to the language model. In this work, we propose using a more advanced language modeling technique, Long Short-term Memory recurrent neural networks, to model source code and classify buggy lines based on entropy. We show that our method slightly outperforms an n-gram model in the buggy line classification task using AUC.
labels: cs.LG, cs.CL, Other | __index_level_0__: 93,339
1810.00838
Multimodal Interactive Learning of Primitive Actions
We describe an ongoing project in learning to perform primitive actions from demonstrations using an interactive interface. In our previous work, we have used demonstrations captured from humans performing actions as training samples for a neural network-based trajectory model of actions to be performed by a computational agent in novel setups. We found that our original framework had some limitations that we hope to overcome by incorporating communication between the human and the computational agent, using the interaction between them to fine-tune the model learned by the machine. We propose a framework that uses multimodal human-computer interaction to teach action concepts to machines, making use of both live demonstration and communication through natural language, as two distinct teaching modalities, while requiring few training samples.
labels: cs.RO | __index_level_0__: 109,269
1506.04422
A Fast Incremental Gaussian Mixture Model
This work builds upon previous efforts in online incremental learning, namely the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of learning from data streams in a single-pass by improving its model after analyzing each data point and discarding it thereafter. Nevertheless, it suffers from the scalability point-of-view, due to its asymptotic time complexity of $\operatorname{O}\bigl(NKD^3\bigr)$ for $N$ data points, $K$ Gaussian components and $D$ dimensions, rendering it inadequate for high-dimensional data. In this paper, we manage to reduce this complexity to $\operatorname{O}\bigl(NKD^2\bigr)$ by deriving formulas for working directly with precision matrices instead of covariance matrices. The final result is a much faster and scalable algorithm which can be applied to high dimensional tasks. This is confirmed by applying the modified algorithm to high-dimensional classification datasets.
labels: cs.LG | __index_level_0__: 44,165
2411.06711
Anytime Probabilistically Constrained Provably Convergent Online Belief Space Planning
Taking into account future risk is essential for an autonomously operating robot to find online not only the best but also a safe action to execute. In this paper, we build upon the recently introduced formulation of probabilistic belief-dependent constraints. We present an anytime approach employing the Monte Carlo Tree Search (MCTS) method in continuous domains. Unlike previous approaches, our method assures safety anytime with respect to the currently expanded search tree without relying on the convergence of the search. We prove convergence in probability with an exponential rate of a version of our algorithms and study the proposed techniques via extensive simulations. Even with a tiny number of tree queries, the best action found by our approach is much safer than the baseline's. Moreover, our approach consistently finds a better action than the baseline in terms of the objective. This is because we revise the values and statistics maintained in the search tree and remove from them the contribution of the pruned actions.
labels: cs.AI, cs.RO | __index_level_0__: 507,231
2212.05897
MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels
We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main previous approaches, action- and motion-conditioned methods, have limitations in solving this problem. Action-conditioned methods generate a sequence of motion from a single action; hence, they cannot generate long-term motions composed of multiple actions and transitions between actions. Meanwhile, motion-conditioned methods generate future motions from an initial motion. The generated future motions only depend on the past, so they are not controllable by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct takes account of both action and motion conditions with a unified recurrent generation system. It repetitively takes the previous motion and an action label; then, it generates a smooth transition and the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of multiple action labels. Codes are available at https://github.com/TaeryungLee/MultiAct_RELEASE.
labels: cs.CV | __index_level_0__: 335,929
2301.04195
Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments
We present Orbit, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently create robotic environments with photo-realistic scenes and high-fidelity rigid and deformable body simulation. With Orbit, we provide a suite of benchmark tasks of varying difficulty -- from single-stage cabinet opening and cloth folding to multi-stage tasks such as room reorganization. To support working with diverse observations and action spaces, we include fixed-arm and mobile manipulators with different physically-based sensors and motion generators. Orbit allows training reinforcement learning policies and collecting large demonstration datasets from hand-crafted or expert solutions in a matter of minutes by leveraging GPU-based parallelization. In summary, we offer an open-sourced framework that readily comes with 16 robotic platforms, 4 sensor modalities, 10 motion generators, more than 20 benchmark tasks, and wrappers to 4 learning libraries. With this framework, we aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and task and motion planning. We hope it helps establish interdisciplinary collaborations in these communities, and its modularity makes it easily extensible for more tasks and applications in the future.
labels: cs.AI, cs.RO | __index_level_0__: 339,985
1911.11536
Electricity Load Forecasting -- An Evaluation of Simple 1D-CNN Network Structures
This paper presents a convolutional neural network (CNN) which can be used for forecasting electricity load profiles 36 hours into the future. In contrast to well established CNN architectures, the input data is one-dimensional. A scan of network parameters is conducted in order to gain information about the influence of the kernel size, number of filters, and dense size. The results show that a good forecast quality can already be achieved with basic CNN architectures. The method works not only for smooth sum loads of many hundred consumers, but also for the load of apartment buildings.
labels: cs.LG | __index_level_0__: 155,155
2402.18571
Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
Fine-grained control over large language models (LLMs) remains a significant challenge, hindering their adaptability to diverse user needs. While Reinforcement Learning from Human Feedback (RLHF) shows promise in aligning LLMs, its reliance on scalar rewards often limits its ability to capture diverse user preferences in real-world applications. To address this limitation, we introduce the Directional Preference Alignment (DPA) framework. Unlike the scalar-reward RLHF, DPA incorporates multi-objective reward modeling to represent diverse preference profiles. Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control. Our method involves training a multi-objective reward model and then fine-tuning the LLM with a preference-conditioned variant of Rejection Sampling Finetuning (RSF), an RLHF method adopted by Llama 2. This method enjoys a better performance trade-off across various reward objectives. In comparison with the scalar-reward RLHF, DPA offers users intuitive control over LLM generation: they can arithmetically specify their desired trade-offs (e.g., more helpfulness with less verbosity). We also validate the effectiveness of DPA with real-world alignment experiments on Mistral-7B. Our method provides straightforward arithmetic control over the trade-off between helpfulness and verbosity while maintaining competitive performance with strong baselines such as Direct Preference Optimization (DPO).
labels: cs.AI, cs.LG, cs.CL | __index_level_0__: 433,473
1903.00161
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
Reading comprehension has recently seen rapid progress, with systems matching humans on the most popular datasets for the task. However, a large body of work has highlighted the brittleness of these systems, showing that there is much work left to be done. We introduce a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs. In this crowdsourced, adversarially-created, 96k-question benchmark, a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. We apply state-of-the-art methods from both the reading comprehension and semantic parsing literature on this dataset and show that the best systems only achieve 32.7% F1 on our generalized accuracy metric, while expert human performance is 96.0%. We additionally present a new model that combines reading comprehension methods with simple numerical reasoning to achieve 47.0% F1.
labels: cs.CL | __index_level_0__: 122,961
2008.11825
SHAP values for Explaining CNN-based Text Classification Models
Deep neural networks are increasingly used in natural language processing (NLP) models. However, the need to interpret and explain the results from complex algorithms are limiting their widespread adoption in regulated industries such as banking. There has been recent work on interpretability of machine learning algorithms with structured data. But there are only limited techniques for NLP applications where the problem is more challenging due to the size of the vocabulary, high-dimensional nature, and the need to consider textual coherence and language structure. This paper develops a methodology to compute SHAP values for local explainability of CNN-based text classification models. The approach is also extended to compute global scores to assess the importance of features. The results are illustrated on sentiment analysis of Amazon Electronic Review data.
labels: cs.LG, cs.CL | __index_level_0__: 193,386
2212.08622
Reconfigurable Wearable Antenna for 5G Applications using Nematic Liquid Crystals
The antenna is one of the key building blocks of many wearable electronic devices, and its functions include wireless communications, energy harvesting and radiative wireless power transfer (WPT). In an effort to realise lightweight, autonomous and battery-less wearable devices, we demonstrate a reconfigurable antenna design for 5G wearable applications that requires ultra-low driving voltages ($0.4$-$0.6\,$V) and operates over a high frequency range ($3.3$-$3.8\,$GHz). For smart glasses applications, previous antenna designs were `fixed' and mounted on the eyeglass frame itself. Here, we demonstrate a reconfigurable design that could be achieved on the lens itself, using an anisotropic liquid crystal (LC) material. We demonstrate how LC alignment and electric field patterns strongly influence the tuning capabilities of these antennas in the gigahertz range and present a smart, reconfigurable spiral antenna system with an LC substrate.
labels: cs.HC, cs.SY | __index_level_0__: 336,813
2012.03411
MLS: A Large-Scale Multilingual Dataset for Speech Research
This paper introduces the Multilingual LibriSpeech (MLS) dataset, a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages, including about 44.5K hours of English and a total of about 6K hours for the other languages. Additionally, we provide Language Models (LM) and baseline Automatic Speech Recognition (ASR) models for all the languages in our dataset. We believe such a large transcribed dataset will open new avenues in ASR and Text-To-Speech (TTS) research. The dataset will be made freely available for anyone at http://www.openslr.org.
labels: cs.SD, cs.CL | __index_level_0__: 210,099
1903.07120
Stabilize Deep ResNet with A Sharp Scaling Factor $\tau$
We study the stability and convergence of training deep ResNets with gradient descent. Specifically, we show that the parametric branch in the residual block should be scaled down by a factor $\tau =O(1/\sqrt{L})$ to guarantee stable forward/backward process, where $L$ is the number of residual blocks. Moreover, we establish a converse result that the forward process is unbounded when $\tau>L^{-\frac{1}{2}+c}$, for any positive constant $c$. The above two results together establish a sharp value of the scaling factor in determining the stability of deep ResNet. Based on the stability result, we further show that gradient descent finds the global minima if the ResNet is properly over-parameterized, which significantly improves over the previous work with a much larger range of $\tau$ that admits global convergence. Moreover, we show that the convergence rate is independent of the depth, theoretically justifying the advantage of ResNet over vanilla feedforward network. Empirically, with such a factor $\tau$, one can train deep ResNet without normalization layer. Moreover, for ResNets with normalization layer, adding such a factor $\tau$ also stabilizes the training and obtains significant performance gain for deep ResNet.
labels: cs.LG | __index_level_0__: 124,537
2310.09503
JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues
The rising importance of 3D understanding, pivotal in computer vision, autonomous driving, and robotics, is evident. However, a prevailing trend, which straightforwardly resorted to transferring 2D alignment strategies to the 3D domain, encounters three distinct challenges: (1) Information Degradation: This arises from the alignment of 3D data with mere single-view 2D images and generic texts, neglecting the need for multi-view images and detailed subcategory texts. (2) Insufficient Synergy: These strategies align 3D representations to image and text features individually, hampering the overall optimization for 3D models. (3) Underutilization: The fine-grained information inherent in the learned representations is often not fully exploited, indicating a potential loss in detail. To address these issues, we introduce JM3D, a comprehensive approach integrating point cloud, text, and image. Key contributions include the Structured Multimodal Organizer (SMO), enriching vision-language representation with multiple views and hierarchical text, and the Joint Multi-modal Alignment (JMA), combining language understanding with visual representation. Our advanced model, JM3D-LLM, marries 3D representation with large language models via efficient fine-tuning. Evaluations on ModelNet40 and ScanObjectNN establish JM3D's superiority. The superior performance of JM3D-LLM further underscores the effectiveness of our representation transfer approach. Our code and models are available at https://github.com/Mr-Neko/JM3D.
labels: cs.CV | __index_level_0__: 399,805
2401.11627
Tight Verification of Probabilistic Robustness in Bayesian Neural Networks
We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs). Computing robustness guarantees for BNNs is a significantly more challenging task than verifying the robustness of standard Neural Networks (NNs) because it requires searching the parameters' space for safe weights. Moreover, tight and complete approaches for the verification of standard NNs, such as those based on Mixed-Integer Linear Programming (MILP), cannot be directly used for the verification of BNNs because of the polynomial terms resulting from the consecutive multiplication of variables encoding the weights. Our algorithms efficiently and effectively search the parameters' space for safe weights by using iterative expansion and the network's gradient and can be used with any verification algorithm of choice for BNNs. In addition to proving that our algorithms compute tighter bounds than the SoA, we also evaluate our algorithms against the SoA on standard benchmarks, such as MNIST and CIFAR10, showing that our algorithms compute bounds up to 40% tighter than the SoA.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
423,076
2401.15469
Wind speed super-resolution and validation: from ERA5 to CERRA via diffusion models
The Copernicus Regional Reanalysis for Europe, CERRA, is a high-resolution regional reanalysis dataset for the European domain. In recent years it has shown significant utility across various climate-related tasks, ranging from forecasting and climate change research to renewable energy prediction, resource management, air quality risk assessment, and the forecasting of rare events, among others. Unfortunately, the availability of CERRA is lagging two years behind the current date, due to constraints in acquiring the requisite external data and the intensive computational demands inherent in its generation. As a solution, this paper introduces a novel method using diffusion models to approximate CERRA downscaling in a data-driven manner, without additional information. By leveraging the lower resolution ERA5 dataset, which provides boundary conditions for CERRA, we approach this as a super-resolution task. Focusing on wind speed around Italy, our model, trained on existing CERRA data, shows promising results, closely mirroring the original CERRA data. Validation with in-situ observations further confirms the model's accuracy in approximating ground measurements.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
424,455
1205.2909
Evolution of robust network topologies: Emergence of central backbones
We model the robustness against random failure or intentional attack of networks with arbitrary large-scale structure. We construct a block-based model which incorporates --- in a general fashion --- both connectivity and interdependence links, as well as arbitrary degree distributions and block correlations. By optimizing the percolation properties of this general class of networks, we identify a simple core-periphery structure as the topology most robust against random failure. In such networks, a distinct and small "core" of nodes with higher degree is responsible for most of the connectivity, functioning as a central "backbone" of the system. This centralized topology remains the optimal structure when other constraints are imposed, such as a given fraction of interdependence links and fixed degree distributions. This distinguishes simple centralized topologies as the most likely to emerge, when robustness against failure is the dominant evolutionary force.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
15,988
2311.00922
Research Team Identification Based on Representation Learning of Academic Heterogeneous Information Network
Academic networks in the real world can usually be described by heterogeneous information networks composed of multi-type nodes and relationships. Some existing research on representation learning targets homogeneous information networks and therefore cannot be applied directly to heterogeneous information networks. Aiming at the practical need to effectively identify and discover scientific research teams from the academic heterogeneous information network composed of massive and complex scientific and technological big data, this paper proposes a scientific research team identification method based on representation learning of academic heterogeneous information networks. Attention mechanisms at the node level and the meta-path level learn low-dimensional, dense, real-valued vector representations that retain both the rich topological information of nodes in the network and the meta-path-based semantic information, and realize effective identification and discovery of scientific research teams and important team members in academic heterogeneous information networks by maximizing node influence. Experimental results show that our proposed method outperforms the comparative methods.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
404,821
1810.10341
Visions of a generalized probability theory
In this Book we argue that the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. From a methodological point of view, novel theoretical results concerning the geometric and algebraic properties of belief functions as mathematical objects are illustrated and discussed in Part II, with a focus on both a perspective 'geometric approach' to uncertainty and an algebraic solution to the issue of conflicting evidence. In Part III we show how these theoretical developments arise from important computer vision problems (such as articulated object tracking, data association and object pose estimation) to which, in turn, the evidential formalism is able to provide interesting new solutions. Finally, some initial steps towards a generalization of the notion of total probability to belief functions are taken, in the perspective of endowing the theory of evidence with a complete battery of estimation and inference tools to the benefit of all scientists and practitioners.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
111,268
2502.03381
Integrating automatic speech recognition into remote healthcare interpreting: A pilot study of its impact on interpreting quality
This paper reports on the results from a pilot study investigating the impact of automatic speech recognition (ASR) technology on interpreting quality in remote healthcare interpreting settings. Employing a within-subjects experiment design with four randomised conditions, this study utilises scripted medical consultations to simulate dialogue interpreting tasks. It involves four trainee interpreters with a language combination of Chinese and English. It also gathers participants' experience and perceptions of ASR support through cued retrospective reports and semi-structured interviews. Preliminary data suggest that the availability of ASR, specifically the access to full ASR transcripts and to ChatGPT-generated summaries based on ASR, effectively improved interpreting quality. Varying types of ASR output had different impacts on the distribution of interpreting error types. Participants reported similar interactive experiences with the technology, expressing their preference for full ASR transcripts. This pilot study shows encouraging results of applying ASR to dialogue-based healthcare interpreting and offers insights into the optimal ways to present ASR output to enhance interpreter experience and performance. However, it should be emphasised that the main purpose of this study was to validate the methodology and that further research with a larger sample size is necessary to confirm these findings.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
530,699
1506.04229
Evaluation of the Accuracy of the BGLemmatizer
This paper reveals the results of an analysis of the accuracy of developed software for automatic lemmatization for the Bulgarian language. This lemmatization software is written entirely in Java and is distributed as a GATE plugin. Certain statistical methods are used to define the accuracy of this software. The results of the analysis show 95% lemmatization accuracy.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
44,141
2302.08675
DREEAM: Guiding Attention with Evidence for Improving Document-Level Relation Extraction
Document-level relation extraction (DocRE) is the task of identifying all relations between each entity pair in a document. Evidence, defined as sentences containing clues for the relationship between an entity pair, has been shown to help DocRE systems focus on relevant texts, thus improving relation extraction. However, evidence retrieval (ER) in DocRE faces two major issues: high memory consumption and limited availability of annotations. This work aims at addressing these issues to improve the usage of ER in DocRE. First, we propose DREEAM, a memory-efficient approach that adopts evidence information as the supervisory signal, thereby guiding the attention modules of the DocRE system to assign high weights to evidence. Second, we propose a self-training strategy for DREEAM to learn ER from automatically-generated evidence on massive data without evidence annotations. Experimental results reveal that our approach exhibits state-of-the-art performance on the DocRED benchmark for both DocRE and ER. To the best of our knowledge, DREEAM is the first approach to employ ER self-training.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
346,137
2004.08673
A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention
The Web has become the main platform where people express their opinions about entities of interest and their associated aspects. Aspect-Based Sentiment Analysis (ABSA) aims to automatically compute the sentiment towards these aspects from opinionated text. In this paper we extend the state-of-the-art Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) method in two directions. First we replace the non-contextual word embeddings with deep contextual word embeddings in order to better cope with the word semantics in a given text. Second, we use hierarchical attention by adding an extra attention layer to the HAABSA high-level representations in order to increase the method flexibility in modeling the input data. Using two standard datasets (SemEval 2015 and SemEval 2016) we show that the proposed extensions improve the accuracy of the built model for ABSA.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
173,136
1912.09727
Computation of the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints
We consider the problem of computing the maximal invariant set of discrete-time linear systems subject to a class of non-convex constraints that admit quadratic relaxations. These non-convex constraints include semialgebraic sets and other smooth constraints with Lipschitz gradient. With these quadratic relaxations, a sufficient condition for set invariance is derived and it can be formulated as a set of linear matrix inequalities. Based on the sufficient condition, a new algorithm is presented with finite-time convergence to the actual maximal invariant set under mild assumptions. This algorithm can be also extended to switched linear systems and some special nonlinear systems. The performance of this algorithm is demonstrated on several numerical examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
158,145
2109.03542
Autonomous search of an airborne release in urban environments using informed tree planning
The use of autonomous vehicles for chemical source localisation is a key enabling tool for disaster response teams to safely and efficiently deal with chemical emergencies. Whilst much work has been performed on source localisation using autonomous systems, most previous works have assumed an open environment or employed simplistic obstacle avoidance, separate to the estimation procedure. In this paper, we explore the coupling of the path planning task for both source term estimation and obstacle avoidance in a holistic framework. The proposed system intelligently produces potential gas sampling locations based on the current estimation of the wind field and the local map. Then a tree search is performed to generate paths toward the estimated source location that traverse around any obstacles and still allow for exploration of potentially superior sampling locations. The proposed informed tree planning algorithm is then tested against the Entrotaxis technique in a series of high fidelity simulations. The proposed system is found to reduce source position error far more efficiently than Entrotaxis in a feature rich environment, whilst also exhibiting vastly more consistent and robust results.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
254,104
2402.01613
Nomic Embed: Training a Reproducible Long Context Text Embedder
This technical report describes the training of nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data, 8192 context length English text embedding model that outperforms both OpenAI Ada-002 and OpenAI text-embedding-3-small on the short-context MTEB benchmark and the long context LoCo benchmark. We release the training code and model weights under an Apache 2.0 license. In contrast with other open-source models, we release the full curated training data and code that allows for full replication of nomic-embed-text-v1. You can find code and data to replicate the model at https://github.com/nomic-ai/contrastors.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
426,099
2406.07585
Rate-Preserving Reductions for Blackwell Approachability
Abernethy et al. (2011) showed that Blackwell approachability and no-regret learning are equivalent, in the sense that any algorithm that solves a specific Blackwell approachability instance can be converted to a sublinear regret algorithm for a specific no-regret learning instance, and vice versa. In this paper, we study a more fine-grained form of such reductions, and ask when this translation between problems preserves not only a sublinear rate of convergence, but also preserves the optimal rate of convergence. That is, in which cases does it suffice to find the optimal regret bound for a no-regret learning instance in order to find the optimal rate of convergence for a corresponding approachability instance? We show that the reduction of Abernethy et al. (2011) does not preserve rates: their reduction may reduce a $d$-dimensional approachability instance $I_1$ with optimal convergence rate $R_1$ to a no-regret learning instance $I_2$ with optimal regret-per-round of $R_2$, with $R_{2}/R_{1}$ arbitrarily large (in particular, it is possible that $R_1 = 0$ and $R_{2} > 0$). On the other hand, we show that it is possible to tightly reduce any approachability instance to an instance of a generalized form of regret minimization we call improper $\phi$-regret minimization (a variant of the $\phi$-regret minimization of Gordon et al. (2008) where the transformation functions may map actions outside of the action set). Finally, we characterize when linear transformations suffice to reduce improper $\phi$-regret minimization problems to standard classes of regret minimization problems in a rate preserving manner. We prove that some improper $\phi$-regret minimization instances cannot be reduced to either subclass of instance in this way, suggesting that approachability can capture some problems that cannot be phrased in the language of online learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
463,127
2108.01450
Is Disentanglement enough? On Latent Representations for Controllable Music Generation
Improving controllability or the ability to manipulate one or more attributes of the generated data has become a topic of interest in the context of deep generative models of music. Recent attempts in this direction have relied on learning disentangled representations from data such that the underlying factors of variation are well separated. In this paper, we focus on the relationship between disentanglement and controllability by conducting a systematic study using different supervised disentanglement learning algorithms based on the Variational Auto-Encoder (VAE) architecture. Our experiments show that a high degree of disentanglement can be achieved by using different forms of supervision to train a strong discriminative encoder. However, in the absence of a strong generative decoder, disentanglement does not necessarily imply controllability. The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes. To this end, we also propose methods and metrics to help evaluate the quality of a latent space with respect to the afforded degree of controllability.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
249,029
1906.04159
Inference and Uncertainty Quantification for Noisy Matrix Completion
Noisy matrix completion aims at estimating a low-rank matrix given only partial and corrupted entries. Despite substantial progress in designing efficient estimation algorithms, it remains largely unclear how to assess the uncertainty of the obtained estimates and how to perform statistical inference on the unknown matrix (e.g. constructing a valid and short confidence interval for an unseen entry). This paper takes a step towards inference and uncertainty quantification for noisy matrix completion. We develop a simple procedure to compensate for the bias of the widely used convex and nonconvex estimators. The resulting de-biased estimators admit nearly precise non-asymptotic distributional characterizations, which in turn enable optimal construction of confidence intervals/regions for, say, the missing entries and the low-rank factors. Our inferential procedures do not rely on sample splitting, thus avoiding unnecessary loss of data efficiency. As a byproduct, we obtain a sharp characterization of the estimation accuracy of our de-biased estimators, which, to the best of our knowledge, are the first tractable algorithms that provably achieve full statistical efficiency (including the preconstant). The analysis herein is built upon the intimate link between convex and nonconvex optimization, an appealing feature recently discovered by Chen et al. (2019).
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
134,622
1808.09465
The University of Cambridge's Machine Translation Systems for WMT18
The University of Cambridge submission to the WMT18 news translation task focuses on the combination of diverse models of translation. We compare recurrent, convolutional, and self-attention-based neural models on German-English, English-German, and Chinese-English. Our final system combines all neural models together with a phrase-based SMT system in an MBR-based scheme. We report small but consistent gains on top of strong Transformer ensembles.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
106,193
2102.13327
Mitigating Domain Mismatch in Face Recognition Using Style Matching
Despite outstanding performance on public benchmarks, face recognition still suffers due to domain mismatch between training (source) and testing (target) data. Furthermore, the source and target domains do not share classes, which complicates domain adaptation. Since this is also a fine-grained classification problem which does not strictly follow the low-density separation principle, conventional domain adaptation approaches do not resolve these problems. In this paper, we formulate domain mismatch in face recognition as a style mismatch problem for which we propose two methods. First, we design a domain discriminator with human-level judgment to mine target-like images in the training data to mitigate the domain gap. Second, we extract style representations in low-level feature maps of the backbone model, and match the style distributions of the two domains to find a common style representation. Evaluations on verification and open-set and closed-set identification protocols show that both methods yield good improvements, and that performance is more robust if they are combined. Our approach is competitive with related work, and its effectiveness is verified in a practical application.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
222,026
2009.05700
Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations
Many real-world applications involve black-box optimization of multiple objectives using continuous function approximations that trade-off accuracy and resource cost of evaluation. For example, in rocket launching research, we need to find designs that trade-off return-time and angular distance using continuous-fidelity simulators (e.g., varying tolerance parameter to trade-off simulation time and accuracy) for design evaluations. The goal is to approximate the optimal Pareto set by minimizing the cost for evaluations. In this paper, we propose a novel approach referred to as Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem. The key idea is to select the sequence of input and function approximations for multiple objectives which maximize the information gain per unit cost for the optimal Pareto front. Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
195,396
1701.08585
Variational Policy for Guiding Point Processes
Temporal point processes have been widely applied to model event sequence data generated by online users. In this paper, we consider the problem of how to design the optimal control policy for point processes, such that the stochastic system driven by the point process is steered to a target state. In particular, we exploit the key insight to view the stochastic optimal control problem from the perspective of optimal measure and variational inference. We further propose a convex optimization framework and an efficient algorithm to update the policy adaptively to the current system state. Experiments on synthetic and real-world data show that our algorithm can steer the user activities much more accurately and efficiently than other stochastic control methods.
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
false
67,490
1910.12686
Growing axons: greedy learning of neural networks with application to function approximation
We propose a new method for learning deep neural network models that is based on a greedy learning approach: we add one basis function at a time, and a new basis function is generated as a non-linear activation function applied to a linear combination of the previous basis functions. Such a method (growing deep neural network by one neuron at a time) allows us to compute much more accurate approximants for several model problems in function approximation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
151,163
2012.04925
Towards Annotation-Free Evaluation of Cross-Lingual Image Captioning
Cross-lingual image captioning, with its ability to caption an unlabeled image in a target language other than English, is an emerging topic in the multimedia field. In order to save the precious human resource from re-writing reference sentences per target language, in this paper we make a brave attempt towards annotation-free evaluation of cross-lingual image captioning. Depending on whether we assume the availability of English references, two scenarios are investigated. For the first scenario with the references available, we propose two metrics, i.e., WMDRel and CLinRel. WMDRel measures the semantic relevance between a model-generated caption and machine translation of an English reference using their Word Mover's Distance. By projecting both captions into a deep visual feature space, CLinRel is a visual-oriented cross-lingual relevance measure. As for the second scenario, which has zero reference and is thus more challenging, we propose CMedRel to compute a cross-media relevance between the generated caption and the image content, in the same visual feature space as used by CLinRel. The promising results show high potential of the new metrics for evaluation with no need of references in the target language.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
210,619
2412.04955
MixedGaussianAvatar: Realistically and Geometrically Accurate Head Avatar via Mixed 2D-3D Gaussian Splatting
Reconstructing high-fidelity 3D head avatars is crucial in various applications such as virtual reality. The pioneering methods reconstruct realistic head avatars with Neural Radiance Fields (NeRF), which have been limited by training and rendering speed. Recent methods based on 3D Gaussian Splatting (3DGS) significantly improve the efficiency of training and rendering. However, the surface inconsistency of 3DGS results in subpar geometric accuracy; later, 2DGS uses 2D surfels to enhance geometric accuracy at the expense of rendering fidelity. To leverage the benefits of both 2DGS and 3DGS, we propose a novel method named MixedGaussianAvatar for realistically and geometrically accurate head avatar reconstruction. Our main idea is to utilize 2D Gaussians to reconstruct the surface of the 3D head, ensuring geometric accuracy. We attach the 2D Gaussians to the triangular mesh of the FLAME model and connect additional 3D Gaussians to those 2D Gaussians where the rendering quality of 2DGS is inadequate, creating a mixed 2D-3D Gaussian representation. These 2D-3D Gaussians can then be animated using FLAME parameters. We further introduce a progressive training strategy that first trains the 2D Gaussians and then fine-tunes the mixed 2D-3D Gaussians. We demonstrate the superiority of MixedGaussianAvatar through comprehensive experiments. The code will be released at: https://github.com/ChenVoid/MGA/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
514,646
2211.00828
Synthesizing Programs with Continuous Optimization
Automatic software generation based on some specification is known as program synthesis. Most existing approaches formulate program synthesis as a search problem with discrete parameters. In this paper, we present a novel formulation of program synthesis as a continuous optimization problem and use a state-of-the-art evolutionary approach, known as Covariance Matrix Adaptation Evolution Strategy to solve the problem. We then propose a mapping scheme to convert the continuous formulation into actual programs. We compare our system, called GENESYS, with several recent program synthesis techniques (in both discrete and continuous domains) and show that GENESYS synthesizes more programs within a fixed time budget than those existing schemes. For example, for programs of length 10, GENESYS synthesizes 28% more programs than those existing schemes within the same time budget.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
328,014
2403.14578
RAmBLA: A Framework for Evaluating the Reliability of LLMs as Assistants in the Biomedical Domain
Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potential high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants (RAmBLA) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring LLM freeform responses mimicking real-world user interactions. We evaluate LLM performance using semantic similarity with a ground truth response, through an evaluator LLM.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
440,136
2502.02140
An Information-Theoretic Analysis of Thompson Sampling with Infinite Action Spaces
This paper studies the Bayesian regret of the Thompson Sampling algorithm for bandit problems, building on the information-theoretic framework introduced by Russo and Van Roy (2015). Specifically, it extends the rate-distortion analysis of Dong and Van Roy (2018), which provides near-optimal bounds for linear bandits. A limitation of these results is the assumption of a finite action space. We address this by extending the analysis to settings with infinite and continuous action spaces. Additionally, we specialize our results to bandit problems with expected rewards that are Lipschitz continuous with respect to the action space, deriving a regret bound that explicitly accounts for the complexity of the action space.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
530,194
1909.04497
Equity2Vec: End-to-end Deep Learning Framework for Cross-sectional Asset Pricing
Pricing assets has attracted significant attention from the financial technology community. We observe that existing solutions overlook cross-sectional effects and have not fully leveraged the heterogeneous data sets, leading to sub-optimal performance. To this end, we propose an end-to-end deep learning framework to price the assets. Our framework possesses two main properties: 1) We propose Equity2Vec, a graph-based component that effectively captures both long-term and evolving cross-sectional interactions. 2) The framework simultaneously leverages all the available heterogeneous alpha sources including technical indicators, financial news signals, and cross-sectional signals. Experimental results on datasets from the real-world stock market show that our approach outperforms the existing state-of-the-art approaches. Furthermore, market trading simulations demonstrate that our framework monetizes the signals effectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
144,822
2010.15527
On the robustness of kernel-based pairwise learning
It is shown that many results on the statistical robustness of kernel-based pairwise learning can be derived under basically no assumptions on the input and output spaces. In particular neither moment conditions on the conditional distribution of Y given X = x nor the boundedness of the output space is needed. We obtain results on the existence and boundedness of the influence function and show qualitative robustness of the kernel-based estimator. The present paper generalizes results by Christmann and Zhou (2016) by allowing the prediction function to take two arguments and can thus be applied in a variety of situations such as ranking.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
203,795
2008.01834
Non-Commutative Ring Learning With Errors From Cyclic Algebras
The Learning with Errors (LWE) problem is the fundamental backbone of modern lattice based cryptography, allowing one to establish cryptography on the hardness of well-studied computational problems. However, schemes based on LWE are often impractical, so Ring LWE was introduced as a form of `structured' LWE, trading off a hard to quantify loss of security for an increase in efficiency by working over a well chosen ring. Another popular variant, Module LWE, generalizes this exchange by implementing a module structure over a ring. In this work, we introduce a novel variant of LWE over cyclic algebras (CLWE) to replicate the addition of the ring structure taking LWE to Ring LWE by adding cyclic structure to Module LWE. The proposed construction is both more efficient than Module LWE and conjecturally more secure than Ring LWE, the best of both worlds. We show that the security reductions expected for an LWE problem hold, namely a reduction from certain structured lattice problems to the hardness of the decision variant of the CLWE problem. As a contribution of theoretic interest, we view CLWE as the first variant of Ring LWE which supports non-commutative multiplication operations. This ring structure compares favorably with Module LWE, and naturally allows a larger message space for error correction coding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
190,444
2404.16957
Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges in attributing responsibility and accountability in the event of incidents involving AI-enabled systems. The interconnectivity of these systems and the ethical concerns raised by AI-induced incidents, coupled with uncertainties in AI technology and the absence of corresponding regulations, have made traditional responsibility attribution challenging. To this end, this work proposes a Computational Reflective Equilibrium (CRE) approach to establish a coherent and ethically acceptable responsibility attribution framework for all stakeholders. The computational approach provides a structured analysis that overcomes the limitations of conceptual approaches in dealing with dynamic and multifaceted scenarios, showcasing the framework's explainability, coherence, and adaptivity properties in the responsibility attribution process. We examine the pivotal role of the initial activation level associated with claims in equilibrium computation. Using an AI-assisted medical decision-support system as a case study, we illustrate how different initializations lead to diverse responsibility distributions. The framework offers valuable insights into accountability in AI-induced incidents, facilitating the development of a sustainable and resilient system through continuous monitoring, revision, and reflection.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
449,686
2301.05492
Disentangled Representation for Diversified Recommendations
Accuracy and diversity have long been considered to be two conflicting goals for recommendations. We point out, however, that as the diversity is typically measured by certain pre-selected item attributes, e.g., category as the most popularly employed one, improved diversity can be achieved without sacrificing recommendation accuracy, as long as the diversification respects the user's preference about the pre-selected attributes. This calls for a fine-grained understanding of a user's preferences over items, where one needs to recognize the user's choice is driven by the quality of the item itself, or the pre-selected attributes of the item. In this work, we focus on diversity defined on item categories. We propose a general diversification framework agnostic to the choice of recommendation algorithms. Our solution disentangles the learnt user representation in the recommendation module into category-independent and category-dependent components to differentiate a user's preference over items from two orthogonal perspectives. Experimental results on three benchmark datasets and online A/B test demonstrate the effectiveness of our solution in improving both recommendation accuracy and diversity. In-depth analysis suggests that the improvement is due to our improved modeling of users' categorical preferences and refined ranking within item categories.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
340,361
2405.12234
Joint Prediction Regions for time-series models
Machine Learning algorithms are notorious for providing point predictions but not prediction intervals. There are many applications where one requires confidence in predictions and prediction intervals. Strung together, these intervals give rise to joint prediction regions with the desired significance level. It is an easy task to compute Joint Prediction Regions (JPRs) when the data is IID. However, the task becomes overly difficult when a JPR is needed for time series because of the dependence between the observations. This project aims to implement Wolf and Wunderli's method for constructing JPRs and compare it with other methods (e.g. NP heuristic, Joint Marginals). The method under study is based on bootstrapping and is applied to different datasets (Min Temp, Sunspots), using different predictors (e.g. ARIMA and LSTM). One challenge of applying the method under study is to derive prediction standard errors for the models, as these cannot be obtained analytically. A novel method to estimate prediction standard errors for different predictors is also devised. Finally, the method is applied to a synthetic dataset to find empirical averages and empirical widths, and the results from the Wolf and Wunderli paper are consolidated. The experimental results show a narrowing of width with strong predictors like neural nets, widening of width with increasing forecast horizon H and decreasing significance level alpha, control of the width with parameter k in K-FWE, and loss of information using Joint Marginals.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
455,450
2106.01503
Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
Providing meaningful and actionable explanations to end-users is a fundamental prerequisite for implementing explainable intelligent systems in the real world. Explainability is a situated interaction between a user and the AI system rather than a set of static design principles. The content of explanations is context-dependent and must be defined by evidence about the user and their context. This paper seeks to operationalize this concept by proposing a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users' mental models, (2) the end-users' cognitive process, (3) the user interface, (4) the human-explainer agent, and (5) the agent process. We first define each component of the architecture. Then we present the Abstracted Explanation Space, a modeling tool that aggregates the architecture's components to support designers in systematically aligning explanations with the end-users' work practices, needs, and goals. It guides the specification of what needs to be explained (content - the end-users' mental model), why this explanation is necessary (context - the end-users' cognitive process), how to explain it (format - the human-explainer agent and user interface), and when the explanations should be given. We then exemplify the tool's use in an ongoing case study in the aircraft maintenance domain. Finally, we discuss possible contributions of the tool, known limitations/areas for improvement, and future work to be done.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
238,518
2204.01159
Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons
We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose. Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant, while the extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose. Specifically, we design an auto-encoder based on Vector Neuron Networks, a rotation-equivariant neural network, whose layers we extend to provide translation-equivariance in addition to rotation-equivariance only. The resulting encoder produces pose-invariant shape encoding by construction, enabling our approach to focus on learning a consistent canonical pose for a class of objects. Quantitative and qualitative experiments validate the superior stability and consistency of our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
289,518
1904.11785
Cryptanalysis of a System Based on Twisted Reed-Solomon Codes
Twisted Reed-Solomon (TRS) codes are a family of codes that contains a large number of maximum distance separable codes that are non-equivalent to Reed-Solomon codes. TRS codes were recently proposed as an alternative to Goppa codes for the McEliece code-based cryptosystem, resulting in a potential reduction of key sizes. The use of TRS codes in the McEliece cryptosystem has been motivated by the fact that a large subfamily of TRS codes is resilient to a direct use of known algebraic key-recovery methods. In this paper, an efficient key-recovery attack on the TRS variant that was used in the McEliece cryptosystem is presented. The algorithm exploits a new approach based on recovering the structure of a well-chosen subfield subcode of the public code. It is proved that the attack always succeeds and breaks the system for all practical parameters in $O(n^4)$ field operations. A software implementation of the algorithm retrieves a valid private key from the public key within a few minutes, for parameters claiming a security level of 128 bits. The success of the attack also indicates that, contrary to common beliefs, subfield subcodes of the public code need to be precisely analyzed when proposing a McEliece-type code-based cryptosystem. Finally, the paper discusses an attempt to repair the scheme and a modification of the attack aiming at Gabidulin-Paramonov-Tretjakov cryptosystems based on twisted Gabidulin codes.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
128,947
2309.03179
SLiMe: Segment Like Me
Significant strides have been made using large vision-language models, like Stable Diffusion (SD), for a variety of downstream tasks, including image editing, image correspondence, and 3D shape generation. Inspired by these advancements, we explore leveraging these extensive vision-language models for segmenting images at any desired granularity using as few as one annotated sample by proposing SLiMe. SLiMe frames this problem as an optimization task. Specifically, given a single training image and its segmentation mask, we first extract attention maps, including our novel "weighted accumulated self-attention map", from the SD prior. Then, using the extracted attention maps, the text embeddings of Stable Diffusion are optimized such that each of them learns about a single segmented region from the training image. These learned embeddings then highlight the segmented region in the attention maps, which can then be used to derive the segmentation map. This enables SLiMe to segment any real-world image during inference with the granularity of the segmented region in the training image, using just one example. Moreover, leveraging additional training data when available, i.e. few-shot, improves the performance of SLiMe. We carried out a knowledge-rich set of experiments examining various design factors and showed that SLiMe outperforms other existing one-shot and few-shot segmentation methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
390,294
2302.03788
Toward a Theory of Causation for Interpreting Neural Code Models
Neural Language Models of Code, or Neural Code Models (NCMs), are rapidly progressing from research prototypes to commercial developer tools. As such, understanding the capabilities and limitations of such models is becoming critical. However, the abilities of these models are typically measured using automated metrics that often only reveal a portion of their real-world performance. While, in general, the performance of NCMs appears promising, currently much is unknown about how such models arrive at decisions. To this end, this paper introduces $do_{code}$, a post hoc interpretability method specific to NCMs that is capable of explaining model predictions. $do_{code}$ is based upon causal inference to enable programming language-oriented explanations. While the theoretical underpinnings of $do_{code}$ are extensible to exploring different model properties, we provide a concrete instantiation that aims to mitigate the impact of spurious correlations by grounding explanations of model behavior in properties of programming languages. To demonstrate the practical benefit of $do_{code}$, we illustrate the insights that our framework can provide by performing a case study on two popular deep learning architectures and ten NCMs. The results of this case study illustrate that our studied NCMs are sensitive to changes in code syntax. All our NCMs, except for the BERT-like model, statistically learn to predict tokens related to blocks of code (\eg brackets, parenthesis, semicolon) with less confounding bias as compared to other programming language constructs. These insights demonstrate the potential of $do_{code}$ as a useful method to detect and facilitate the elimination of confounding bias in NCMs.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
344,465
2308.00451
Physics-Driven Spectrum-Consistent Federated Learning for Palmprint Verification
Palmprint as a biometric has gained increasing attention recently due to its discriminative ability and robustness. However, existing methods mainly improve palmprint verification within one spectrum, which is challenging to verify across different spectrums. Additionally, in distributed server-client-based deployment, palmprint verification systems predominantly require clients to transmit private data for model training on the centralized server, thereby raising privacy concerns. To alleviate the above issues, in this paper, we propose a physics-driven spectrum-consistent federated learning method for palmprint verification, dubbed PSFed-Palm. PSFed-Palm draws upon the inherent physical properties of distinct wavelength spectrums, wherein images acquired under similar wavelengths display heightened resemblances. Our approach first partitions clients into short- and long-spectrum groups according to the wavelength range of their local spectrum images. Subsequently, we introduce anchor models for short- and long-spectrum, which constrain the optimization directions of local models associated with long- and short-spectrum images. Specifically, a spectrum-consistent loss that enforces the model parameters and feature representation to align with their corresponding anchor models is designed. Finally, we impose constraints on the local models to ensure their consistency with the global model, effectively preventing model drift. This measure guarantees spectrum consistency while protecting data privacy, as there is no need to share local data. Extensive experiments are conducted to validate the efficacy of our proposed PSFed-Palm approach. The proposed PSFed-Palm demonstrates compelling performance despite only a limited amount of training data. The codes will be released at https://github.com/Zi-YuanYang/PSFed-Palm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
382,942
2406.05350
Exploiting Monotonicity to Design an Adaptive PI Passivity-Based Controller for a Fuel-Cell System
We present a controller for a power electronic system composed of a fuel cell (FC) connected to a boost converter which feeds a resistive load. The controller aims to regulate the output voltage of the converter regardless of the uncertainty of the load. Leveraging the monotonicity feature of the fuel cell polarization curve we prove that the nonlinear system can be controlled by means of a passivity-based proportional-integral approach. We afterward extend the result to an adaptive version, allowing the controller to deal with parameter uncertainties, such as inductor parasitic resistance, load, and FC polarization curve parameters. This adaptive design is based on an indirect control approach with online parameter identification performed by a ``hybrid'' estimator which combines two techniques: the gradient-descent and immersion-and-invariance algorithms. The overall system is proved to be stable with the output voltage regulated to its reference. Experimental results validate our proposal under two real-life scenarios: pulsating load and output voltage reference changes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
462,094
2412.01692
Digital Epidemiology: Leveraging Social Media for Insight into Epilepsy and Mental Health
Social media platforms, particularly Reddit's r/Epilepsy community, offer a unique perspective into the experiences of individuals with epilepsy (PWE) and their caregivers. This study analyzes 57k posts and 533k comments to explore key themes across demographics such as age, gender, and relationships. Our findings highlight significant discussions on epilepsy-related challenges, including depression (with 39.75\% of posts indicating severe symptoms), driving restrictions, workplace concerns, and pregnancy-related issues in women with epilepsy. We introduce a novel engagement metric, F(P), which incorporates post length, sentiment scores, and readability to quantify community interaction. This analysis underscores the importance of integrated care addressing both neurological and mental health challenges faced by PWE. The insights from this study inform strategies for targeted support and awareness interventions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
513,221
2210.16025
Ultra-fast, programmable, and electronics-free soft robots enabled by snapping metacaps
Soft robots have myriad potential applications because of their intrinsically compliant bodies, enabling safe interactions with humans and adaptability to unpredictable environments. However, most of them have limited actuation speeds, require complex control systems, and lack sensing capabilities. To address these challenges, here we geometrically design a class of metacaps whose rich nonlinear mechanical behaviors can be harnessed to create soft robots with unprecedented functionalities. Specifically, we demonstrate a sensor-less metacap gripper that can grasp objects in 3.75 ms upon physical contact and a pneumatically actuated gripper with tunable actuation behaviors that have little dependence on the rate of input. Both grippers can be readily integrated into a robotic platform for practical applications. Furthermore, we demonstrate that the metacap enables propelling of a swimming robot, exhibiting amplified swimming speed as well as untethered, electronics-free swimming with tunable speeds. Our metacaps provide new strategies to design next-generation soft robots that require high transient output energy and are capable of autonomous and electronics-free maneuvering.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
327,186
cs/9705101
Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference
We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network-pruning and computation-caching.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,359
2210.07765
Activity-aware Human Mobility Prediction with Hierarchical Graph Attention Recurrent Network
Human mobility prediction is a fundamental task essential for various applications in urban planning, location-based services and intelligent transportation systems. Existing methods often ignore activity information crucial for reasoning human preferences and routines, or adopt a simplified representation of the dependencies between time, activities and locations. To address these issues, we present Hierarchical Graph Attention Recurrent Network (HGARN) for human mobility prediction. Specifically, we construct a hierarchical graph based on past mobility records and employ a Hierarchical Graph Attention Module to capture complex time-activity-location dependencies. This way, HGARN can learn representations with rich human travel semantics to model user preferences at the global level. We also propose a model-agnostic history-enhanced confidence (MAHEC) label to incorporate each user's individual-level preferences. Finally, we introduce a Temporal Module, which employs recurrent structures to jointly predict users' next activities and their associated locations, with the former used as an auxiliary task to enhance the latter prediction. For model evaluation, we test the performance of HGARN against existing state-of-the-art methods in both the recurring (i.e., returning to a previously visited location) and explorative (i.e., visiting a new location) settings. Overall, HGARN outperforms other baselines significantly in all settings based on two real-world human mobility data benchmarks. These findings confirm the important role that human activities play in determining mobility decisions, illustrating the need to develop activity-aware intelligent transportation systems. Source codes of this study are available at https://github.com/YihongT/HGARN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
323,858
2402.09265
Computational Complexity of Preferred Subset Repairs on Data-Graphs
Preferences are a pivotal component in practical reasoning, especially in tasks that involve decision-making over different options or courses of action that could be pursued. In this work, we focus on repairing and querying inconsistent knowledge bases in the form of graph databases, which involves finding a way to solve conflicts in the knowledge base and considering answers that are entailed from every possible repair, respectively. Without a priori domain knowledge, all possible repairs are equally preferred. Though that may be adequate for some settings, it seems reasonable to establish and exploit some form of preference order among the potential repairs. We study the problem of computing prioritized repairs over graph databases with data values, using a notion of consistency based on GXPath expressions as integrity constraints. We present several preference criteria based on the standard subset repair semantics, incorporating weights, multisets, and set-based priority levels. We show that it is possible to maintain the same computational complexity as in the case where no preference criterion is available for exploitation. Finally, we explore the complexity of consistent query answering in this setting and obtain tight lower and upper bounds for all the preference criteria introduced.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
true
429,442
2008.08005
Reinforcement Learning for Improving Object Detection
The performance of a trained object detection neural network depends heavily on image quality. Generally, images are pre-processed before being fed into the neural network, and domain knowledge about the image dataset is used to choose the pre-processing techniques. In this paper, we introduce an algorithm called ObjectRL to choose the amount of a particular pre-processing to be applied to improve the object detection performance of pre-trained networks. The main motivation for ObjectRL is that an image which looks good to the human eye may not necessarily be the optimal one for a pre-trained object detector to detect objects.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
192,284
2302.09734
Achieving Hierarchy-Free Approximation for Bilevel Programs With Equilibrium Constraints
In this paper, we develop an approximation scheme for solving bilevel programs with equilibrium constraints, which are generally difficult to solve. Among other things, calculating the first-order derivative in such a problem requires differentiation across the hierarchy, which is computationally intensive, if not prohibitive. To bypass the hierarchy, we propose to bound such bilevel programs, equivalent to multiple-followers Stackelberg games, with two new hierarchy-free problems: a $T$-step Cournot game and a $T$-step monopoly model. Since they are standard equilibrium or optimization problems, both can be efficiently solved via first-order methods. Importantly, we show that the bounds provided by these problems -- the upper bound by the $T$-step Cournot game and the lower bound by the $T$-step monopoly model -- can be made arbitrarily tight by increasing the step parameter $T$ for a wide range of problems. We prove that a small $T$ usually suffices under appropriate conditions to reach an approximation acceptable for most practical purposes. Finally, the analytical insights are highlighted through numerical examples.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
346,542
2106.07094
On the Convergence of Differentially Private Federated Learning on Non-Lipschitz Objectives, and with Normalized Client Updates
There is a dearth of convergence results for differentially private federated learning (FL) with non-Lipschitz objective functions (i.e., when gradient norms are not bounded). The primary reason for this is that the clipping operation (i.e., projection onto an $\ell_2$ ball of a fixed radius called the clipping threshold) for bounding the sensitivity of the average update to each client's update introduces bias depending on the clipping threshold and the number of local steps in FL, and analyzing this is not easy. For Lipschitz functions, the Lipschitz constant serves as a trivial clipping threshold with zero bias. However, Lipschitzness does not hold in many practical settings; moreover, verifying it and computing the Lipschitz constant is hard. Thus, the choice of the clipping threshold is non-trivial and requires a lot of tuning in practice. In this paper, we provide the first convergence result for private FL on smooth \textit{convex} objectives \textit{for a general clipping threshold} -- \textit{without assuming Lipschitzness}. We also look at a simpler alternative to clipping (for bounding sensitivity) which is \textit{normalization} -- where we use only a scaled version of the unit vector along the client updates, completely discarding the magnitude information. The resulting normalization-based private FL algorithm is theoretically shown to have better convergence than its clipping-based counterpart on smooth convex functions. We corroborate our theory with synthetic experiments as well as experiments on benchmarking datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
240,768
1611.00960
Adaptive mixed norm optical flow estimation
The pel-recursive computation of 2-D optical flow has been extensively studied in computer vision to estimate motion from image sequences, but it still raises a wealth of issues, such as the treatment of outliers, motion discontinuities and occlusion. It relies on spatio-temporal brightness variations due to motion. Our proposed adaptive regularized approach deals with these issues within a common framework. It relies on the use of a data-driven technique called Mixed Norm (MN) to estimate the best motion vector for a given pixel. In our model, various types of noise can be handled, representing different sources of error. The motion vector estimation takes into consideration local image properties and it results from the minimization of a mixed norm functional with a regularization parameter depending on the kurtosis. This parameter determines the relative importance of the fourth norm and makes the functional convex. The main advantage of the developed procedure is that no knowledge of the noise distribution is necessary. Experiments indicate that this approach provides robust estimates of the optical flow.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
63,304
2005.12566
Hierarchical Fashion Graph Network for Personalized Outfit Recommendation
Fashion outfit recommendation has attracted increasing attention from online shopping services and fashion communities. Distinct from other scenarios (e.g., social networking or content sharing) which recommend a single item (e.g., a friend or picture) to a user, outfit recommendation predicts user preference on a set of well-matched fashion items. Hence, performing high-quality personalized outfit recommendation should satisfy two requirements -- 1) the nice compatibility of fashion items and 2) the consistency with user preference. However, present works focus mainly on one of the requirements and only consider either user-outfit or outfit-item relationships, thereby easily leading to suboptimal representations and limiting the performance. In this work, we unify two tasks, fashion compatibility modeling and personalized outfit recommendation. Towards this end, we develop a new framework, Hierarchical Fashion Graph Network (HFGN), to model relationships among users, items, and outfits simultaneously. In particular, we construct a hierarchical structure upon user-outfit interactions and outfit-item mappings. We then draw inspiration from recent graph neural networks, and employ embedding propagation on this hierarchical graph, so as to aggregate item information into an outfit representation, and then refine a user's representation via his/her historical outfits. Furthermore, we jointly train these two tasks to optimize these representations. To demonstrate the effectiveness of HFGN, we conduct extensive experiments on a benchmark dataset, and HFGN achieves significant improvements over state-of-the-art compatibility matching models like NGNN and outfit recommenders like FHN.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
178,777
2009.12836
Normalization Techniques in Training DNNs: Methodology, Analysis and Application
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs), and have successfully been used in various applications. This paper reviews and comments on the past, present and future of normalization methods in the context of DNN training. We provide a unified picture of the main motivation behind different approaches from the perspective of optimization, and present a taxonomy for understanding the similarities and differences between them. Specifically, we decompose the pipeline of the most representative normalizing activation methods into three components: the normalization area partitioning, normalization operation and normalization representation recovery. In doing so, we provide insight for designing new normalization techniques. Finally, we discuss the current progress in understanding normalization methods, and provide a comprehensive review of the applications of normalization for particular tasks, in which it can effectively solve the key issues.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
197,547
2311.07562
GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation
We present MM-Navigator, a GPT-4V-based agent for the smartphone graphical user interface (GUI) navigation task. MM-Navigator can interact with a smartphone screen as human users, and determine subsequent actions to fulfill given instructions. Our findings demonstrate that large multimodal models (LMMs), specifically GPT-4V, excel in zero-shot GUI navigation through its advanced screen interpretation, action reasoning, and precise action localization capabilities. We first benchmark MM-Navigator on our collected iOS screen dataset. According to human assessments, the system exhibited a 91\% accuracy rate in generating reasonable action descriptions and a 75\% accuracy rate in executing the correct actions for single-step instructions on iOS. Additionally, we evaluate the model on a subset of an Android screen navigation dataset, where the model outperforms previous GUI navigators in a zero-shot fashion. Our benchmark and detailed analyses aim to lay a robust groundwork for future research into the GUI navigation task. The project page is at https://github.com/zzxslp/MM-Navigator.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
407,372
1906.02534
Contextual Relabelling of Detected Objects
Contextual information, such as the co-occurrence of objects and the spatial and relative size among objects provides deep and complex information about scenes. It also can play an important role in improving object detection. In this work, we present two contextual models (rescoring and re-labeling models) that leverage contextual information (16 contextual relationships are applied in this paper) to enhance the state-of-the-art RCNN-based object detection (Faster RCNN). We experimentally demonstrate that our models lead to enhancement in detection performance using the most common dataset used in this field (MSCOCO).
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
134,096
2211.03174
Wheel-SLAM: Simultaneous Localization and Terrain Mapping Using One Wheel-mounted IMU
A reliable pose estimator robust to environmental disturbances is desirable for mobile robots. To this end, inertial measurement units (IMUs) play an important role because they can perceive the full motion state of the vehicle independently. However, it suffers from accumulative error due to inherent noise and bias instability, especially for low-cost sensors. In our previous studies on Wheel-INS \cite{niu2021, wu2021}, we proposed to limit the error drift of the pure inertial navigation system (INS) by mounting an IMU to the wheel of the robot to take advantage of rotation modulation. However, Wheel-INS still drifted over a long period of time due to the lack of external correction signals. In this letter, we propose to exploit the environmental perception ability of Wheel-INS to achieve simultaneous localization and mapping (SLAM) with only one IMU. To be specific, we use the road bank angles (mirrored by the robot roll angles estimated by Wheel-INS) as terrain features to enable the loop closure with a Rao-Blackwellized particle filter. The road bank angle is sampled and stored according to the robot position in the grid maps maintained by the particles. The weights of the particles are updated according to the difference between the currently estimated roll sequence and the terrain map. Field experiments suggest the feasibility of the idea to perform SLAM in Wheel-INS using the robot roll angle estimates. In addition, the positioning accuracy is improved significantly (more than 30\%) over Wheel-INS. The source code of our implementation is publicly available (https://github.com/i2Nav-WHU/Wheel-SLAM).
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
328,852
2009.14628
Meta Partial Benders Decomposition for the Logistics Service Network Design Problem
Supply chain transportation operations often account for a large proportion of product total cost to market. Such operations can be optimized by solving the Logistics Service Network Design Problem (LSNDP), wherein a logistics service provider seeks to cost-effectively source and fulfill customer demands of products within a multi-echelon distribution network. However, many industrial settings yield instances of the LSNDP that are too large to be solved in reasonable run-times by off-the-shelf optimization solvers. We introduce an exact Benders decomposition algorithm based on partial decompositions that strengthen the master problem with information derived from aggregating subproblem data. More specifically, the proposed Meta Partial Benders Decomposition intelligently switches from one master problem to another by changing both the amount of subproblem information to include in the master as well as how it is aggregated. Through an extensive computational study, we show that the approach outperforms existing benchmark methods and we demonstrate the benefits of dynamically refining the master problem in the course of a partial Benders decomposition-based scheme.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
198,094
1611.04399
Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications
We state a combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph. This problem offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking. Conceptually, the problem we state generalizes the unconstrained integer quadratic program and the minimum cost lifted multicut problem, both of which are NP-hard. In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonously to a local optimum, offering a feasible solution at any time. To demonstrate their effectiveness in tackling computer vision tasks, we apply these algorithms to instances of the problem that we construct from published data, using published algorithms. We report state-of-the-art application-specific accuracy for the three above-mentioned applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
63,840
1710.09657
Segment Parameter Labelling in MCMC Mean-Shift Change Detection
This work addresses the problem of segmentation in time series data with respect to a statistical parameter of interest in Bayesian models. It is common to assume that the parameters are distinct within each segment. As such, many Bayesian change point detection models do not exploit the segment parameter patterns, which can improve performance. This work proposes a Bayesian mean-shift change point detection algorithm that makes use of repetition in segment parameters, by introducing segment class labels that utilise a Dirichlet process prior. The performance of the proposed approach was assessed on both synthetic and real world data, highlighting the enhanced performance when using parameter labelling.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
83,236
2306.14399
Mutual Query Network for Multi-Modal Product Image Segmentation
Product image segmentation is vital in e-commerce. Most existing methods extract the product image foreground only based on the visual modality, making it difficult to distinguish irrelevant products. As product titles contain abundant appearance information and provide complementary cues for product image segmentation, we propose a mutual query network to segment products based on both visual and linguistic modalities. First, we design a language query vision module to obtain the response of language description in image areas, thus aligning the visual and linguistic representations across modalities. Then, a vision query language module utilizes the correlation between visual and linguistic modalities to filter the product title and effectively suppress the content irrelevant to the vision in the title. To promote the research in this field, we also construct a Multi-Modal Product Segmentation dataset (MMPS), which contains 30,000 images and corresponding titles. The proposed method significantly outperforms the state-of-the-art methods on MMPS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
375,668
2406.06776
SeeFar: Satellite Agnostic Multi-Resolution Dataset for Geospatial Foundation Models
SeeFar is an evolving collection of multi-resolution satellite images from public and commercial satellites. We specifically curated this dataset for training geospatial foundation models, unconstrained by satellite type. In recent years, advances in technology have made satellite imagery more accessible than ever. More earth-observing satellites have been launched in the last five years than in the previous fifty. Modern commercial satellites now offer up to 100 times the spatial resolution of public access satellites. However, the high cost and limited historical availability of commercial satellite imagery is a barrier to the training of foundational models, impacting what images can be used during inference. The SeeFar dataset represents a step towards training models that are satellite-agnostic by combining multi-resolution commercial and public access pre-processed images. This will enable users to utilize historical data alongside higher-resolution, more expensive satellite imagery, offering greater flexibility during inference. To achieve this, we describe a process for standardizing data from diverse satellite sources, normalizing different data formats, and aligning spectral bands to enhance interoperability. The SeeFar dataset includes images at a resolution of 384x384 pixels, spanning four spectral bands (Blue, Green, Red, and Near-Infrared) and expanding spatial resolutions (starting with 30, 10, 1.5, and 1.0 meters), all in cloud-optimized GeoTIFF format. It also provides consistent and comprehensive metadata to enhance data transparency and reliability. By aggregating data from multiple sources, SeeFar makes processed and consistent satellite data accessible to a wider range of users - from researchers to policymakers - fostering competition and innovation in satellite imagery analysis. The dataset is available at \url{coastalcarbon.ai/seefar}.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
462,763
1711.10185
Hyper-dimensional computing for a visual question-answering system that is trainable end-to-end
In this work we propose a system for visual question answering. Our architecture is composed of two parts, the first part creates the logical knowledge base given the image. The second part evaluates questions against the knowledge base. Differently from previous work, the knowledge base is represented using hyper-dimensional computing. This choice has the advantage that all the operations in the system, namely creating the knowledge base and evaluating the questions against it, are differentiable, thereby making the system easily trainable in an end-to-end fashion.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
85,544
1703.04364
Deep Learning for Skin Lesion Classification
Melanoma, a malignant form of skin cancer is very threatening to life. Diagnosis of melanoma at an earlier stage is highly needed as it has a very high cure rate. Benign and malignant forms of skin cancer can be detected by analyzing the lesions present on the surface of the skin using dermoscopic images. In this work, an automated skin lesion detection system has been developed which learns the representation of the image using Google's pretrained CNN model known as Inception-v3 \cite{cnn}. After obtaining the representation vector for our input dermoscopic images we have trained two layer feed forward neural network to classify the images as malignant or benign. The system also classifies the images based on the cause of the cancer either due to melanocytic or non-melanocytic cells using a different neural network. These classification tasks are part of the challenge organized by International Skin Imaging Collaboration (ISIC) 2017. Our system learns to classify the images based on the model built using the training images given in the challenge and the experimental results were evaluated using validation and test sets. Our system has achieved an overall accuracy of 65.8\% for the validation set.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,886
1606.01323
Improving Coreference Resolution by Learning Entity-Level Distributed Representations
A long-standing challenge in coreference resolution has been the incorporation of entity-level information - features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
56,792
1902.08915
Bi-Skip: A Motion Deblurring Network Using Self-paced Learning
A fast and effective motion deblurring method has great application value in real life. This work presents an innovative approach in which self-paced learning is combined with a GAN to deblur images. First, we explain that a proper generator can be used as a deep prior and point out that the solution for a pixel-based loss is not the same as the one for a perception-based loss. By using these ideas as starting points, a Bi-Skip network is proposed to improve the generating ability and a bi-level loss is adopted to solve the problem that common conditions are non-identical. Second, considering that the complex motion blur will perturb the network in the training process, a self-paced mechanism is adopted to enhance the robustness of the network. Through extensive evaluations on both qualitative and quantitative criteria, it is demonstrated that our approach has a competitive advantage over state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
122,296
2203.01852
Identification in Tree-shaped Linear Structural Causal Models
Linear structural equation models represent direct causal effects as directed edges and confounding factors as bidirected edges. An open problem is to identify the causal parameters from correlations between the nodes. We investigate models, whose directed component forms a tree, and show that there, besides classical instrumental variables, missing cycles of bidirected edges can be used to identify the model. They can yield systems of quadratic equations that we explicitly solve to obtain one or two solutions for the causal parameters of adjacent directed edges. We show how multiple missing cycles can be combined to obtain a unique solution. This results in an algorithm that can identify instances that previously required approaches based on Gr\"obner bases, which have doubly-exponential time complexity in the number of structural parameters.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
283,532
2404.00498
94% on CIFAR-10 in 3.29 Seconds on a Single GPU
CIFAR-10 is among the most widely used datasets in machine learning, facilitating thousands of research projects per year. To accelerate research and reduce the cost of experiments, we introduce training methods for CIFAR-10 which reach 94% accuracy in 3.29 seconds, 95% in 10.4 seconds, and 96% in 46.3 seconds, when run on a single NVIDIA A100 GPU. As one factor contributing to these training speeds, we propose a derandomized variant of horizontal flipping augmentation, which we show improves over the standard method in every case where flipping is beneficial over no flipping at all. Our code is released at https://github.com/KellerJordan/cifar10-airbench.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
442,958
2412.01601
Arabic Handwritten Document OCR Solution with Binarization and Adaptive Scale Fusion Detection
The problem of converting images of text into plain text is a widely researched topic in both academia and industry. Arabic Handwritten Text Recognition (AHTR) poses additional challenges due to diverse handwriting styles and limited labeled data. In this paper we present a complete OCR pipeline that starts with line segmentation using Differentiable Binarization and Adaptive Scale Fusion techniques to ensure accurate detection of text lines. Following segmentation, a CNN-BiLSTM-CTC architecture is applied to recognize characters. Our system, trained on the Arabic Multi-Fonts Dataset (AMFDS), achieves a Character Recognition Rate (CRR) of 99.20% and a Word Recognition Rate (WRR) of 93.75% on single-word samples containing 7 to 10 characters, along with a CRR of 83.76% for sentences. These results demonstrate the system's strong performance in handling Arabic scripts, establishing a new benchmark for AHTR systems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,189
1808.05779
Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
Reducing bit-widths of activations and weights of deep networks makes it efficient to compute and store them in memory, which is crucial in their deployments to resource-limited devices, such as mobile phones. However, decreasing bit-widths with quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the quantization intervals and obtain their optimal values by directly minimizing the task loss of the network. This quantization-interval-learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks with bit-width as low as 4-bit and minimize the accuracy degeneration with further bit-width reduction (i.e., 3 and 2-bit). Moreover, our quantizer can be trained on a heterogeneous dataset, and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on ImageNet dataset with various network architectures such as ResNet-18, -34 and AlexNet, on which it outperforms existing methods to achieve the state-of-the-art accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
105,414
2301.12231
Rateless Autoencoder Codes: Trading off Decoding Delay and Reliability
Most of today's communication systems are designed to target reliable message recovery after receiving the entire encoded message (codeword). However, in many practical scenarios, the transmission process may be interrupted before receiving the complete codeword. This paper proposes a novel rateless autoencoder (AE)-based code design suitable for decoding the transmitted message before the noisy codeword is fully received. Using particular dropout strategies applied during the training process, rateless AE codes allow to trade off between decoding delay and reliability, providing a graceful improvement of the latter with each additionally received codeword symbol. The proposed rateless AEs significantly outperform the conventional AE designs for scenarios where it is desirable to trade off reliability for lower decoding delay.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
342,453
2006.07361
Gaussian Processes on Graphs via Spectral Kernel Learning
We propose a graph spectrum-based Gaussian process for prediction of signals defined on nodes of the graph. The model is designed to capture various graph signal structures through a highly adaptive kernel that incorporates a flexible polynomial function in the graph spectral domain. Unlike most existing approaches, we propose to learn such a spectral kernel, where the polynomial setup enables learning without the need for eigen-decomposition of the graph Laplacian. In addition, this kernel has the interpretability of graph filtering achieved by a bespoke maximum likelihood learning algorithm that enforces the positivity of the spectrum. We demonstrate the interpretability of the model in synthetic experiments from which we show the various ground truth spectral filters can be accurately recovered, and the adaptability translates to superior performances in the prediction of real-world graph data of various characteristics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,774
1610.02060
After Sandy Hook Elementary: A Year in the Gun Control Debate on Twitter
The mass shooting at Sandy Hook elementary school on December 14, 2012 catalyzed a year of active debate and legislation on gun control in the United States. Social media hosted an active public discussion where people expressed their support and opposition to a variety of issues surrounding gun legislation. In this paper, we show how a content-based analysis of Twitter data can provide insights and understanding into this debate. We estimate the relative support and opposition to gun control measures, along with a topic analysis of each camp by analyzing over 70 million gun-related tweets from 2013. We focus on spikes in conversation surrounding major events related to guns throughout the year. Our general approach can be applied to other important public health and political issues to analyze the prevalence and nature of public opinion.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
62,050
2401.10272
Multi-Source Collaborative Gradient Discrepancy Minimization for Federated Domain Generalization
Federated Domain Generalization aims to learn a domain-invariant model from multiple decentralized source domains for deployment on unseen target domain. Due to privacy concerns, the data from different source domains are kept isolated, which poses challenges in bridging the domain gap. To address this issue, we propose a Multi-source Collaborative Gradient Discrepancy Minimization (MCGDM) method for federated domain generalization. Specifically, we propose intra-domain gradient matching between the original images and augmented images to avoid overfitting the domain-specific information within isolated domains. Additionally, we propose inter-domain gradient matching with the collaboration of other domains, which can further reduce the domain shift across decentralized domains. Combining intra-domain and inter-domain gradient matching, our method enables the learned model to generalize well on unseen domains. Furthermore, our method can be extended to the federated domain adaptation task by fine-tuning the target model on the pseudo-labeled target domain. The extensive experiments on federated domain generalization and adaptation indicate that our method outperforms the state-of-the-art methods significantly.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
422,565
2306.00571
Robust Exponential Stability and Invariance Guarantees with General Dynamic O'Shea-Zames-Falb Multipliers
We propose novel time-domain dynamic integral quadratic constraints with a terminal cost for exponentially weighted slope-restricted gradients of not necessarily convex functions. This extends recent results for subdifferentials of convex functions and their link to so-called O'Shea-Zames-Falb multipliers. The benefit of merging time-domain and frequency-domain techniques is demonstrated for linear saturated systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
370,070
2009.14468
Learning Image-adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-time
Recent years have witnessed the increasing popularity of learning based methods to enhance the color and tone of photos. However, many existing photo enhancement methods either deliver unsatisfactory results or consume too much computational and memory resources, hindering their application to high-resolution images (usually with more than 12 megapixels) in practice. In this paper, we learn image-adaptive 3-dimensional lookup tables (3D LUTs) to achieve fast and robust photo enhancement. 3D LUTs are widely used for manipulating color and tone of photos, but they are usually manually tuned and fixed in camera imaging pipeline or photo editing tools. We, for the first time to our best knowledge, propose to learn 3D LUTs from annotated data using pairwise or unpaired learning. More importantly, our learned 3D LUT is image-adaptive for flexible photo enhancement. We learn multiple basis 3D LUTs and a small convolutional neural network (CNN) simultaneously in an end-to-end manner. The small CNN works on the down-sampled version of the input image to predict content-dependent weights to fuse the multiple basis 3D LUTs into an image-adaptive one, which is employed to transform the color and tone of source images efficiently. Our model contains less than 600K parameters and takes less than 2 ms to process an image of 4K resolution using one Titan RTX GPU. While being highly efficient, our model also outperforms the state-of-the-art photo enhancement methods by a large margin in terms of PSNR, SSIM and a color difference metric on two publicly available benchmark datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
198,046
1711.02679
Neural Variational Inference and Learning in Undirected Graphical Models
Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed-up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
84,097
2307.15912
A Method to Speed Up Convergence of Iterative Learning Control for High Precision Repetitive Motions
Various spacecraft have sensors that repeatedly perform a prescribed scanning maneuver, and one may want high precision. Iterative Learning Control (ILC) records previous run tracking error, adjusts the next run command, aiming for zero tracking error in the real world, not our model of the world. In response to a command, feedback control systems perform a convolution sum over all commands given since time zero, with a weighting factor getting smaller going further back in time. ILC learns to eliminate this error through iterations in hardware. It aims to find the command that causes the real world system to actually follow the desired command. The topic of this paper considers the possibility of learning to make our model of the world produce less error. This can be done easily and quickly numerically, and the result then used as a starting point for the ILC iterations performed in hardware. The point is to reduce the number of hardware iterations, converging more quickly to closely tracking the desired trajectory in the world. How to decide how many iterations with our model to make before switching to hardware iterations is presented, with numerical simulations performed to illustrate the approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
382,410
2207.06456
Graph Neural Network Bandits
We consider the bandit optimization problem with the reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains, and to graphs with many nodes. We resolve these challenges by embedding the permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, which we also provide a bound for. While the reward function depends on all $N$ node features, our guarantees are independent of the number of graph nodes $N$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
307,888
2106.13880
Self-paced Principal Component Analysis
Principal Component Analysis (PCA) has been widely used for dimensionality reduction and feature extraction. Robust PCA (RPCA), under different robust distance metrics, such as the l1-norm and l2,p-norm, can deal with noise or outliers to some extent. However, real-world data may display structures that can not be fully captured by these simple functions. In addition, existing methods treat complex and simple samples equally. By contrast, a learning pattern typically adopted by human beings is to learn from simple to complex and less to more. Based on this principle, we propose a novel method called Self-paced PCA (SPCA) to further reduce the effect of noise and outliers. Notably, the complexity of each sample is calculated at the beginning of each iteration in order to integrate samples from simple to more complex into training. Based on an alternating optimization, SPCA finds an optimal projection matrix and filters out outliers iteratively. Theoretical analysis is presented to show the rationality of SPCA. Extensive experiments on popular data sets demonstrate that the proposed method can improve the state-of-the-art results considerably.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
243,214