id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1906.04818 | Medium-Term Load Forecasting Using Support Vector Regression, Feature Selection, and Symbiotic Organism Search Optimization | Accurate load forecasting has always been an indispensable part of the operation and planning of power systems. Among the different forecasting time horizons, while short-term load forecasting (STLF) and long-term load forecasting (LTLF) have benefited from accurate predictors and probabilistic forecasting, respectively, medium-term load forecasting (MTLF) demands more attention due to its vital role in power system operation and planning, such as optimal scheduling of generation units, robust planning for customer service, and economic supply. In this study, a hybrid method composed of Support Vector Regression (SVR) and Symbiotic Organism Search Optimization (SOSO) is proposed for MTLF. In the proposed forecasting model, SVR is the main part of the forecasting algorithm, while SOSO is embedded into it to optimize the parameters of SVR. In addition, a minimum redundancy-maximum relevance feature selection algorithm is used in the preprocessing of the input data. The proposed method is tested on the EUNITE competition dataset to demonstrate its performance, and it is compared with previous works to show the competitiveness of our method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 134,845 |
2204.07770 | UniGDD: A Unified Generative Framework for Goal-Oriented Document-Grounded Dialogue | Goal-oriented document-grounded dialogue aims to respond to the user query based on the dialogue context and a supporting document. Existing studies tackle this problem by decomposing it into two sub-tasks: knowledge identification and response generation. However, such pipeline methods inevitably suffer from error propagation. This paper proposes to unify these two sub-tasks by sequentially generating the grounding knowledge and the response. We further develop a prompt-connected multi-task learning strategy to model the characteristics and connections of the different tasks, and we introduce linear temperature scheduling to reduce the negative effect of irrelevant document information. Experimental results demonstrate the effectiveness of our framework. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 291,840 |
2206.00845 | Hyperspherical Consistency Regularization | Recent advances in contrastive learning have enabled diverse applications across various semi-supervised fields. Jointly training supervised learning and unsupervised learning with a shared feature encoder has become a common scheme. Though it benefits from both the feature-dependent information of self-supervised learning and the label-dependent information of supervised learning, this scheme still suffers from classifier bias. In this work, we systematically explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning. We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels. Specifically, HCR first projects logits from the classifier and feature projections from the projection head onto their respective hyperspheres, then enforces data points on the hyperspheres to have similar structures by minimizing the binary cross entropy of pairwise-distance similarity metrics. Extensive experiments on semi-supervised and weakly-supervised learning demonstrate the effectiveness of our method, showing superior performance with HCR. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 300,279 |
2402.12406 | Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation | Data-free knowledge distillation (DFKD) aims to distill pretrained knowledge into a student model with the help of a generator, without using the original data. In such data-free scenarios, achieving stable DFKD performance is essential due to the unavailability of validation data. Unfortunately, we find that existing DFKD methods are quite sensitive to different teacher models, occasionally showing catastrophic failures of distillation even when using well-trained teacher models. Our observation is that the generator in DFKD is not always guaranteed to produce precise yet diverse samples under the existing representative strategy of minimizing both class-prior and adversarial losses. Through our empirical study, we focus on the fact that the class-prior not only decreases the diversity of generated samples, but also fails to fully prevent the generation of unexpectedly low-quality samples, depending on the teacher model. In this paper, we propose the teacher-agnostic data-free knowledge distillation (TA-DFKD) method, with the goal of more robust and stable performance regardless of teacher models. Our basic idea is to assign the teacher model the role of a lenient expert that evaluates samples, rather than a strict supervisor that enforces its class-prior on the generator. Specifically, we design a sample selection approach that takes only clean samples verified by the teacher model, without imposing restrictions on the power of generating diverse samples. Through extensive experiments, we show that our method achieves both robustness and training stability across various teacher models, while outperforming existing DFKD methods. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 430,838 |
2406.02464 | Meta-Learners for Partially-Identified Treatment Effects Across Multiple Environments | Estimating the conditional average treatment effect (CATE) from observational data is relevant for many applications such as personalized medicine. Here, we focus on the widespread setting where the observational data come from multiple environments, such as different hospitals, physicians, or countries. Furthermore, we allow for violations of standard causal assumptions, namely, overlap within the environments and unconfoundedness. To this end, we move away from point identification and focus on partial identification. Specifically, we show that current assumptions from the literature on multiple environments allow us to interpret the environment as an instrumental variable (IV). This allows us to adapt bounds from the IV literature for partial identification of CATE by leveraging treatment assignment mechanisms across environments. Then, we propose different model-agnostic learners (so-called meta-learners) to estimate the bounds that can be used in combination with arbitrary machine learning models. We further demonstrate the effectiveness of our meta-learners across various experiments using both simulated and real-world data. Finally, we discuss the applicability of our meta-learners to partial identification in instrumental variable settings, such as randomized controlled trials with non-compliance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 460,777 |
2302.03026 | Sampling-Based Accuracy Testing of Posterior Estimators for General Inference | Parameter inference, i.e. inferring the posterior distribution of the parameters of a statistical model given some data, is a central problem to many scientific disciplines. Generative models can be used as an alternative to Markov Chain Monte Carlo methods for conducting posterior inference, both in likelihood-based and simulation-based problems. However, assessing the accuracy of posteriors encoded in generative models is not straightforward. In this paper, we introduce `Tests of Accuracy with Random Points' (TARP) coverage testing as a method to estimate coverage probabilities of generative posterior estimators. Our method differs from previously-existing coverage-based methods, which require posterior evaluations. We prove that our approach is necessary and sufficient to show that a posterior estimator is accurate. We demonstrate the method on a variety of synthetic examples, and show that TARP can be used to test the results of posterior inference analyses in high-dimensional spaces. We also show that our method can detect inaccurate inferences in cases where existing methods fail. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 344,199 |
1608.05094 | Tolerant Compressed Sensing With Partially Coherent Sensing Matrices | Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, sensing matrices whose columns are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection, and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, we suggest that for this type of tolerant recovery, coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,919 |
2211.12660 | The Impact of Generative AI on the Future of Visual Content Marketing | In today's marketing landscape, visually appealing content is a necessity. Visual material has become an essential area of focus for every company as a result of the widespread availability of mass-communication devices and continued advances in visual technology. Similarly, artificial intelligence is gaining ground and is proving to be the most revolutionary technological advancement thus far. The integration of visual content with artificial intelligence is the key to acquiring and retaining loyal customers; its absence from a company's overarching marketing strategy raises a red flag that could ultimately result in a smaller market share for that company. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 332,187 |
2411.12939 | Stabilization of Switched Affine Systems With Dwell-Time Constraint | This paper addresses the problem of stabilization of switched affine systems under a dwell-time constraint, giving guarantees on the bound of the quadratic cost associated with the proposed state switching control law. Specifically, two switching rules are presented, relying on the solution of differential Lyapunov inequalities and Lyapunov-Metzler inequalities, from which the stability conditions are expressed. The first one allows one to regulate the state of linear switched systems to zero, whereas the second one is designed for switched affine systems, proving practical stability of the origin. In both cases, the determination of a guaranteed cost associated with each control strategy is shown. In the cases of linear and affine systems, the existence of a solution for the Lyapunov-Metzler condition is discussed, and guidelines for the selection of a solution ensuring suitable performance of the system evolution are provided. The theoretical results are finally assessed by means of three examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 509,603 |
2109.14591 | Combining Human Predictions with Model Probabilities via Confusion Matrices and Calibration | An increasingly common use case for machine learning models is augmenting the abilities of human decision makers. For classification tasks where neither the human nor the model is perfectly accurate, a key step in obtaining high performance is combining their individual predictions in a manner that leverages their relative strengths. In this work, we develop a set of algorithms that combine the probabilistic output of a model with the class-level output of a human. We show theoretically that the accuracy of our combination model is driven not only by the individual human and model accuracies, but also by the model's confidence. Empirical results on image classification with CIFAR-10 and a subset of ImageNet demonstrate that such human-model combinations consistently have higher accuracies than the model or human alone, and that the parameters of the combination method can be estimated effectively with as few as ten labeled datapoints. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 258,006 |
2212.02469 | One-shot Implicit Animatable Avatars with Model-based Priors | Existing neural rendering methods for creating human avatars typically either require dense input signals such as video or multi-view images, or leverage a learned prior from large-scale specific 3D human datasets such that reconstruction can be performed with sparse-view inputs. Most of these methods fail to achieve realistic reconstruction when only a single image is available. To enable the data-efficient creation of realistic animatable 3D humans, we propose ELICIT, a novel method for learning human-specific neural radiance fields from a single image. Inspired by the fact that humans can effortlessly estimate the body geometry and imagine full-body clothing from a single image, we leverage two priors in ELICIT: 3D geometry prior and visual semantic prior. Specifically, ELICIT utilizes the 3D body shape geometry prior from a skinned vertex-based template model (i.e., SMPL) and implements the visual clothing semantic prior with the CLIP-based pretrained models. Both priors are used to jointly guide the optimization for creating plausible content in the invisible areas. Taking advantage of the CLIP models, ELICIT can use text descriptions to generate text-conditioned unseen regions. In order to further improve visual details, we propose a segmentation-based sampling strategy that locally refines different parts of the avatar. Comprehensive evaluations on multiple popular benchmarks, including ZJU-MoCAP, Human3.6M, and DeepFashion, show that ELICIT has outperformed strong baseline methods of avatar creation when only a single image is available. The code is public for research purposes at https://huangyangyi.github.io/ELICIT/. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 334,796 |
2212.05409 | Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages | Building Natural Language Understanding (NLU) capabilities for Indic languages, which collectively have more than one billion speakers, is absolutely crucial. In this work, we aim to improve the NLU capabilities of Indic languages by making contributions along 3 important axes: (i) monolingual corpora, (ii) NLU test sets, and (iii) multilingual LLMs focusing on Indic languages. Specifically, we curate the largest monolingual corpora, IndicCorp, with 20.9B tokens covering 24 languages from 4 language families, a 2.3x increase over prior work, while supporting 12 additional languages. Next, we create a human-supervised benchmark, IndicXTREME, consisting of nine diverse NLU tasks covering 20 languages. Across languages and tasks, IndicXTREME contains a total of 105 evaluation sets, of which 52 are new contributions to the literature. To the best of our knowledge, this is the first effort towards creating a standard benchmark for Indic languages that aims to test the multilingual zero-shot capabilities of pretrained language models. Finally, we train IndicBERT v2, a state-of-the-art model supporting all the languages. Averaged across languages and tasks, the model achieves an absolute improvement of 2 points over a strong baseline. The data and models are available at https://github.com/AI4Bharat/IndicBERT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 335,782 |
2008.06902 | Modeling "Equitable and Sustainable Well-being" (BES) using Bayesian Networks: A Case Study of the Italian regions | Measurement of well-being has been a highly debated topic since the end of the last century. While some specific aspects are still open issues, a multidimensional approach as well as the construction of shared and well-rooted systems of indicators are now accepted as the main route to measure this complex phenomenon. A meaningful effort, in this direction, is that of the Italian "Equitable and Sustainable Well-being" (BES) system of indicators, developed by the Italian National Institute of Statistics (ISTAT) and the National Council for Economics and Labour (CNEL). The BES framework comprises a number of atomic indicators measured yearly at the regional level and reflecting the different domains of well-being (e.g. Health, Education, Work & Life Balance, Environment,...). In this work we aim at dealing with the multidimensionality of the BES system of indicators and try to answer three main research questions: I) What is the structure of the relationships among the BES atomic indicators; II) What is the structure of the relationships among the BES domains; III) To what extent the structure of the relationships reflects the current BES theoretical framework. We address these questions by implementing Bayesian Networks (BNs), a widely accepted class of multivariate statistical models, particularly suitable for handling reasoning with uncertainty. Implementation of a BN results in a set of nodes and a set of conditional independence statements that provide an effective tool to explore associations in a system of variables. In this work, we also suggest two strategies for encoding prior knowledge in the BN estimating algorithm so that the BES theoretical framework can be represented into the network. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 191,929 |
cs/9603104 | Active Learning with Statistical Models | For many types of machine learning algorithms, one can compute the statistically `optimal' way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,333 |
1402.6016 | Incremental Redundancy, Fountain Codes and Advanced Topics | This document is written in order to establish a common base ground on which the majority of the relevant research about linear fountain codes can be analyzed and compared. To the best of my knowledge, there is no unified approach that outlines and compares most of the published linear fountain codes in a single and self-contained framework. Writing this document has not only resulted in a review of the theoretical fundamentals of efficient coding techniques for incremental redundancy and linear fountain coding, but has also produced a comprehensive reference document for me and, hopefully, for many other graduate students who would like some background before pursuing research on fountain codes and their various applications. Some background in information, coding, graph, and probability theory is expected. Although various aspects of this topic and much other relevant research are deliberately left out, I still hope that this document shall serve researchers' needs well. I have also included several warm-up exercises. The presentation style is usually informal and the presented material is not necessarily rigorous. There are many spots in the text that are the product of my coauthors and myself, some of which have not been published yet. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 31,139 |
2002.09284 | On The Reasons Behind Decisions | Recent work has shown that some common machine learning classifiers can be compiled into Boolean circuits that have the same input-output behavior. We present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. We define notions such as sufficient, necessary and complete reasons behind decisions, in addition to classifier and decision bias. We show how these notions can be used to evaluate counterfactual statements such as "a decision will stick even if ... because ... ." We present efficient algorithms for computing these notions, which are based on new advances on tractable Boolean circuits, and illustrate them using a case study. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 165,024 |
1803.03169 | An Enabling Waveform for 5G - QAM-FBMC: Initial Analysis | In this paper, we identify the challenges and requirements for the waveform design of fifth-generation mobile communication networks (5G) and compare Orthogonal Frequency-Division Multiplexing (OFDM) based waveforms with Filter Bank Multicarrier (FBMC) based ones. Recently it has been shown that Quadrature Amplitude Modulation (QAM) transmission and reception can be enabled in FBMC by using multiple prototype filters, resulting in a new waveform: QAM-FBMC. Here, the transceiver architecture and signal model of QAM-FBMC are presented, and channel estimation error and RF impairments, e.g., phase noise, are modeled. In addition, an initial evaluation is made in terms of out-of-band (OOB) emission and complexity. The simulation results show that QAM-FBMC can achieve the same BER performance as cyclic-prefix (CP) OFDM without the spectrum-efficiency reduction due to the addition of the CP. Different equalization schemes are evaluated and the effect of channel estimation error is investigated. Moreover, the effects of phase noise are evaluated and QAM-FBMC is shown to be robust to phase noise. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 92,199 |
2312.01947 | Maximising Quantum-Computing Expressive Power through Randomised Circuits | In the noisy intermediate-scale quantum era, variational quantum algorithms (VQAs) have emerged as a promising avenue to obtain quantum advantage. However, the success of VQAs depends on the expressive power of parameterised quantum circuits, which is constrained by the limited gate number and the presence of barren plateaus. In this work, we propose and numerically demonstrate a novel approach for VQAs, utilizing randomised quantum circuits to generate the variational wavefunction. We parameterize the distribution function of these random circuits using artificial neural networks and optimize it to find the solution. This random-circuit approach presents a trade-off between the expressive power of the variational wavefunction and time cost, in terms of the sampling cost of quantum circuits. Given a fixed gate number, we can systematically increase the expressive power by extending the quantum-computing time. With a sufficiently large permissible time cost, the variational wavefunction can approximate any quantum state with arbitrary accuracy. Furthermore, we establish explicit relationships between expressive power, time cost, and gate number for variational quantum eigensolvers. These results highlight the promising potential of the random-circuit approach in achieving a high expressive power in quantum computing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 412,639 |
2212.02168 | Video Games as a Corpus: Sentiment Analysis using Fallout New Vegas Dialog | We present a method for extracting a multilingual sentiment-annotated dialog data set from Fallout New Vegas. The game developers have preannotated every line of dialog in the game with one of 8 different sentiments: *anger, disgust, fear, happy, neutral, pained, sad* and *surprised*. The game has been translated into English, Spanish, German, French and Italian. We conduct experiments on multilingual, multilabel sentiment analysis on the extracted data set using multilingual BERT, XLMRoBERTa and language-specific BERT models. In our experiments, multilingual BERT outperformed XLMRoBERTa for most of the languages; language-specific models were also slightly better than multilingual BERT for most of the languages. The best overall accuracy was 54% and it was achieved by using multilingual BERT on Spanish data. The extracted data set presents a challenging task for sentiment analysis. We have released the data, including the testing and training splits, openly on Zenodo. The data set has been shuffled for copyright reasons. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 334,705 |
2410.05441 | Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox | We consider Thompson Sampling (TS) for linear combinatorial semi-bandits and subgaussian rewards. We propose the first known TS whose finite-time regret does not scale exponentially with the dimension of the problem. We further show the "mismatched sampling paradox": A learner who knows the rewards distributions and samples from the correct posterior distribution can perform exponentially worse than a learner who does not know the rewards and simply samples from a well-chosen Gaussian posterior. The code used to generate the experiments is available at https://github.com/RaymZhang/CTS-Mismatched-Paradox | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 495,733 |
2302.12156 | Personalized Decentralized Federated Learning with Knowledge Distillation | Personalization in federated learning (FL) functions as a coordinator for clients with high variance in data or behavior. Ensuring the convergence of these clients' models relies on how closely users collaborate with those with similar patterns or preferences. However, it is generally challenging to quantify similarity under limited knowledge about other users' models given to users in a decentralized network. To cope with this issue, we propose a personalized and fully decentralized FL algorithm, leveraging knowledge distillation techniques to empower each device so as to discern statistical distances between local models. Each client device can enhance its performance without sharing local data by estimating the similarity between two intermediate outputs from feeding local samples as in knowledge distillation. Our empirical studies demonstrate that the proposed algorithm improves the test accuracy of clients in fewer iterations under highly non-independent and identically distributed (non-i.i.d.) data distributions and is beneficial to agents with small datasets, even without the need for a central server. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 347,455 |
2310.09462 | A Framework for Empowering Reinforcement Learning Agents with Causal Analysis: Enhancing Automated Cryptocurrency Trading | Despite advances in artificial intelligence-enhanced trading methods, developing a profitable automated trading system remains challenging in the rapidly evolving cryptocurrency market. This research focuses on developing a reinforcement learning (RL) framework to tackle the complexities of trading five prominent altcoins: Binance Coin, Ethereum, Litecoin, Ripple, and Tether. To this end, we present the CausalReinforceNet (CRN) framework, which integrates both Bayesian and dynamic Bayesian network techniques to empower the RL agent in trade decision-making. We develop two agents using the framework based on distinct RL algorithms to analyse performance compared to the Buy-and-Hold benchmark strategy and a baseline RL model. The results indicate that our framework surpasses both models in profitability, highlighting CRN's consistent superiority, although the level of effectiveness varies across different cryptocurrencies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 399,782 |
2403.07918 | On the Societal Impact of Open Foundation Models | Foundation models are powerful technologies: how they are released publicly directly shapes their societal impact. In this position paper, we focus on open foundation models, defined here as those with broadly available model weights (e.g. Llama 2, Stable Diffusion XL). We identify five distinctive properties (e.g. greater customizability, poor monitoring) of open foundation models that lead to both their benefits and risks. Open foundation models present significant benefits, with some caveats, that span innovation, competition, the distribution of decision-making power, and transparency. To understand their risks of misuse, we design a risk assessment framework for analyzing their marginal risk. Across several misuse vectors (e.g. cyberattacks, bioweapons), we find that current research is insufficient to effectively characterize the marginal risk of open foundation models relative to pre-existing technologies. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements about misuse risks by revealing that past work has focused on different subsets of the framework with different assumptions, and articulates a way forward for more constructive debate. Overall, our work helps support a more grounded assessment of the societal impact of open foundation models by outlining what research is needed to empirically validate their theoretical benefits and risks. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 437,090 |
2311.00564 | Online Student-$t$ Processes with an Overall-local Scale Structure for Modelling Non-stationary Data | Time-dependent data often exhibit characteristics, such as non-stationarity and heavy-tailed errors, that would be inappropriate to model with the typical assumptions used in popular models. Thus, more flexible approaches are required to be able to accommodate such issues. To this end, we propose a Bayesian mixture of student-$t$ processes with an overall-local scale structure for the covariance. Moreover, we use a sequential Monte Carlo (SMC) sampler in order to perform online inference as data arrive in real-time. We demonstrate the superiority of our proposed approach compared to typical Gaussian process-based models on real-world data sets in order to prove the necessity of using mixtures of student-$t$ processes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,696
2006.08580 | Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality | We study the distribution and uncertainty of nonconvex optimization for noisy tensor completion -- the problem of estimating a low-rank tensor given incomplete and corrupted observations of its entries. Focusing on a two-stage estimation algorithm proposed by Cai et al. (2019), we characterize the distribution of this nonconvex estimator down to fine scales. This distributional theory in turn allows one to construct valid and short confidence intervals for both the unseen tensor entries and the unknown tensor factors. The proposed inferential procedure enjoys several important features: (1) it is fully adaptive to noise heteroscedasticity, and (2) it is data-driven and automatically adapts to unknown noise distributions. Furthermore, our findings unveil the statistical optimality of nonconvex tensor completion: it attains un-improvable $\ell_{2}$ accuracy -- including both the rates and the pre-constants -- when estimating both the unknown tensor and the underlying tensor factors. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 182,235
1909.05962 | SegNAS3D: Network Architecture Search with Derivative-Free Global Optimization for 3D Image Segmentation | Deep learning has largely reduced the need for manual feature selection in image segmentation. Nevertheless, network architecture optimization and hyperparameter tuning are mostly manual and time consuming. Although there are increasing research efforts on network architecture search in computer vision, most works concentrate on image classification but not segmentation, and there are very limited efforts on medical image segmentation especially in 3D. To remedy this, here we propose a framework, SegNAS3D, for network architecture search of 3D image segmentation. In this framework, a network architecture comprises interconnected building blocks that consist of operations such as convolution and skip connection. By representing the block structure as a learnable directed acyclic graph, hyperparameters such as the number of feature channels and the option of using deep supervision can be learned together through derivative-free global optimization. Experiments on 43 3D brain magnetic resonance images with 19 structures achieved an average Dice coefficient of 82%. Each architecture search required less than three days on three GPUs and produced architectures that were much smaller than the state-of-the-art manually created architectures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | 145,250
2306.14177 | Enhancing Mapless Trajectory Prediction through Knowledge Distillation | Scene information plays a crucial role in trajectory forecasting systems for autonomous driving by providing semantic clues and constraints on potential future paths of traffic agents. Prevalent trajectory prediction techniques often take high-definition maps (HD maps) as part of the inputs to provide scene knowledge. Although HD maps offer accurate road information, they may suffer from the high cost of annotation or restrictions of law that limits their widespread use. Therefore, those methods are still expected to generate reliable prediction results in mapless scenarios. In this paper, we tackle the problem of improving the consistency of multi-modal prediction trajectories and the real road topology when map information is unavailable during the test phase. Specifically, we achieve this by training a map-based prediction teacher network on the annotated samples and transferring the knowledge to a student mapless prediction network using a two-fold knowledge distillation framework. Our solution is generalizable for common trajectory prediction networks and does not bring extra computation burden. Experimental results show that our method stably improves prediction performance in mapless mode on many widely used state-of-the-art trajectory prediction baselines, compensating for the gaps caused by the absence of HD maps. Qualitative visualization results demonstrate that our approach helps infer unseen map information. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 375,581 |
2305.19244 | Testing for the Markov Property in Time Series via Deep Conditional Generative Learning | The Markov property is widely imposed in analysis of time series data. Correspondingly, testing the Markov property, and relatedly, inferring the order of a Markov model, are of paramount importance. In this article, we propose a nonparametric test for the Markov property in high-dimensional time series via deep conditional generative learning. We also apply the test sequentially to determine the order of the Markov model. We show that the test controls the type-I error asymptotically, and has the power approaching one. Our proposal makes novel contributions in several ways. We utilize and extend state-of-the-art deep generative learning to estimate the conditional density functions, and establish a sharp upper bound on the approximation error of the estimators. We derive a doubly robust test statistic, which employs a nonparametric estimation but achieves a parametric convergence rate. We further adopt sample splitting and cross-fitting to minimize the conditions required to ensure the consistency of the test. We demonstrate the efficacy of the test through both simulations and the three data applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 369,438
2406.08344 | Blind Image Deblurring with FFT-ReLU Sparsity Prior | Blind image deblurring is the process of recovering a sharp image from a blurred one without prior knowledge about the blur kernel. It is a small data problem, since the key challenge lies in estimating the unknown degrees of blur from a single image or limited data, instead of learning from large datasets. The solution depends heavily on developing algorithms that effectively model the image degradation process. We introduce a method that leverages a prior which targets the blur kernel to achieve effective deblurring across a wide range of image types. In our extensive empirical analysis, our algorithm achieves results that are competitive with the state-of-the-art blind image deblurring algorithms, and it offers up to two times faster inference, making it a highly efficient solution. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 463,445 |
1412.1723 | Communication complexity and the reality of the wave-function | In this review, we discuss a relation between quantum communication complexity and a long-standing debate in quantum foundation concerning the interpretation of the quantum state. Is the quantum state a physical element of reality as originally interpreted by Schrodinger? Or is it an abstract mathematical object containing statistical information about the outcome of measurements as interpreted by Born? Although these questions sound philosophical and pointless, they can be made precise in the framework of what we call classical theories of quantum processes, which are a reword of quantum phenomena in the language of classical probability theory. In 2012, Pusey, Barrett and Rudolph (PBR) proved, under an assumption of preparation independence, a theorem supporting the original interpretation of Schrodinger in the classical framework. Recently, we showed that these questions are related to a practical problem in quantum communication complexity, namely, quantifying the minimal amount of classical communication required in the classical simulation of a two-party quantum communication process. In particular, we argued that the statement of the PBR theorem can be proved if the classical communication cost of simulating the communication of n qubits grows more than exponentially in 'n'. Our argument is based on an assumption that we call probability equipartition property. This property is somehow weaker than the preparation independence property used in the PBR theorem, as the former can be justified by the latter and the asymptotic equipartition property of independent stochastic sources. The equipartition property is a general and natural hypothesis that can be assumed even if the preparation independence hypothesis is dropped. In this review, we further develop our argument into the form of a theorem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,130
2408.10623 | TextMastero: Mastering High-Quality Scene Text Editing in Diverse Languages and Styles | Scene text editing aims to modify texts on images while maintaining the style of newly generated text similar to the original. Given an image, a target area, and target text, the task produces an output image with the target text in the selected area, replacing the original. This task has been studied extensively, with initial success using Generative Adversarial Networks (GANs) to balance text fidelity and style similarity. However, GAN-based methods struggled with complex backgrounds or text styles. Recent works leverage diffusion models, showing improved results, yet still face challenges, especially with non-Latin languages like CJK characters (Chinese, Japanese, Korean) that have complex glyphs, often producing inaccurate or unrecognizable characters. To address these issues, we present \emph{TextMastero} - a carefully designed multilingual scene text editing architecture based on latent diffusion models (LDMs). TextMastero introduces two key modules: a glyph conditioning module for fine-grained content control in generating accurate texts, and a latent guidance module for providing comprehensive style information to ensure similarity before and after editing. Both qualitative and quantitative experiments demonstrate that our method surpasses all known existing works in text fidelity and style similarity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,946
2404.09556 | nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation | The release of nnU-Net marked a paradigm shift in 3D medical image segmentation, demonstrating that a properly configured U-Net architecture could still achieve state-of-the-art results. Despite this, the pursuit of novel architectures, and the respective claims of superior performance over the U-Net baseline, continued. In this study, we demonstrate that many of these recent claims fail to hold up when scrutinized for common validation shortcomings, such as the use of inadequate baselines, insufficient datasets, and neglected computational resources. By meticulously avoiding these pitfalls, we conduct a thorough and comprehensive benchmarking of current segmentation methods including CNN-based, Transformer-based, and Mamba-based approaches. In contrast to current beliefs, we find that the recipe for state-of-the-art performance is 1) employing CNN-based U-Net models, including ResNet and ConvNeXt variants, 2) using the nnU-Net framework, and 3) scaling models to modern hardware resources. These results indicate an ongoing innovation bias towards novel architectures in the field and underscore the need for more stringent validation standards in the quest for scientific progress. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,732
2402.06707 | Multi-class real-time crash risk forecasting using convolutional neural network: Istanbul case study | The performance of an artificial neural network (ANN) in forecasting crash risk is shown in this paper. To begin, some traffic and weather data are acquired as raw data. This data is then analyzed, and relevant characteristics are chosen to utilize as input data based on additional tree and Pearson correlation. Furthermore, crash and non-crash time data are separated; then, feature values for crash and non-crash events are written in three four-minute intervals prior to the crash and non-crash events using the average of all available values for that period. The number of non-crash samples was lowered after calculating crash likelihood for each period based on accident labeling. The proposed CNN model is capable of learning from recorded, processed, and categorized input characteristics such as traffic characteristics and meteorological conditions. The goal of this work is to forecast the chance of a real-time crash based on three periods before events. The area under the curve (AUC) for the receiver operating characteristic curve (ROC curve), as well as sensitivity as the true positive rate and specificity as the false positive rate, are shown and compared with three typical machine learning and neural network models. Finally, when it comes to the error value, AUC, sensitivity, and specificity parameters as performance variables, the executed model outperforms other models. The findings of this research suggest applying the CNN model as a multi-class prediction model for real-time crash risk prediction. Our emphasis is on multi-class prediction, while prior research used this for binary (two-class) categorization like crash and non-crash. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 428,410
1206.3231 | CORL: A Continuous-state Offset-dynamics Reinforcement Learner | Continuous state spaces and stochastic, switching dynamics characterize a number of rich, realworld domains, such as robot navigation across varying terrain. We describe a reinforcementlearning algorithm for learning in these domains and prove for certain environments the algorithm is probably approximately correct with a sample complexity that scales polynomially with the state-space dimension. Unfortunately, no optimal planning techniques exist in general for such problems; instead we use fitted value iteration to solve the learned MDP, and include the error due to approximate planning in our bounds. Finally, we report an experiment using a robotic car driving over varying terrain to demonstrate that these dynamics representations adequately capture real-world dynamics and that our algorithm can be used to efficiently solve such problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,490 |
2010.14931 | TurboKV: Scaling Up The Performance of Distributed Key-Value Stores With In-Switch Coordination | The power and flexibility of software-defined networks lead to a programmable network infrastructure in which in-network computation can help accelerating the performance of applications. This can be achieved by offloading some computational tasks to the network. However, what kind of computational tasks should be delegated to the network to accelerate applications performance? In this paper, we propose a way to exploit the usage of programmable switches to scale up the performance of distributed key-value stores. Moreover, as a proof-of-concept, we propose TurboKV, an efficient distributed key-value store architecture that utilizes programmable switches as: 1) partition management nodes to store the key-value store partitions and replicas information; and 2) monitoring stations to measure the load of storage nodes, this monitoring information is used to balance the load among storage nodes. We also propose a key-based routing protocol to route the search queries of clients based on the requested keys to targeted storage nodes. Our experimental results of an initial prototype show that our proposed architecture improves the throughput and reduces the latency of distributed key-value stores when compared to the existing architectures. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 203,623
2312.15172 | Pre-trained Trojan Attacks for Visual Recognition | Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks. However, the presence of backdoors within PVMs poses significant threats. Unfortunately, existing studies primarily focus on backdooring PVMs for the classification task, neglecting potential inherited backdoors in downstream tasks such as detection and segmentation. In this paper, we propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks. We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks. To achieve effective trigger activation in diverse tasks, we stylize the backdoor trigger patterns with class-specific textures, enhancing the recognition of task-irrelevant low-level features associated with the target class in the trigger pattern. Moreover, we address the issue of shortcut connections by introducing a context-free learning pipeline for poison training. In this approach, triggers without contextual backgrounds are directly utilized as training data, diverging from the conventional use of clean images. Consequently, we establish a direct shortcut from the trigger to the target class, mitigating the shortcut connection issue. We conducted extensive experiments to thoroughly validate the effectiveness of our attacks on downstream detection and segmentation tasks. Additionally, we showcase the potential of our approach in more practical scenarios, including large vision models and 3D object detection in autonomous driving. This paper aims to raise awareness of the potential threats associated with applying PVMs in practical scenarios. Our codes will be available upon paper publication. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 417,902
1906.06007 | Convolutional Neural Network based Multiple-Rate Compressive Sensing for Massive MIMO CSI Feedback: Design, Simulation, and Analysis | Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource. Large-scale antennas at the BS for massive MIMO seriously increase this overhead. In this paper, we propose a multiple-rate compressive sensing neural network framework to compress and quantize the CSI. This framework not only improves reconstruction accuracy but also decreases storage space at the UE, thus enhancing the system feasibility. Specifically, we establish two network design principles for CSI feedback, propose a new network architecture, CsiNet+, according to these principles, and develop a novel quantization framework and training strategy. Next, we further introduce two different variable-rate approaches, namely, SM-CsiNet+ and PM-CsiNet+, which decrease the parameter number at the UE by 38.0% and 46.7%, respectively. Experimental results show that CsiNet+ outperforms the state-of-the-art network by a margin but only slightly increases the parameter number. We also investigate the compression and reconstruction mechanism behind deep learning-based CSI feedback methods via parameter visualization, which provides a guideline for subsequent research. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 135,183
2204.13414 | Improving the Robustness of Federated Learning for Severely Imbalanced Datasets | With the ever increasing data deluge and the success of deep neural networks, the research of distributed deep learning has become pronounced. Two common approaches to achieve this distributed learning is synchronous and asynchronous weight update. In this manuscript, we have explored very simplistic synchronous weight update mechanisms. It has been seen that with an increasing number of worker nodes, the performance degrades drastically. This effect has been studied in the context of extreme imbalanced classification (e.g. outlier detection). In practical cases, the assumed conditions of i.i.d. may not be fulfilled. There may also arise global class imbalance situations like that of outlier detection where the local servers receive severely imbalanced data and may not get any samples from the minority class. In that case, the DNNs in the local servers will get completely biased towards the majority class that they receive. This would highly impact the learning at the parameter server (which practically does not see any data). It has been observed that in a parallel setting if one uses the existing federated weight update mechanisms at the parameter server, the performance degrades drastically with the increasing number of worker nodes. This is mainly because, with the increasing number of nodes, there is a high chance that one worker node gets a very small portion of the data, either not enough to train the model without overfitting or having a highly imbalanced class distribution. The chapter, hence, proposes a workaround to this problem by introducing the concept of adaptive cost-sensitive momentum averaging. It is seen that for the proposed system, there was no to minimal degradation in performance while most of the other methods hit their bottom performance before that. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 293,812
2307.00863 | Thompson Sampling under Bernoulli Rewards with Local Differential Privacy | This paper investigates the problem of regret minimization for multi-armed bandit (MAB) problems with local differential privacy (LDP) guarantee. Given a fixed privacy budget $\epsilon$, we consider three privatizing mechanisms under Bernoulli scenario: linear, quadratic and exponential mechanisms. Under each mechanism, we derive stochastic regret bound for Thompson Sampling algorithm. Finally, we simulate to illustrate the convergence of different mechanisms under different privacy budgets. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 377,157
1802.04086 | The Complex Event Recognition Group | The Complex Event Recognition (CER) group is a research team, affiliated with the National Centre of Scientific Research "Demokritos" in Greece. The CER group works towards advanced and efficient methods for the recognition of complex events in a multitude of large, heterogeneous and interdependent data streams. Its research covers multiple aspects of complex event recognition, from efficient detection of patterns on event streams to handling uncertainty and noise in streams, and machine learning techniques for inferring interesting patterns. Lately, it has expanded to methods for forecasting the occurrence of events. It was founded in 2009 and currently hosts 3 senior researchers, 5 PhD students and works regularly with under-graduate students. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 90,141 |
2412.01792 | CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion | Recent advances in 3D representations, such as Neural Radiance Fields and 3D Gaussian Splatting, have greatly improved realistic scene modeling and novel-view synthesis. However, achieving controllable and consistent editing in dynamic 3D scenes remains a significant challenge. Previous work is largely constrained by its editing backbones, resulting in inconsistent edits and limited controllability. In our work, we introduce a novel framework that first fine-tunes the InstructPix2Pix model, followed by a two-stage optimization of the scene based on deformable 3D Gaussians. Our fine-tuning enables the model to "learn" the editing ability from a single edited reference image, transforming the complex task of dynamic scene editing into a simple 2D image editing process. By directly learning editing regions and styles from the reference, our approach enables consistent and precise local edits without the need for tracking desired editing regions, effectively addressing key challenges in dynamic scene editing. Then, our two-stage optimization progressively edits the trained dynamic scene, using a designed edited image buffer to accelerate convergence and improve temporal consistency. Compared to state-of-the-art methods, our approach offers more flexible and controllable local scene editing, achieving high-quality and consistent results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 513,264
2306.11689 | Statistical Tests for Replacing Human Decision Makers with Algorithms | This paper proposes a statistical framework of using artificial intelligence to improve human decision making. The performance of each human decision maker is benchmarked against that of machine predictions. We replace the diagnoses made by a subset of the decision makers with the recommendation from the machine learning algorithm. We apply both a heuristic frequentist approach and a Bayesian posterior loss function approach to abnormal birth detection using a nationwide dataset of doctor diagnoses from prepregnancy checkups of reproductive age couples and pregnancy outcomes. We find that our algorithm on a test dataset results in a higher overall true positive rate and a lower false positive rate than the diagnoses made by doctors only. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 374,673 |
2408.15221 | LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | Recent large language model (LLM) defenses have greatly improved models' ability to refuse harmful queries, even when adversarially attacked. However, LLM defenses are primarily evaluated against automated adversarial attacks in a single turn of conversation, an insufficient threat model for real-world malicious use. We demonstrate that multi-turn human jailbreaks uncover significant vulnerabilities, exceeding 70% attack success rate (ASR) on HarmBench against defenses that report single-digit ASRs with automated single-turn attacks. Human jailbreaks also reveal vulnerabilities in machine unlearning defenses, successfully recovering dual-use biosecurity knowledge from unlearned models. We compile these results into Multi-Turn Human Jailbreaks (MHJ), a dataset of 2,912 prompts across 537 multi-turn jailbreaks. We publicly release MHJ alongside a compendium of jailbreak tactics developed across dozens of commercial red teaming engagements, supporting research towards stronger LLM defenses. | false | false | false | false | false | false | true | false | true | false | false | false | true | true | false | false | false | false | 483,843 |
2501.19267 | Transformer-Based Financial Fraud Detection with Cloud-Optimized Real-Time Streaming | As the financial industry becomes more interconnected and reliant on digital systems, fraud detection systems must evolve to meet growing threats. Cloud-enabled Transformer models present a transformative opportunity to address these challenges. By leveraging the scalability, flexibility, and advanced AI capabilities of cloud platforms, companies can deploy fraud detection solutions that adapt to real-time data patterns and proactively respond to evolving threats. Using the Graph self-attention Transformer neural network module, we can directly excavate gang fraud features from the transaction network without constructing complicated feature engineering. Finally, the fraud prediction network is combined to optimize the topological pattern and the temporal transaction pattern to realize the high-precision detection of fraudulent transactions. The results of antifraud experiments on credit card transaction data show that the proposed model outperforms the 7 baseline models on all evaluation indicators: In the transaction fraud detection task, the average accuracy (AP) increased by 20% and the area under the ROC curve (AUC) increased by 2.7% on average compared with the benchmark graph attention neural network (GAT), which verified the effectiveness of the proposed model in the detection of credit card fraud transactions. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 529,087
2012.03636 | Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent | In the vanishing learning rate regime, stochastic gradient descent (SGD) is now relatively well understood. In this work, we propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime. The focus is on deriving exactly solvable results and discussing their implications. The main contributions of this work are to derive the stationary distribution for discrete-time SGD in a quadratic loss function with and without momentum; in particular, one implication of our result is that the fluctuation caused by discrete-time dynamics takes a distorted shape and is dramatically larger than a continuous-time theory could predict. Examples of applications of the proposed theory considered in this work include the approximation error of variants of SGD, the effect of minibatch noise, the optimal Bayesian inference, the escape rate from a sharp minimum, and the stationary covariance of a few second-order methods including damped Newton's method, natural gradient descent, and Adam. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,190
2302.09000 | Train What You Know -- Precise Pick-and-Place with Transporter Networks | Precise pick-and-place is essential in robotic applications. To this end, we define a novel exact training method and an iterative inference method that improve pick-and-place precision with Transporter Networks. We conduct a large scale experiment on 8 simulated tasks. A systematic analysis shows, that the proposed modifications have a significant positive effect on model performance. Considering picking and placing independently, our methods achieve up to 60% lower rotation and translation errors than baselines. For the whole pick-and-place process we observe 50% lower rotation errors for most tasks with slight improvements in terms of translation errors. Furthermore, we propose architectural changes that retain model performance and reduce computational costs and time. We validate our methods with an interactive teaching procedure on real hardware. Supplementary material will be made available at: https://gergely-soti.github.io/p | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 346,261 |
cs/9811029 | A Human - machine interface for teleoperation of arm manipulators in a
complex environment | This paper discusses the feasibility of using configuration space (C-space) as a means of visualization and control in operator-guided real-time motion of a robot arm manipulator. The motivation is to improve performance of the human operator in tasks involving the manipulator motion in an environment with obstacles. Unlike some other motion planning tasks, operators are known to make expensive mistakes in such tasks, even in a simpler two-dimensional case. They have difficulty learning better procedures and their performance improves very little with practice. Using an example of a two-dimensional arm manipulator, we show that translating the problem into C-space improves the operator performance rather remarkably, on the order of magnitude compared to the usual work space control. An interface that makes the transfer possible is described, and an example of its use in a virtual environment is shown. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 540,442 |
2005.13363 | GSTO: Gated Scale-Transfer Operation for Multi-Scale Feature Learning in
Pixel Labeling | Existing CNN-based methods for pixel labeling heavily depend on multi-scale features to meet the requirements of both semantic comprehension and detail preservation. State-of-the-art pixel labeling neural networks widely exploit conventional scale-transfer operations, i.e., up-sampling and down-sampling to learn multi-scale features. In this work, we find that these operations lead to scale-confused features and suboptimal performance because they are spatial-invariant and directly transit all feature information cross scales without spatial selection. To address this issue, we propose the Gated Scale-Transfer Operation (GSTO) to properly transit spatial-filtered features to another scale. Specifically, GSTO can work either with or without extra supervision. Unsupervised GSTO is learned from the feature itself while the supervised one is guided by the supervised probability matrix. Both forms of GSTO are lightweight and plug-and-play, which can be flexibly integrated into networks or modules for learning better multi-scale features. In particular, by plugging GSTO into HRNet, we get a more powerful backbone (namely GSTO-HRNet) for pixel labeling, and it achieves new state-of-the-art results on the COCO benchmark for human pose estimation and other benchmarks for semantic segmentation including Cityscapes, LIP and Pascal Context, with negligible extra computational cost. Moreover, experiment results demonstrate that GSTO can also significantly boost the performance of multi-scale feature aggregation modules like PPM and ASPP. Code will be made available at https://github.com/VDIGPKU/GSTO. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 178,996 |
1906.11231 | On the Common Randomness Capacity of a Special Class of Two-way Channels | In this paper, we study the common randomness (CR) capacity of intertwined two-way channels, namely those whose marginal channel transition probabilities depend also on the signal they transmit. We present a few special settings and provide constructive schemes with which the two nodes can agree upon a common randomness. We then provide an outer bound on the CR capacity of the intertwined receiver-decomposable (RD) two-way channel and a bound on the cardinality of the available auxiliary variables. We also show that this outer bound is bounded above by the Venkatesan-Anantharam CR capacity, which makes it tight for the decomposing two-way setting. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 136,615
1104.0888 | Settling the feasibility of interference alignment for the MIMO
interference channel: the symmetric square case | Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N>= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,879 |
1706.08915 | The Fog of War: A Machine Learning Approach to Forecasting Weather on
Mars | For over a decade, scientists at NASA's Jet Propulsion Laboratory (JPL) have been recording measurements from the Martian surface as a part of the Mars Exploration Rovers mission. One quantity of interest has been the opacity of Mars's atmosphere for its importance in day-to-day estimations of the amount of power available to the rover from its solar arrays. This paper proposes the use of neural networks as a method for forecasting Martian atmospheric opacity that is more effective than the current empirical model. The more accurate prediction provided by these networks would allow operators at JPL to make more accurate predictions of the amount of energy available to the rover when they plan activities for coming sols. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 76,067 |
1705.09619 | Learning Lyapunov (Potential) Functions from Counterexamples and
Demonstrations | We present a technique for learning control Lyapunov (potential) functions, which are used in turn to synthesize controllers for nonlinear dynamical systems. The learning framework uses a demonstrator that implements a black-box, untrusted strategy presumed to solve the problem of interest, a learner that poses finitely many queries to the demonstrator to infer a candidate function and a verifier that checks whether the current candidate is a valid control Lyapunov function. The overall learning framework is iterative, eliminating a set of candidates on each iteration using the counterexamples discovered by the verifier and the demonstrations over these counterexamples. We prove its convergence using ellipsoidal approximation techniques from convex optimization. We also implement this scheme using nonlinear MPC controllers to serve as demonstrators for a set of state and trajectory stabilization problems for nonlinear dynamical systems. Our approach is able to synthesize relatively simple polynomial control Lyapunov functions, and in that process replace the MPC using a guaranteed and computationally less expensive controller. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 74,236 |
2501.17770 | Generative Unordered Flow for Set-Structured Data Generation | Flow-based generative models have demonstrated promising performance across a broad spectrum of data modalities (e.g., image and text). However, there are few works exploring their extension to unordered data (e.g., spatial point set), which is not trivial because previous models are mostly designed for vector data that are naturally ordered. In this paper, we present unordered flow, a type of flow-based generative model for set-structured data generation. Specifically, we convert unordered data into an appropriate function representation, and learn the probability measure of such representations through function-valued flow matching. For the inverse map from a function representation to unordered data, we propose a method similar to particle filtering, with Langevin dynamics to first warm-up the initial particles and gradient-based search to update them until convergence. We have conducted extensive experiments on multiple real-world datasets, showing that our unordered flow model is very effective in generating set-structured data and significantly outperforms previous baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 528,446 |
2308.04365 | SLEM: Machine Learning for Path Modeling and Causal Inference with Super
Learner Equation Modeling | Causal inference is a crucial goal of science, enabling researchers to arrive at meaningful conclusions regarding the predictions of hypothetical interventions using observational data. Path models, Structural Equation Models (SEMs), and, more generally, Directed Acyclic Graphs (DAGs), provide a means to unambiguously specify assumptions regarding the causal structure underlying a phenomenon. Unlike DAGs, which make very few assumptions about the functional and parametric form, SEM assumes linearity. This can result in functional misspecification which prevents researchers from undertaking reliable effect size estimation. In contrast, we propose Super Learner Equation Modeling, a path modeling technique integrating machine learning Super Learner ensembles. We empirically demonstrate its ability to provide consistent and unbiased estimates of causal effects, its competitive performance for linear models when compared with SEM, and highlight its superiority over SEM when dealing with non-linear relationships. We provide open-source code, and a tutorial notebook with example usage, accentuating the easy-to-use nature of the method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,382 |
2109.06627 | Scalable Font Reconstruction with Dual Latent Manifolds | We propose a deep generative model that performs typography analysis and font reconstruction by learning disentangled manifolds of both font style and character shape. Our approach enables us to massively scale up the number of character types we can effectively model compared to previous methods. Specifically, we infer separate latent variables representing character and font via a pair of inference networks which take as input sets of glyphs that either all share a character type, or belong to the same font. This design allows our model to generalize to characters that were not observed during training time, an important task in light of the relative sparsity of most fonts. We also put forward a new loss, adapted from prior work that measures likelihood using an adaptive distribution in a projected space, resulting in more natural images without requiring a discriminator. We evaluate on the task of font reconstruction over various datasets representing character types of many languages, and compare favorably to modern style transfer systems according to both automatic and manually-evaluated metrics. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 255,217 |
2412.03812 | Pinco: Position-induced Consistent Adapter for Diffusion Transformer in
Foreground-conditioned Inpainting | Foreground-conditioned inpainting aims to seamlessly fill the background region of an image by utilizing the provided foreground subject and a text description. While existing T2I-based image inpainting methods can be applied to this task, they suffer from issues of subject shape expansion, distortion, or impaired ability to align with the text description, resulting in inconsistencies between the visual elements and the text description. To address these challenges, we propose Pinco, a plug-and-play foreground-conditioned inpainting adapter that generates high-quality backgrounds with good text alignment while effectively preserving the shape of the foreground subject. Firstly, we design a Self-Consistent Adapter that integrates the foreground subject features into the layout-related self-attention layer, which helps to alleviate conflicts between the text and subject features by ensuring that the model can effectively consider the foreground subject's characteristics while processing the overall image layout. Secondly, we design a Decoupled Image Feature Extraction method that employs distinct architectures to extract semantic and shape features separately, significantly improving subject feature extraction and ensuring high-quality preservation of the subject's shape. Thirdly, to ensure precise utilization of the extracted features and to focus attention on the subject region, we introduce a Shared Positional Embedding Anchor, greatly improving the model's understanding of subject features and boosting training efficiency. Extensive experiments demonstrate that our method achieves superior performance and efficiency in foreground-conditioned inpainting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 514,116 |
1706.09364 | Online Adaptation of Convolutional Neural Networks for Video Object
Segmentation | We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in the video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 76,126 |
1404.6935 | Homophily and the Speed of Social Mobilization: The Effect of Acquired
and Ascribed Traits | Large-scale mobilization of individuals across social networks is becoming increasingly prevalent in society. However, little is known about what affects the speed of social mobilization. Here we use a framed field experiment to identify and measure properties of individuals and their relationships that predict mobilization speed. We ran a global social mobilization contest and recorded personal traits of the participants and those they recruited. We studied the effects of ascribed traits (gender, age) and acquired traits (geography and information source) on the speed of mobilization. We found that homophily, a preference for interacting with other individuals with similar traits, had a mixed role in social mobilization. Homophily was present for acquired traits, in which mobilization speed was faster when the recruiter and recruit had the same trait compared to different traits. In contrast, we did not find support for homophily for the ascribed traits. Instead, those traits had other, non-homophily effects: Females mobilized other females faster than males mobilized other males. Younger recruiters mobilized others faster, and older recruits mobilized slower. Recruits also mobilized faster when they first heard about the contest directly from the contest organization, and decreased in speed when hearing from less personal source types (e.g. family vs. media). These findings show that social mobilization includes dynamics that are unlike other, more passive forms of social activity propagation. These findings suggest relevant factors for engineering social mobilization tasks for increased speed. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 32,646
2310.15966 | Constructing and Machine Learning Calabi-Yau Five-folds | We construct all possible complete intersection Calabi-Yau five-folds in a product of four or less complex projective spaces, with up to four constraints. We obtain $27068$ spaces, which are not related by permutations of rows and columns of the configuration matrix, and determine the Euler number for all of them. Excluding the $3909$ product manifolds among those, we calculate the cohomological data for $12433$ cases, i.e. $53.7 \%$ of the non-product spaces, obtaining $2375$ different Hodge diamonds. The dataset containing all the above information is available at https://www.dropbox.com/scl/fo/z7ii5idt6qxu36e0b8azq/h?rlkey=0qfhx3tykytduobpld510gsfy&dl=0 . The distributions of the invariants are presented, and a comparison with the lower-dimensional analogues is discussed. Supervised machine learning is performed on the cohomological data, via classifier and regressor (both fully connected and convolutional) neural networks. We find that $h^{1,1}$ can be learnt very efficiently, with very high $R^2$ score and an accuracy of $96\%$, i.e. $96 \%$ of the predictions exactly match the correct values. For $h^{1,4},h^{2,3}, \eta$, we also find very high $R^2$ scores, but the accuracy is lower, due to the large ranges of possible values. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 402,521 |
2305.09602 | Urban-StyleGAN: Learning to Generate and Manipulate Images of Urban
Scenes | A promise of Generative Adversarial Networks (GANs) is to provide cheap photorealistic data for training and validating AI models in autonomous driving. Despite their huge success, their performance on complex images featuring multiple objects is understudied. While some frameworks produce high-quality street scenes with little to no control over the image content, others offer more control at the expense of high-quality generation. A common limitation of both approaches is the use of global latent codes for the whole image, which hinders the learning of independent object distributions. Motivated by SemanticStyleGAN (SSG), a recent work on latent space disentanglement in human face generation, we propose a novel framework, Urban-StyleGAN, for urban scene generation and manipulation. We find that a straightforward application of SSG leads to poor results because urban scenes are more complex than human faces. To provide a more compact yet disentangled latent representation, we develop a class grouping strategy wherein individual classes are grouped into super-classes. Moreover, we employ an unsupervised latent exploration algorithm in the $\mathcal{S}$-space of the generator and show that it is more efficient than the conventional $\mathcal{W}^{+}$-space in controlling the image content. Results on the Cityscapes and Mapillary datasets show the proposed approach achieves significantly more controllability and improved image quality than previous approaches on urban scenes and is on par with general-purpose non-controllable generative models (like StyleGAN2) in terms of quality. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 364,699 |
2405.03056 | Convolutional Learning on Directed Acyclic Graphs | We develop a novel convolutional architecture tailored for learning from data defined over directed acyclic graphs (DAGs). DAGs can be used to model causal relationships among variables, but their nilpotent adjacency matrices pose unique challenges towards developing DAG signal processing and machine learning tools. To address this limitation, we harness recent advances offering alternative definitions of causal shifts and convolutions for signals on DAGs. We develop a novel convolutional graph neural network that integrates learnable DAG filters to account for the partial ordering induced by the graph topology, thus providing valuable inductive bias to learn effective representations of DAG-supported data. We discuss the salient advantages and potential limitations of the proposed DAG convolutional network (DCN) and evaluate its performance on two learning tasks using synthetic data: network diffusion estimation and source identification. DCN compares favorably relative to several baselines, showcasing its promising potential. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 452,029 |
2502.07790 | Can Generative AI be Egalitarian? | The recent explosion of "foundation" generative AI models has been built upon the extensive extraction of value from online sources, often without corresponding reciprocation. This pattern mirrors and intensifies the extractive practices of surveillance capitalism, while the potential for enormous profit has challenged technology organizations' commitments to responsible AI practices, raising significant ethical and societal concerns. However, a promising alternative is emerging: the development of models that rely on content willingly and collaboratively provided by users. This article explores this "egalitarian" approach to generative AI, taking inspiration from the successful model of Wikipedia. We explore the potential implications of this approach for the design, development, and constraints of future foundation models. We argue that such an approach is not only ethically sound but may also lead to models that are more responsive to user needs, more diverse in their training data, and ultimately more aligned with societal values. Furthermore, we explore potential challenges and limitations of this approach, including issues of scalability, quality control, and potential biases inherent in volunteer-contributed content. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 532,758 |
1911.09450 | Few Shot Network Compression via Cross Distillation | Model compression has been widely adopted to obtain light-weighted deep neural networks. Most prevalent methods, however, require fine-tuning with sufficient training data to ensure accuracy, which could be challenged by privacy and security issues. As a compromise between privacy and performance, in this paper we investigate few shot network compression: given few samples per class, how can we effectively compress the network with negligible performance drop? The core challenge of few shot network compression lies in high estimation errors from the original network during inference, since the compressed network can easily over-fit on the few training instances. The estimation errors could propagate and accumulate layer-wisely and finally deteriorate the network output. To address the problem, we propose cross distillation, a novel layer-wise knowledge distillation approach. By interweaving hidden layers of teacher and student network, layer-wisely accumulated estimation errors can be effectively reduced. The proposed method offers a general framework compatible with prevalent network compression techniques such as pruning. Extensive experiments on benchmark datasets demonstrate that cross distillation can significantly improve the student network's accuracy when only a few training instances are available. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 154,525
2002.05378 | Efficient Distance Approximation for Structured High-Dimensional
Distributions via Learning | We design efficient distance approximation algorithms for several classes of structured high-dimensional distributions. Specifically, we show algorithms for the following problems: - Given sample access to two Bayesian networks $P_1$ and $P_2$ over known directed acyclic graphs $G_1$ and $G_2$ having $n$ nodes and bounded in-degree, approximate $d_{tv}(P_1,P_2)$ to within additive error $\epsilon$ using $poly(n,\epsilon)$ samples and time - Given sample access to two ferromagnetic Ising models $P_1$ and $P_2$ on $n$ variables with bounded width, approximate $d_{tv}(P_1, P_2)$ to within additive error $\epsilon$ using $poly(n,\epsilon)$ samples and time - Given sample access to two $n$-dimensional Gaussians $P_1$ and $P_2$, approximate $d_{tv}(P_1, P_2)$ to within additive error $\epsilon$ using $poly(n,\epsilon)$ samples and time - Given access to observations from two causal models $P$ and $Q$ on $n$ variables that are defined over known causal graphs, approximate $d_{tv}(P_a, Q_a)$ to within additive error $\epsilon$ using $poly(n,\epsilon)$ samples, where $P_a$ and $Q_a$ are the interventional distributions obtained by the intervention $do(A=a)$ on $P$ and $Q$ respectively for a particular variable $A$. Our results are the first efficient distance approximation algorithms for these well-studied problems. They are derived using a simple and general connection to distribution learning algorithms. The distance approximation algorithms imply new efficient algorithms for {\em tolerant} testing of closeness of the above-mentioned structured high-dimensional distributions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 163,878 |
2411.01380 | Signer-Optimal Multiple-Time Post-Quantum Hash-Based Signature for
Heterogeneous IoT Systems | Heterogeneous Internet of Things (IoTs) harboring resource-limited devices like wearable sensors are essential for next-generation networks. Ensuring the authentication and integrity of security-sensitive telemetry in these applications is vital. Digital signatures provide scalable authentication with non-repudiation and public verifiability, making them essential tools for IoTs. However, emerging quantum computers necessitate post-quantum (PQ) secure solutions, yet existing NIST-PQC standards are costlier than their conventional counterparts and unsuitable for resource-limited IoTs. There is a significant need for lightweight PQ-secure digital signatures that respect the resource constraints of low-end IoTs. We propose a new multiple-time hash-based signature called Maximum Utilization Multiple HORS (MUM-HORS) that offers PQ security, short signatures, fast signing, and high key utilization for an extended lifespan. MUM-HORS addresses the inefficiency and key loss issues of HORS in offline/online settings by introducing compact key management data structures and optimized resistance to weak-message attacks. We tested MUM-HORS on two embedded platforms (ARM Cortex A-72 and 8-bit AVR ATmega2560) and commodity hardware. Our experiments confirm up to 40x better utilization with the same signing capacity (2^20 messages, 128-bit security) compared to multiple-time HORS while achieving 2x and 156-2463x faster signing than conventional-secure and NIST PQ-secure schemes, respectively, on an ARM Cortex. These features make MUM-HORS ideal multiple-time PQ-secure signature for heterogeneous IoTs. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 505,042 |
2412.01807 | Occam's LGS: A Simple Approach for Language Gaussian Splatting | TL;DR: Gaussian Splatting is a widely adopted approach for 3D scene representation that offers efficient, high-quality 3D reconstruction and rendering. A major reason for the success of 3DGS is its simplicity of representing a scene with a set of Gaussians, which makes it easy to interpret and adapt. To enhance scene understanding beyond the visual representation, approaches have been developed that extend 3D Gaussian Splatting with semantic vision-language features, especially allowing for open-set tasks. In this setting, the language features of 3D Gaussian Splatting are often aggregated from multiple 2D views. Existing works address this aggregation problem using cumbersome techniques that lead to high computational cost and training time. In this work, we show that the sophisticated techniques for language-grounded 3D Gaussian Splatting are simply unnecessary. Instead, we apply Occam's razor to the task at hand and perform weighted multi-view feature aggregation using the weights derived from the standard rendering process, followed by a simple heuristic-based noisy Gaussian filtration. Doing so offers us state-of-the-art results with a speed-up of two orders of magnitude. We showcase our results in two commonly used benchmark datasets: LERF and 3D-OVS. Our simple approach allows us to perform reasoning directly in the language features, without any compression whatsoever. Such modeling in turn offers easy scene manipulation, unlike the existing methods -- which we illustrate using an application of object insertion in the scene. Furthermore, we provide a thorough discussion regarding the significance of our contributions within the context of the current literature. Project Page: https://insait-institute.github.io/OccamLGS/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 513,271 |
2307.12101 | Spatial Self-Distillation for Object Detection with Inaccurate Bounding
Boxes | Object detection via inaccurate bounding boxes supervision has boosted a broad interest due to the expensive high-quality annotation data or the occasional inevitability of low annotation quality (e.g., tiny objects). The previous works usually utilize multiple instance learning (MIL), which highly depends on category information, to select and refine a low-quality box. Those methods suffer from object drift, group prediction and part domination problems without exploring spatial information. In this paper, we heuristically propose a Spatial Self-Distillation based Object Detector (SSD-Det) to mine spatial information to refine the inaccurate box in a self-distillation fashion. SSD-Det utilizes a Spatial Position Self-Distillation (SPSD) module to exploit spatial information and an interactive structure to combine spatial information and category information, thus constructing a high-quality proposal bag. To further improve the selection procedure, a Spatial Identity Self-Distillation (SISD) module is introduced in SSD-Det to obtain spatial confidence to help select the best proposals. Experiments on MS-COCO and VOC datasets with noisy box annotation verify our method's effectiveness and achieve state-of-the-art performance. The code is available at https://github.com/ucas-vg/PointTinyBenchmark/tree/SSD-Det. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 381,136
2006.05087 | Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling | In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise on the behavior of Markov chain Monte Carlo sampling (SGMCMC) algorithms. Our formulation unlocks the design of a novel, practical approach to posterior sampling, which makes the SG noise isotropic using a fixed learning rate that we determine analytically, and that requires weaker assumptions than existing algorithms. In contrast, the common trait of existing SGMCMC algorithms is to approximate the isotropy condition either by drowning the gradients in additive noise (annealing the learning rate) or by making restrictive assumptions on the SG noise covariance and the geometry of the loss landscape. Extensive experimental validations indicate that our proposal is competitive with the state-of-the-art on SGMCMC, while being much more practical to use. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,934
1710.07312 | FPGA-based ORB Feature Extraction for Real-Time Visual SLAM | Simultaneous Localization And Mapping (SLAM) is the problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. How to enable SLAM robustly and durably on mobile, or even IoT grade devices, is the main challenge faced by the industry today. The main problems we need to address are: 1.) how to accelerate the SLAM pipeline to meet real-time requirements; and 2.) how to reduce SLAM energy consumption to extend battery life. After delving into the problem, we found out that feature extraction is indeed the bottleneck of performance and energy consumption. Hence, in this paper, we design, implement, and evaluate a hardware ORB feature extractor and prove that our design is a great balance between performance and energy consumption compared with ARM Krait and Intel Core i5. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 82,910 |
1908.02338 | Modelling Segmented Cardiotocography Time-Series Signals Using
One-Dimensional Convolutional Neural Networks for the Early Detection of
Abnormal Birth Outcomes | Gynaecologists and obstetricians visually interpret cardiotocography (CTG) traces using the International Federation of Gynaecology and Obstetrics (FIGO) guidelines to assess the wellbeing of the foetus during antenatal care. This approach has raised concerns among professionals with regard to inter- and intra-variability where clinical diagnosis only has a 30\% positive predictive value when classifying pathological outcomes. Machine learning models, trained with FIGO and other user derived features extracted from CTG traces, have been shown to increase positive predictive capacity and minimise variability. This is only possible however when class distributions are equal which is rarely the case in clinical trials where case-control observations are heavily skewed in favour of normal outcomes. Classes can be balanced using either synthetic data derived from resampled case training data or by decreasing the number of control instances. However, this either introduces bias or removes valuable information. Concerns have also been raised regarding machine learning studies and their reliance on manually handcrafted features. While this has led to some interesting results, deriving an optimal set of features is considered to be an art as well as a science and is often an empirical and time-consuming process. In this paper, we address both of these issues and propose a novel CTG analysis methodology that a) splits CTG time-series signals into n-size windows with equal class distributions, and b) automatically extracts features from time-series windows using a one dimensional convolutional neural network (1DCNN) and multilayer perceptron (MLP) ensemble. Collectively, the proposed approach normally distributes classes and removes the need to handcraft features from CTG traces. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,973 |
2205.11168 | Logarithmic regret bounds for continuous-time average-reward Markov
decision processes | We consider reinforcement learning for continuous-time Markov decision processes (MDPs) in the infinite-horizon, average-reward setting. In contrast to discrete-time MDPs, a continuous-time process moves to a state and stays there for a random holding time after an action is taken. With unknown transition probabilities and rates of exponential holding times, we derive instance-dependent regret lower bounds that are logarithmic in the time horizon. Moreover, we design a learning algorithm and establish a finite-time regret bound that achieves the logarithmic growth rate. Our analysis builds upon upper confidence reinforcement learning, a delicate estimation of the mean holding times, and stochastic comparison of point processes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,030 |
2502.09274 | FLARES: Fast and Accurate LiDAR Multi-Range Semantic Segmentation | 3D scene understanding is a critical yet challenging task in autonomous driving, primarily due to the irregularity and sparsity of LiDAR data, as well as the computational demands of processing large-scale point clouds. Recent methods leverage the range-view representation to improve processing efficiency. To mitigate the performance drop caused by information loss inherent to the "many-to-one" problem, where multiple nearby 3D points are mapped to the same 2D grids and only the closest is retained, prior works tend to choose a higher azimuth resolution for range-view projection. However, this has the drawback of reducing the proportion of pixels that carry information and increasing computation within the network. We argue that it is not the optimal solution and show that, in contrast, decreasing the resolution is more advantageous in both efficiency and accuracy. In this work, we present a comprehensive re-design of the workflow for range-view-based LiDAR semantic segmentation. Our approach addresses data representation, augmentation, and post-processing methods for improvements. Through extensive experiments on two public datasets, we demonstrate that our pipeline significantly enhances the performance of various network architectures over their baselines, paving the way for more effective LiDAR-based perception in autonomous systems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 533,386 |
2501.04217 | Continual Self-supervised Learning Considering Medical Domain Knowledge
in Chest CT Images | We propose a novel continual self-supervised learning method (CSSL) considering medical domain knowledge in chest CT images. Our approach addresses the challenge of sequential learning by effectively capturing the relationship between previously learned knowledge and new information at different stages. By incorporating an enhanced DER into CSSL and maintaining both diversity and representativeness within the rehearsal buffer of DER, the risk of data interference during pretraining is reduced, enabling the model to learn richer and more robust feature representations. In addition, we incorporate a mixup strategy and feature distillation to further enhance the model's ability to learn meaningful representations. We validate our method using chest CT images obtained under two different imaging conditions, demonstrating superior performance compared to state-of-the-art methods. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 523,136 |
1805.12022 | On $q$-ratio CMSV for sparse recovery | Sparse recovery aims to reconstruct an unknown sparse or approximately sparse signal from relatively few noisy incoherent linear measurements. As a kind of computable incoherence measure of the measurement matrix, $q$-ratio constrained minimal singular values (CMSV) was proposed in Zhou and Yu \cite{zhou2018sparse} to derive the performance bounds for sparse recovery. In this paper, we study the geometrical property of the $q$-ratio CMSV, based on which we establish new sufficient conditions for signal recovery involving both sparsity defect and measurement error. The $\ell_1$-truncated set $q$-width of the measurement matrix is developed as the geometrical characterization of $q$-ratio CMSV. In addition, we show that the $q$-ratio CMSVs of a class of structured random matrices are bounded away from zero with high probability as long as the number of measurements is large enough, therefore satisfy those established sufficient conditions. Overall, our results generalize the results in Zhang and Cheng \cite{zc} from $q=2$ to any $q\in(1,\infty]$ and complement the arguments of $q$-ratio CMSV from a geometrical view. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 99,074 |
2410.14393 | Debug Smarter, Not Harder: AI Agents for Error Resolution in
Computational Notebooks | Computational notebooks have become indispensable tools for research-related development, offering unprecedented interactivity and flexibility in the development process. However, these benefits come at the cost of reproducibility and an increased potential for bugs. With the rise of code-fluent Large Language Models empowered with agentic techniques, smart bug-fixing tools with a high level of autonomy have emerged. However, those tools are tuned for classical script programming and still struggle with non-linear computational notebooks. In this paper, we present an AI agent designed specifically for error resolution in a computational notebook. We have developed an agentic system capable of exploring a notebook environment by interacting with it -- similar to how a user would -- and integrated the system into the JetBrains service for collaborative data science called Datalore. We evaluate our approach against the pre-existing single-action solution by comparing costs and conducting a user study. Users rate the error resolution capabilities of the agentic system higher but experience difficulties with UI. We share the results of the study and consider them valuable for further improving user-agent collaboration. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,010 |
1611.10088 | On Binary de Bruijn Sequences from LFSRs with Arbitrary Characteristic
Polynomials | We propose a construction of de Bruijn sequences by the cycle joining method from linear feedback shift registers (LFSRs) with arbitrary characteristic polynomial $f(x)$. We study in detail the cycle structure of the set $\Omega(f(x))$ that contains all sequences produced by a specific LFSR on distinct inputs and provide a fast way to find a state of each cycle. This leads to an efficient algorithm to find all conjugate pairs between any two cycles, yielding the adjacency graph. The approach is practical to generate a large class of de Bruijn sequences up to order $n \approx 20$. Many previously proposed constructions of de Bruijn sequences are shown to be special cases of our construction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 64,771 |
2407.01916 | Sequential Manipulation Against Rank Aggregation: Theory and Algorithm | Rank aggregation with pairwise comparisons is widely encountered in sociology, politics, economics, psychology, sports, etc. Given the enormous social impact and the consequent incentives, the potential adversary has a strong motivation to manipulate the ranking list. However, the ideal attack opportunity and the excessive adversarial capability cause the existing methods to be impractical. To fully explore the potential risks, we leverage an online attack on the vulnerable data collection process. Since it is independent of rank aggregation and lacks effective protection mechanisms, we disrupt the data collection process by fabricating pairwise comparisons without knowledge of the future data or the true distribution. From the game-theoretic perspective, the confrontation scenario between the online manipulator and the ranker who takes control of the original data source is formulated as a distributionally robust game that deals with the uncertainty of knowledge. Then we demonstrate that the equilibrium in the above game is potentially favorable to the adversary by analyzing the vulnerability of the sampling algorithms such as Bernoulli and reservoir methods. According to the above theoretical analysis, different sequential manipulation policies are proposed under a Bayesian decision framework and a large class of parametric pairwise comparison models. For attackers with complete knowledge, we establish the asymptotic optimality of the proposed policies. To increase the success rate of the sequential manipulation with incomplete knowledge, a distributionally robust estimator, which replaces the maximum likelihood estimation in a saddle point problem, provides a conservative data generation solution. Finally, the corroborating empirical evidence shows that the proposed method manipulates the results of rank aggregation methods in a sequential manner. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 469,509 |
2001.03384 | Decentralized Optimization of Vehicle Route Planning -- A Cross-City
Comparative Study | New mobility concepts are at the forefront of research and innovation in smart cities. The introduction of connected and autonomous vehicles enables new possibilities in vehicle routing. Specifically, knowing the origin and destination of each agent in the network can allow for real-time routing of the vehicles to optimize network performance. However, this relies on individual vehicles being "altruistic" i.e., being willing to accept an alternative non-preferred route in order to achieve a network-level performance goal. In this work, we conduct a study to compare different levels of agent altruism and the resulting effect on the network-level traffic performance. Specifically, this study compares the effects of different underlying urban structures on the overall network performance, and investigates which characteristics of the network make it possible to realize routing improvements using a decentralized optimization router. The main finding is that, with increased vehicle altruism, it is possible to balance traffic flow among the links of the network. We show evidence that the decentralized optimization router is more effective with networks of high load while we study the influence of cities characteristics, in particular: networks with a higher number of nodes (intersections) or edges (roads) per unit area allow for more possible alternate routes, and thus higher potential to improve network performance. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | true | false | false | true | 159,964 |
1910.05672 | Optic-Net: A Novel Convolutional Neural Network for Diagnosis of Retinal
Diseases from Optical Tomography Images | Diagnosing different retinal diseases from Spectral Domain Optical Coherence Tomography (SD-OCT) images is a challenging task. Different automated approaches such as image processing, machine learning and deep learning algorithms have been used for early detection and diagnosis of retinal diseases. Unfortunately, these are prone to error and computational inefficiency, which requires further intervention from human experts. In this paper, we propose a novel convolutional neural network architecture to successfully distinguish between different degeneration of retinal layers and their underlying causes. The proposed novel architecture outperforms other classification models while addressing the issue of gradient explosion. Our approach reaches near perfect accuracy of 99.8% and 100% for two separately available retinal SD-OCT datasets, respectively. Additionally, our architecture predicts retinal diseases in real time while outperforming human diagnosticians. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 149,138 |
1308.1292 | Science Fiction as a Worldwide Phenomenon: A Study of International
Creation, Consumption and Dissemination | This paper examines the international nature of science fiction. The focus of this research is to determine whether science fiction is primarily English-speaking and Western, or global, created and consumed by people in non-Western, non-English-speaking countries. Science fiction's international presence was found in three ways: by network analysis, by examining an online retailer, and with a survey. Condor, a program developed by GalaxyAdvisors, was used to determine if science fiction is being talked about by non-English speakers. An analysis of the international Amazon.com websites was done to discover if it was being consumed worldwide. A survey was also conducted to see if people had experience with science fiction. All three research methods revealed similar results. Science fiction was found to be international, with science fiction creators originating in different countries and writing in a host of different languages. English and non-English science fiction was being created and consumed all over the world, not just in the English-speaking West. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 26,296 |
2212.09252 | Mind the Knowledge Gap: A Survey of Knowledge-enhanced Dialogue Systems | Many dialogue systems (DSs) lack characteristics humans have, such as emotion perception, factuality, and informativeness. Enhancing DSs with knowledge alleviates this problem, but, as many ways of doing so exist, keeping track of all proposed methods is difficult. Here, we present the first survey of knowledge-enhanced DSs. We define three categories of systems - internal, external, and hybrid - based on the knowledge they use. We survey the motivation for enhancing DSs with knowledge, used datasets, and methods for knowledge search, knowledge encoding, and knowledge incorporation. Finally, we propose how to improve existing systems based on theories from linguistics and cognitive science. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 337,044 |
2410.11347 | Periodic autocorrelation of sequences | The autocorrelation of a sequence is a useful criterion, among all, of resistance to cryptographic attacks. The behavior of the autocorrelations of random Boolean functions (studied by Florian Caullery, Eric F\'erard and Fran\c{c}ois Rodier [4]) shows that they are concentrated around a point. We show that the same is true for the evaluation of the periodic autocorrelations of random binary sequences. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 498,515 |
2409.06183 | EDADepth: Enhanced Data Augmentation for Monocular Depth Estimation | Due to their text-to-image synthesis feature, diffusion models have recently seen a rise in visual perception tasks, such as depth estimation. The lack of good-quality datasets makes the extraction of a fine-grain semantic context challenging for the diffusion models. The semantic context with fewer details further worsens the process of creating effective text embeddings that will be used as input for diffusion models. In this paper, we propose a novel EDADepth, an enhanced data augmentation method to estimate monocular depth without using additional training data. We use Swin2SR, a super-resolution model, to enhance the quality of input images. We employ the BEiT pre-trained semantic segmentation model for better extraction of text embeddings. We use BLIP-2 tokenizer to generate tokens from these text embeddings. The novelty of our approach is the introduction of Swin2SR, the BEiT model, and the BLIP-2 tokenizer in the diffusion-based pipeline for the monocular depth estimation. Our model achieves state-of-the-art results (SOTA) on the delta3 metric on NYUv2 and KITTI datasets. It also achieves results comparable to those of the SOTA models in the RMSE and REL metrics. Finally, we also show improvements in the visualization of the estimated depth compared to the SOTA diffusion-based monocular depth estimation models. Code: https://github.com/edadepthmde/EDADepth_ICMLA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 487,028 |
1503.08513 | Hiding Symbols and Functions: New Metrics and Constructions for
Information-Theoretic Security | We present information-theoretic definitions and results for analyzing symmetric-key encryption schemes beyond the perfect secrecy regime, i.e. when perfect secrecy is not attained. We adopt two lines of analysis, one based on lossless source coding, and another akin to rate-distortion theory. We start by presenting a new information-theoretic metric for security, called symbol secrecy, and derive associated fundamental bounds. We then introduce list-source codes (LSCs), which are a general framework for mapping a key length (entropy) to a list size that an eavesdropper has to resolve in order to recover a secret message. We provide explicit constructions of LSCs, and demonstrate that, when the source is uniformly distributed, the highest level of symbol secrecy for a fixed key length can be achieved through a construction based on minimum-distance separable (MDS) codes. Using an analysis related to rate-distortion theory, we then show how symbol secrecy can be used to determine the probability that an eavesdropper correctly reconstructs functions of the original plaintext. We illustrate how these bounds can be applied to characterize security properties of symmetric-key encryption schemes, and, in particular, extend security claims based on symbol secrecy to a functional setting. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,598 |
2309.00386 | Satisfiability Checking of Multi-Variable TPTL with Unilateral Intervals
Is PSPACE-Complete | We investigate the decidability of the ${0,\infty}$ fragment of Timed Propositional Temporal Logic (TPTL). We show that the satisfiability checking of TPTL$^{0,\infty}$ is PSPACE-complete. Moreover, even its 1-variable fragment (1-TPTL$^{0,\infty}$) is strictly more expressive than Metric Interval Temporal Logic (MITL) for which satisfiability checking is EXPSPACE complete. Hence, we have a strictly more expressive logic with computationally easier satisfiability checking. To the best of our knowledge, TPTL$^{0,\infty}$ is the first multi-variable fragment of TPTL for which satisfiability checking is decidable without imposing any bounds/restrictions on the timed words (e.g. bounded variability, bounded time, etc.). The membership in PSPACE is obtained by a reduction to the emptiness checking problem for a new "non-punctual" subclass of Alternating Timed Automata with multiple clocks called Unilateral Very Weak Alternating Timed Automata (VWATA$^{0,\infty}$) which we prove to be in PSPACE. We show this by constructing a simulation equivalent non-deterministic timed automata whose number of clocks is polynomial in the size of the given VWATA$^{0,\infty}$. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 389,297 |
2310.03614 | Adversarial Machine Learning for Social Good: Reframing the Adversary as
an Ally | Deep Neural Networks (DNNs) have been the driving force behind many of the recent advances in machine learning. However, research has shown that DNNs are vulnerable to adversarial examples -- input samples that have been perturbed to force DNN-based models to make errors. As a result, Adversarial Machine Learning (AdvML) has gained a lot of attention, and researchers have investigated these vulnerabilities in various settings and modalities. In addition, DNNs have also been found to incorporate embedded bias and often produce unexplainable predictions, which can result in anti-social AI applications. The emergence of new AI technologies that leverage Large Language Models (LLMs), such as ChatGPT and GPT-4, increases the risk of producing anti-social applications at scale. AdvML for Social Good (AdvML4G) is an emerging field that repurposes the AdvML bug to invent pro-social applications. Regulators, practitioners, and researchers should collaborate to encourage the development of pro-social applications and hinder the development of anti-social ones. In this work, we provide the first comprehensive review of the emerging field of AdvML4G. This paper encompasses a taxonomy that highlights the emergence of AdvML4G, a discussion of the differences and similarities between AdvML4G and AdvML, a taxonomy covering social good-related concepts and aspects, an exploration of the motivations behind the emergence of AdvML4G at the intersection of ML4G and AdvML, and an extensive summary of the works that utilize AdvML4G as an auxiliary tool for innovating pro-social applications. Finally, we elaborate upon various challenges and open research issues that require significant attention from the research community. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 397,348 |
2410.06317 | Learning in complex action spaces without policy gradients | Conventional wisdom suggests that policy gradient methods are better suited to complex action spaces than action-value methods. However, foundational studies have shown equivalences between these paradigms in small and finite action spaces (O'Donoghue et al., 2017; Schulman et al., 2017a). This raises the question of why their computational applicability and performance diverge as the complexity of the action space increases. We hypothesize that the apparent superiority of policy gradients in such settings stems not from intrinsic qualities of the paradigm, but from universal principles that can also be applied to action-value methods to serve similar functionality. We identify three such principles and provide a framework for incorporating them into action-value methods. To support our hypothesis, we instantiate this framework in what we term QMLE, for Q-learning with maximum likelihood estimation. Our results show that QMLE can be applied to complex action spaces with a controllable computational cost that is comparable to that of policy gradient methods, all without using policy gradients. Furthermore, QMLE demonstrates strong performance on the DeepMind Control Suite, even when compared to the state-of-the-art methods such as DMPO and D4PG. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 496,142 |
2209.08283 | Detecting Generated Scientific Papers using an Ensemble of Transformer
Models | The paper describes neural models developed for the DAGPap22 shared task hosted at the Third Workshop on Scholarly Document Processing. This shared task targets the automatic detection of generated scientific papers. Our work focuses on comparing different transformer-based models as well as using additional datasets and techniques to deal with imbalanced classes. As a final submission, we utilized an ensemble of SciBERT, RoBERTa, and DeBERTa fine-tuned using a random oversampling technique. Our model achieved 99.24% in terms of F1-score. The official evaluation results placed our system third. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 318,063 |
1611.04845 | An Evaluation of Information Sharing Parking Guidance Policies Using a
Bayesian Approach | Real-time parking occupancy information is critical for a parking management system to facilitate drivers to park more efficiently. Recent advances in connected and automated vehicle technologies enable sensor-equipped cars (probe cars) to detect and broadcast available parking spaces when driving through parking lots. In this paper, we evaluate the impact of market penetration of probe cars on the system performance, and investigate different parking guidance policies to improve the data acquisition process. We adopt a simulation-based approach to impose four policies on an off-street parking lot influencing the behavior of probe cars to park in assigned parking spaces. This in turn affects the scanning route and the parking space occupancy estimations. The last policy we propose is a near-optimal guidance strategy that maximizes the information gain of posteriors. The results suggest that an efficient information gathering policy can compensate for low penetration of connected and automated vehicles. We also highlight the policy trade-offs that occur while attempting to maximize information gain through explorations and improve assignment accuracy through exploitations. Our results can assist urban policy makers in designing and managing smart parking systems. | false | false | false | false | true | false | false | true | false | false | false | false | false | true | false | false | false | false | 63,910 |
2409.15915 | Planning in the Dark: LLM-Symbolic Planning Pipeline without Experts | Large Language Models (LLMs) have shown promise in solving natural language-described planning tasks, but their direct use often leads to inconsistent reasoning and hallucination. While hybrid LLM-symbolic planning pipelines have emerged as a more robust alternative, they typically require extensive expert intervention to refine and validate generated action schemas. It not only limits scalability but also introduces a potential for biased interpretation, as a single expert's interpretation of ambiguous natural language descriptions might not align with the user's actual intent. To address this, we propose a novel approach that constructs an action schema library to generate multiple candidates, accounting for the diverse possible interpretations of natural language descriptions. We further introduce a semantic validation and ranking module that automatically filters and ranks the generated schemas and plans without expert-in-the-loop. The experiments showed our pipeline maintains superiority in planning over the direct LLM planning approach. These findings demonstrate the feasibility of a fully automated end-to-end LLM-symbolic planner that requires no expert intervention, opening up the possibility for a broader audience to engage with AI planning with fewer prerequisites in domain expertise. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 491,127 |
2103.11056 | ConDA: Continual Unsupervised Domain Adaptation | Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place. However, current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice. This paper considers a more realistic scenario, where target data become available in smaller batches and adaptation on the entire target domain is not feasible. In our work, we introduce a new, data-constrained DA paradigm where unlabeled target samples are received in batches and adaptation is performed continually. We propose a novel source-free method for continual unsupervised domain adaptation that utilizes a buffer for selective replay of previously seen samples. In our continual DA framework, we selectively mix samples from incoming batches with data stored in a buffer using buffer management strategies and use the combination to incrementally update our model. We evaluate the classification performance of the continual DA approach with state-of-the-art DA methods based on the entire target domain. Our results on three popular DA datasets demonstrate that our method outperforms many existing state-of-the-art DA methods with access to the entire target domain during adaptation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,648 |
2406.09900 | GEB-1.3B: Open Lightweight Large Language Model | Recently developed large language models (LLMs) such as ChatGPT, Claude, and Llama have demonstrated impressive abilities, and even surpass human-level performance in several tasks. Despite their success, the resource-intensive demands of these models, requiring significant computational power for both training and inference, limit their deployment to high-performance servers. Additionally, the extensive calculation requirements of the models often lead to increased latency in response times. With the increasing need for LLMs to operate efficiently on CPUs, research about lightweight models that are optimized for CPU inference has emerged. In this work, we introduce GEB-1.3B, a lightweight LLM trained on 550 billion tokens in both Chinese and English languages. We employ novel training techniques, including ROPE, Group-Query-Attention, and FlashAttention-2, to accelerate training while maintaining model performance. Additionally, we fine-tune the model using 10 million samples of instruction data to enhance alignment. GEB-1.3B exhibits outstanding performance on general benchmarks such as MMLU, C-Eval, and CMMLU, outperforming comparative models such as MindLLM-1.3B and TinyLLaMA-1.1B. Notably, the FP32 version of GEB-1.3B achieves commendable inference times on CPUs, with ongoing efforts to further enhance speed through advanced quantization techniques. The release of GEB-1.3B as an open-source model marks a significant contribution to the development of lightweight LLMs, promising to foster further research and innovation in the field. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 464,133 |
1908.11852 | New stable method to solve heat conduction problems in extremely large systems | We present a new explicit and stable numerical algorithm to solve the homogeneous heat equation. We illustrate the performance of the new method in the cases of two 2D systems with highly inhomogeneous random parameters. Spatial discretization of these problems results in huge and stiff ordinary differential equation systems, which can be solved by our novel method faster than by explicit or the commonly used implicit methods. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 143,492 |
1404.7109 | Security Thresholds of Multicarrier Continuous-Variable Quantum Key Distribution | We prove the secret key rate formulas and derive security threshold parameters of multicarrier continuous-variable quantum key distribution (CVQKD). In a multicarrier CVQKD scenario, the Gaussian input quantum states of the legal parties are granulated into Gaussian subcarrier CVs (continuous-variables). The multicarrier communication formulates Gaussian sub-channels from the physical quantum channel, each dedicated to the transmission of a subcarrier CV. The Gaussian subcarriers are decoded by a unitary CV operation, which results in the recovered single-carrier Gaussian CVs. We derive the formulas through the AMQD (adaptive multicarrier quadrature division) scheme, the SVD-assisted (singular value decomposition) AMQD, and the multiuser AMQD-MQA (multiuser quadrature allocation). We prove that the multicarrier CVQKD leads to improved secret key rates and higher tolerable excess noise in comparison to single-carrier CVQKD. We derive the private classical capacity of a Gaussian sub-channel and the security parameters of an optimal Gaussian collective attack in the multicarrier setting. We reveal the secret key rate formulas for one-way and two-way multicarrier CVQKD protocols, assuming homodyne and heterodyne measurements and direct and reverse reconciliation. The results reveal the physical boundaries of physically allowed Gaussian attacks in a multicarrier CVQKD scenario and confirm that the improved transmission rates lead to enhanced secret key rates and security thresholds. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 32,664 |
2207.05703 | Tell Me the Evidence? Dual Visual-Linguistic Interaction for Answer Grounding | Answer grounding aims to reveal the visual evidence for visual question answering (VQA), which entails highlighting relevant positions in the image when answering questions about images. Previous attempts typically tackle this problem using pretrained object detectors, but without the flexibility for objects not in the predefined vocabulary. However, these black-box methods solely concentrate on the linguistic generation, ignoring the visual interpretability. In this paper, we propose Dual Visual-Linguistic Interaction (DaVI), a novel unified end-to-end framework with the capability for both linguistic answering and visual grounding. DaVI innovatively introduces two visual-linguistic interaction mechanisms: 1) visual-based linguistic encoder that understands questions incorporated with visual features and produces linguistic-oriented evidence for further answer decoding, and 2) linguistic-based visual decoder that focuses visual features on the evidence-related regions for answer grounding. This way, our approach ranked the 1st place in the answer grounding track of 2022 VizWiz Grand Challenge. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 307,633 |
2304.01804 | Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification | Due to the expensive costs of collecting labels in multi-label classification datasets, partially annotated multi-label classification has become an emerging field in computer vision. One baseline approach to this task is to assume unobserved labels as negative labels, but this assumption induces label noise as a form of false negative. To understand the negative impact caused by false negative labels, we study how these labels affect the model's explanation. We observe that the explanation of two models, trained with full and partial labels each, highlights similar regions but with different scaling, where the latter tends to have lower attribution scores. Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels. Even with the conceptually simple approach, the multi-label classification performance improves by a large margin in three different datasets on a single positive label setting and one on a large-scale partial label setting. Code is available at https://github.com/youngwk/BridgeGapExplanationPAMC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 356,211 |
2312.11923 | IPAD: Iterative, Parallel, and Diffusion-based Network for Scene Text Recognition | Nowadays, scene text recognition has attracted more and more attention due to its diverse applications. Most state-of-the-art methods adopt an encoder-decoder framework with the attention mechanism, autoregressively generating text from left to right. Despite the convincing performance, this sequential decoding strategy constrains inference speed. Conversely, non-autoregressive models provide faster, simultaneous predictions but often sacrifice accuracy. Although utilizing an explicit language model can improve performance, it burdens the computational load. Besides, separating linguistic knowledge from vision information may harm the final prediction. In this paper, we propose an alternative solution, using a parallel and iterative decoder that adopts an easy-first decoding strategy. Furthermore, we regard text recognition as an image-based conditional text generation task and utilize the discrete diffusion strategy, ensuring exhaustive exploration of bidirectional contextual information. Extensive experiments demonstrate that the proposed approach achieves superior results on the benchmark datasets, including both Chinese and English text images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 416,779 |
2210.03428 | Missing Modality meets Meta Sampling (M3S): An Efficient Universal Approach for Multimodal Sentiment Analysis with Missing Modality | Multimodal sentiment analysis (MSA) is an important way of observing mental activities with the help of data captured from multiple modalities. However, due to the recording or transmission error, some modalities may include incomplete data. Most existing works that address missing modalities usually assume a particular modality is completely missing and seldom consider a mixture of missing across multiple modalities. In this paper, we propose a simple yet effective meta-sampling approach for multimodal sentiment analysis with missing modalities, namely Missing Modality-based Meta Sampling (M3S). To be specific, M3S formulates a missing modality sampling strategy into the modal agnostic meta-learning (MAML) framework. M3S can be treated as an efficient add-on training component on existing models and significantly improve their performances on multimodal data with a mixture of missing modalities. We conduct experiments on IEMOCAP, SIMS and CMU-MOSI datasets, and superior performance is achieved compared with recent state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 322,030 |
2110.05442 | Neural Algorithmic Reasoners are Implicit Planners | Implicit planning has emerged as an elegant technique for combining learned models of the world with end-to-end model-free reinforcement learning. We study the class of implicit planners inspired by value iteration, an algorithm that is guaranteed to yield perfect policies in fully-specified tabular environments. We find that prior approaches either assume that the environment is provided in such a tabular form -- which is highly restrictive -- or infer "local neighbourhoods" of states to run value iteration over -- for which we discover an algorithmic bottleneck effect. This effect is caused by explicitly running the planning algorithm based on scalar predictions in every state, which can be harmful to data efficiency if such scalars are improperly predicted. We propose eXecuted Latent Value Iteration Networks (XLVINs), which alleviate the above limitations. Our method performs all planning computations in a high-dimensional latent space, breaking the algorithmic bottleneck. It maintains alignment with value iteration by carefully leveraging neural graph-algorithmic reasoning and contrastive self-supervised learning. Across eight low-data settings -- including classical control, navigation and Atari -- XLVINs provide significant improvements to data efficiency against value iteration-based implicit planners, as well as relevant model-free baselines. Lastly, we empirically verify that XLVINs can closely align with value iteration. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,281 |
1508.03671 | Fuzzy Longest Common Subsequence Matching With FCM Using R | Capturing the interdependencies between real valued time series can be achieved by finding common similar patterns. The abstraction of time series makes the process of finding similarities closer to the way as humans do. Therefore, the abstraction by means of a symbolic levels and finding the common patterns attracts researchers. One particular algorithm, Longest Common Subsequence, has been used successfully as a similarity measure between two sequences including real valued time series. In this paper, we propose Fuzzy Longest Common Subsequence matching for time series. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 46,025 |