| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.12582 | Beyond Low Earth Orbit: Biological Research, Artificial Intelligence, and Self-Driving Labs | Space biology research aims to understand fundamental effects of spaceflight on organisms, develop foundational knowledge to support deep space exploration, and ultimately bioengineer spacecraft and habitats to stabilize the ecosystem of plants, crops, microbes, animals, and humans for sustained multi-planetary life. To advance these aims, the field leverages experiments, platforms, data, and model organisms from both spaceborne and ground-analog studies. As research is extended beyond low Earth orbit, experiments and platforms must be maximally autonomous, light, agile, and intelligent to expedite knowledge discovery. Here we present a summary of recommendations from a workshop organized by the National Aeronautics and Space Administration on artificial intelligence, machine learning, and modeling applications which offer key solutions toward these space biology challenges. In the next decade, the synthesis of artificial intelligence into the field of space biology will deepen the biological understanding of spaceflight effects, facilitate predictive modeling and analytics, support maximally autonomous and reproducible experiments, and efficiently manage spaceborne data and metadata, all with the goal to enable life to thrive in deep space. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,006 |
2409.12500 | LLMR: Knowledge Distillation with a Large Language Model-Induced Reward | Large language models have become increasingly popular and demonstrated remarkable performance in various natural language processing (NLP) tasks. However, these models are typically computationally expensive and difficult to deploy in resource-constrained environments. In this paper, we propose LLMR, a novel knowledge distillation (KD) method based on a reward function induced from large language models. We conducted experiments on multiple datasets in the dialogue generation and summarization tasks. Empirical results demonstrate that our LLMR approach consistently outperforms traditional KD methods across different tasks and datasets. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 489,620 |
1507.01716 | Temporal-varying failures of nodes in networks | We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure-centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure-centrality which are valid for temporal-varying failures and we dwell on the possibility of externally changing the relative importance of nodes in a given network, by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape in a node is larger than the one estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 44,898 |
1111.6087 | Fast Distributed Computation of Distances in Networks | This paper presents a distributed algorithm to simultaneously compute the diameter, radius and node eccentricity in all nodes of a synchronous network. Such topological information may be useful as input to configure other algorithms. Previous approaches have been modular, progressing in sequential phases using building blocks such as BFS tree construction, thus incurring longer executions than strictly required. We present an algorithm that, by timely propagation of available estimations, achieves a faster convergence to the correct values. We show local criteria for detecting convergence in each node. The algorithm avoids the creation of BFS trees and simply manipulates sets of node ids and hop counts. For the worst scenario of variable start times, each node i with eccentricity ecc(i) can compute: the node eccentricity in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the radius in diam(G)+ecc(i)+2*radius(G) rounds. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 13,181 |
2312.06633 | Examining the Effect of Implementation Factors on Deep Learning Reproducibility | Reproducing published deep learning papers to validate their conclusions can be difficult due to sources of irreproducibility. We investigate the impact that implementation factors have on the results and how they affect the reproducibility of deep learning studies. Three deep learning experiments were run five times each on 13 different hardware environments and four different software environments. The analysis of the 780 combined results showed a greater than 6% accuracy range on the same deterministic examples, introduced by hardware or software environment variations alone. To account for these implementation factors, researchers should run their experiments multiple times in different hardware and software environments to verify that their conclusions are not affected. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 414,599 |
2412.14088 | Joint Perception and Prediction for Autonomous Driving: A Survey | Perception and prediction modules are critical components of autonomous driving systems, enabling vehicles to navigate safely through complex environments. The perception module is responsible for perceiving the environment, including static and dynamic objects, while the prediction module is responsible for predicting the future behavior of these objects. These modules are typically divided into three tasks: object detection, object tracking, and motion prediction. Traditionally, these tasks are developed and optimized independently, with outputs passed sequentially from one to the next. However, this approach has significant limitations: computational resources are not shared across tasks, the lack of joint optimization can amplify errors as they propagate throughout the pipeline, and uncertainty is rarely propagated between modules, resulting in significant information loss. To address these challenges, the joint perception and prediction paradigm has emerged, integrating perception and prediction into a unified model through multi-task learning. This strategy not only overcomes the limitations of previous methods, but also enables the three tasks to have direct access to raw sensor data, allowing richer and more nuanced environmental interpretations. This paper presents the first comprehensive survey of joint perception and prediction for autonomous driving. We propose a taxonomy that categorizes approaches based on input representation, scene context modeling, and output representation, highlighting their contributions and limitations. Additionally, we present a qualitative analysis and quantitative comparison of existing methods. Finally, we discuss future research directions based on identified gaps in the state-of-the-art. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 518,567 |
2104.03899 | Unsupervised Speech Representation Learning for Behavior Modeling using Triplet Enhanced Contextualized Networks | Speech encodes a wealth of information related to human behavior and has been used in a variety of automated behavior recognition tasks. However, extracting behavioral information from speech remains challenging, in part due to inadequate training data resources stemming from the often low occurrence frequencies of specific behavioral patterns. Moreover, supervised behavioral modeling typically relies on domain-specific construct definitions and corresponding manually-annotated data, making generalization across domains difficult. In this paper, we exploit the stationary properties of human behavior within an interaction and present a representation learning method to capture behavioral information from speech in an unsupervised way. We hypothesize that nearby segments of speech share the same behavioral context and hence map onto similar underlying behavioral representations. We present an encoder-decoder based Deep Contextualized Network (DCN) as well as a Triplet-Enhanced DCN (TE-DCN) framework to capture the behavioral context and derive a manifold representation, where speech frames with similar behaviors are closer while frames of different behaviors maintain larger distances. The models are trained on movie audio data and validated on diverse domains including on a couples therapy corpus and other publicly collected data (e.g., stand-up comedy). With encouraging results, our proposed framework shows the feasibility of unsupervised learning within cross-domain behavioral modeling. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 229,216 |
2306.05881 | Nonlinear Stability Assessment Of Type-4 Wind Turbines During Unbalanced Grid Faults Based On Reduced-Order Model | As the number of converter-based renewable generations in the power system is increasing, the inertia provided by the synchronous generators is reducing, which in turn is reducing the stability margins of the power system. In order to assess the large-signal stability, it is essential to model the wind power plant connections accurately. However, the actual EMT models are often unavailable, black-boxed, or computationally too heavy to model in detail. Hence, simplified reduced-order models (ROMs) resembling the actual system behaviour have gained prominence in stability studies. In this regard, an improved WT ROM was proposed to investigate large signal stability during unbalanced grid faults. The methodology presents a systematic way to model the coupled sequence components of the WT ROM for various grid faults. Based on the studies carried out in this paper, it is observed that, following unbalanced grid disturbances, the proposed WT ROM correctly tracks the angle and frequency, and its trajectory is a good match when compared to a detailed simulation model in PSCAD. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 372,371 |
2301.02781 | Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules | Knowledge graphs (KGs) play a crucial role in many applications, such as question answering, but incompleteness is an urgent issue for their broad application. Much research in knowledge graph completion (KGC) has been performed to resolve this issue. The methods of KGC can be classified into two major categories: rule-based reasoning and embedding-based reasoning. The former has high accuracy and good interpretability, but a major challenge is to obtain effective rules on large-scale KGs. The latter has good efficiency and scalability, but it relies heavily on data richness and cannot fully use domain knowledge in the form of logical rules. We propose a novel method that injects rules and learns representations iteratively to take full advantage of rules and embeddings. Specifically, we model the conclusions of rule groundings as 0-1 variables and use a rule confidence regularizer to remove the uncertainty of the conclusions. The proposed approach has the following advantages: 1) It combines the benefits of both rules and knowledge graph embeddings (KGEs) and achieves a good balance between efficiency and scalability. 2) It uses an iterative method to continuously improve KGEs and remove incorrect rule conclusions. Evaluations on two public datasets show that our method outperforms the current state-of-the-art methods, improving performance by 2.7% and 4.3% in mean reciprocal rank (MRR). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 339,594 |
1010.0189 | Reed-Muller Codes for Peak Power Control in Multicarrier CDMA | Reed-Muller codes are studied for peak power control in multicarrier code-division multiple access (MC-CDMA) communication systems. In a coded MC-CDMA system, the information data multiplexed from users is encoded by a Reed-Muller subcode and the codeword is fully-loaded to Walsh-Hadamard spreading sequences. The polynomial representation of a coded MC-CDMA signal is established for theoretical analysis of the peak-to-average power ratio (PAPR). The Reed-Muller subcodes are defined in a recursive way by the Boolean functions providing the transmitted MC-CDMA signals with the bounded PAPR as well as the error correction capability. A connection between the code rates and the maximum PAPR is theoretically investigated in the coded MC-CDMA. Simulation results present the statistical evidence that the PAPR of the coded MC-CDMA signal is not only theoretically bounded, but also statistically reduced. In particular, the coded MC-CDMA solves the major PAPR problem of uncoded MC-CDMA by dramatically reducing its PAPR for the small number of users. Finally, the theoretical and statistical studies show that the Reed-Muller subcodes are effective coding schemes for peak power control in MC-CDMA with small and moderate numbers of users, subcarriers, and spreading factors. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,749 |
2301.08102 | Geometric path augmentation for inference of sparsely observed stochastic nonlinear systems | Stochastic evolution equations describing the dynamics of systems under the influence of both deterministic and stochastic forces are prevalent in all fields of science. Yet, identifying these systems from sparse-in-time observations remains a challenging endeavour. Existing approaches focus either on the temporal structure of the observations by relying on conditional expectations, discarding thereby information ingrained in the geometry of the system's invariant density; or employ geometric approximations of the invariant density, which are nevertheless restricted to systems with conservative forces. Here we propose a method that reconciles these two paradigms. We introduce a new data-driven path augmentation scheme that takes the local observation geometry into account. By employing non-parametric inference on the augmented paths, we can efficiently identify the deterministic driving forces of the underlying system for systems observed at low sampling rates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 341,094 |
1901.10423 | A Minimalistic Approach to Segregation in Robot Swarms | We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 120,014 |
1910.09952 | Convolutional Neural Networks for Space-Time Block Coding Recognition | We apply the latest advances in machine learning with deep neural networks to the tasks of radio modulation recognition, channel coding recognition, and spectrum monitoring. This paper first proposes an identification algorithm for space-time block coding of a signal. The feature between spatial multiplexing and Alamouti signals is extracted by adapting convolutional neural networks after preprocessing the received sequence. Unlike other algorithms, this method requires no prior information of channel coefficients and noise power, and consequently is well-suited for noncooperative contexts. Results show that the proposed algorithm performs well even at a low signal-to-noise ratio. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 150,348 |
2311.13507 | Applying Dimensionality Reduction as Precursor to LSTM-CNN Models for Classifying Imagery and Motor Signals in ECoG-Based BCIs | Motor impairments, frequently caused by neurological incidents like strokes or traumatic brain injuries, present substantial obstacles in rehabilitation therapy. This research aims to elevate the field by optimizing motor imagery classification algorithms within Brain-Computer Interfaces (BCIs). By improving the efficiency of BCIs, we offer a novel approach that holds significant promise for enhancing motor rehabilitation outcomes. Utilizing unsupervised techniques for dimensionality reduction, namely Uniform Manifold Approximation and Projection (UMAP) coupled with K-Nearest Neighbors (KNN), we evaluate the necessity of employing supervised methods such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs) for classification tasks. Importantly, participants who exhibited high KNN scores following UMAP dimensionality reduction also achieved high accuracy in supervised deep learning (DL) models. Due to individualized model requirements and massive neural training data, dimensionality reduction becomes an effective preprocessing step that minimizes the need for extensive data labeling and supervised deep learning techniques. This approach has significant implications not only for targeted therapies in motor dysfunction but also for addressing regulatory, safety, and reliability concerns in the rapidly evolving BCI field. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 409,759 |
1906.00912 | Temporal Density Extrapolation using a Dynamic Basis Approach | Density estimation is a versatile technique underlying many data mining tasks and techniques, ranging from exploration and presentation of static data, to probabilistic classification, or identifying changes or irregularities in streaming data. With the pervasiveness of embedded systems and digitisation, this latter type of streaming and evolving data becomes more important. Nevertheless, research in density estimation has so far focused on stationary data, leaving the task of extrapolating and predicting density at time points outside a training window an open problem. For this task, Temporal Density Extrapolation (TDX) is proposed. This novel method models and predicts gradual monotonous changes in a distribution. It is based on the expansion of basis functions, whose weights are modelled as functions of compositional data over time by using an isometric log-ratio transformation. Extrapolated density estimates are then obtained by extrapolating the weights to the requested time point, and querying the density from the basis functions with back-transformed weights. Our approach aims for broad applicability by neither being restricted to a specific parametric distribution, nor relying on cluster structure in the data. It requires only two additional extrapolation-specific parameters, for which reasonable defaults exist. Experimental evaluation on various data streams, synthetic as well as from the real-world domains of credit scoring and environmental health, shows that the model manages to capture monotonous drift patterns accurately and better than existing methods. In doing so, it requires no more than 1.5 times the run time of a corresponding static density estimation approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,542 |
1907.04924 | Infer Implicit Contexts in Real-time Online-to-Offline Recommendation | Understanding users' context is essential for successful recommendations, especially for Online-to-Offline (O2O) recommendation, such as Yelp, Groupon, and Koubei. Different from traditional recommendation where individual preference is mostly static, O2O recommendation should be dynamic to capture variation of users' purposes across time and location. However, precisely inferring users' real-time contexts information, especially those implicit ones, is extremely difficult, and it is a central challenge for O2O recommendation. In this paper, we propose a new approach, called Mixture Attentional Constrained Denoise AutoEncoder (MACDAE), to infer implicit contexts and consequently, to improve the quality of real-time O2O recommendation. In MACDAE, we first leverage the interaction among users, items, and explicit contexts to infer users' implicit contexts, then combine the learned implicit-context representation into an end-to-end model to make the recommendation. MACDAE works quite well in the real system. We conducted both offline and online evaluations of the proposed approach. Experiments on several real-world datasets (Yelp, Dianping, and Koubei) show our approach could achieve significant improvements over state-of-the-arts. Furthermore, online A/B test suggests a 2.9% increase for click-through rate and 5.6% improvement for conversion rate in real-world traffic. Our model has been deployed in the product of "Guess You Like" recommendation in Koubei. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 138,235 |
2306.02015 | Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics | Advanced experimental measurements are crucial for driving theoretical developments and unveiling novel phenomena in condensed matter and material physics, which often suffer from the scarcity of facility resources and increasing complexities. To address the limitations, we introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED), exemplified with x-ray photon fluctuation spectroscopy (XPFS) measurements for spin fluctuations. Our method employs a neural network model for large-scale spin dynamics simulations for precise distribution and utility calculations in BOED. The capability of automatic differentiation from the neural network model is further leveraged for more robust and accurate parameter estimation. Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time. Although focusing on XPFS and spin fluctuations, our method can be adapted to other experiments, facilitating more efficient data collection and accelerating scientific discoveries. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 370,719 |
2501.01981 | Optical Character Recognition using Convolutional Neural Networks for Ashokan Brahmi Inscriptions | This research paper delves into the development of an Optical Character Recognition (OCR) system for the recognition of Ashokan Brahmi characters using Convolutional Neural Networks. It utilizes a comprehensive dataset of character images to train the models, along with data augmentation techniques to optimize the training process. Furthermore, the paper incorporates image preprocessing to remove noise, as well as image segmentation to facilitate line and character segmentation. The study mainly focuses on three pre-trained CNNs, namely LeNet, VGG-16, and MobileNet and compares their accuracy. Transfer learning was employed to adapt the pre-trained models to the Ashokan Brahmi character dataset. The findings reveal that MobileNet outperforms the other two models in terms of accuracy, achieving a validation accuracy of 95.94% and validation loss of 0.129. The paper provides an in-depth analysis of the implementation process using MobileNet and discusses the implications of the findings. The use of OCR for character recognition is of significant importance in the field of epigraphy, specifically for the preservation and digitization of ancient scripts. The results of this research paper demonstrate the effectiveness of using pre-trained CNNs for the recognition of Ashokan Brahmi characters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 522,283 |
2008.02421 | Cross-Model Image Annotation Platform with Active Learning | We have seen significant leapfrog advancement in machine learning in recent decades. The central idea of machine learnability lies in constructing learning algorithms that learn from good data. The increasing availability of public data has also accelerated the growth of AI in recent years. In the domain of computer vision, the quality of image data arises from the accuracy of image annotation. Labeling large volumes of image data is a daunting and tedious task. This work presents an End-to-End pipeline tool for object annotation and recognition, aimed at enabling quick image labeling. We have developed a modular image annotation platform which seamlessly incorporates assisted image annotation (annotation assistance), active learning, and model training and evaluation. Our approach provides a number of advantages over current image annotation tools. Firstly, the annotation assistance utilizes reference hierarchy and reference images to locate the objects in the images, thus reducing the need for annotating the whole object. Secondly, images can be annotated using polygon points, allowing for objects of any shape to be annotated. Thirdly, it is also interoperable across several image models, and the tool provides an interface for object model training and evaluation across a series of pre-trained models. We have tested the platform, which embeds several benchmark deep learning models. The highest accuracy achieved is 74%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 190,606 |
2107.09770 | Faster Matchings via Learned Duals | A recent line of research investigates how algorithms can be augmented with machine-learned predictions to overcome worst case lower bounds. This area has revealed interesting algorithmic insights into problems, with particular success in the design of competitive online algorithms. However, the question of improving algorithm running times with predictions has largely been unexplored. We take a first step in this direction by combining the idea of machine-learned predictions with the idea of "warm-starting" primal-dual algorithms. We consider one of the most important primitives in combinatorial optimization: weighted bipartite matching and its generalization to $b$-matching. We identify three key challenges when using learned dual variables in a primal-dual algorithm. First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions. Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution. Finally, such predictions are useful only if they can be learned, so we show that the problem of learning duals for matching has low sample complexity. We validate our theoretical findings through experiments on both real and synthetic data. As a result we give a rigorous, practical, and empirically effective method to compute bipartite matchings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 247,117 |
2310.05057 | BRAINTEASER: Lateral Thinking Puzzles for Large Language Models | The success of language models has inspired the NLP community to attend to tasks that require implicit and complex reasoning, relying on human-like commonsense mechanisms. While such vertical thinking tasks have been relatively popular, lateral thinking puzzles have received little attention. To bridge this gap, we devise BRAINTEASER: a multiple-choice Question Answering task designed to test the model's ability to exhibit lateral thinking and defy default commonsense associations. We design a three-step procedure for creating the first lateral thinking benchmark, consisting of data collection, distractor generation, and generation of adversarial examples, leading to 1,100 puzzles with high-quality annotations. To assess the consistency of lateral reasoning by models, we enrich BRAINTEASER based on a semantic and contextual reconstruction of its questions. Our experiments with state-of-the-art instruction- and commonsense language models reveal a significant gap between human and model performance, which is further widened when consistency across adversarial formats is considered. We make all of our code and data available to stimulate work on developing and evaluating lateral thinking models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 397,959 |
2210.09721 | An incremental input-to-state stability condition for a generic class of recurrent neural networks | This paper proposes a novel sufficient condition for the incremental input-to-state stability of a generic class of recurrent neural networks (RNNs). The established condition is compared with others available in the literature, showing to be less conservative. Moreover, it can be applied for the design of incremental input-to-state stable RNN-based control systems, resulting in a linear matrix inequality constraint for some specific RNN architectures. The formulation of nonlinear observers for the considered system class, as well as the design of control schemes with explicit integral action, are also investigated. The theoretical results are validated through simulation on a referenced nonlinear system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 324,639 |
1007.3906 | A colorful origin for the genetic code: Information theory, statistical mechanics and the emergence of molecular codes | The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,101 |
2012.03447 | An Improved Benders Decomposition Algorithm for Steady-State Dispatch Problem in an Integrated Electricity-Gas System | Optimally operating an integrated electricity-gas system (IEGS) is significant for the energy sector. However, the IEGS operation model's nonconvexity makes it challenging to solve the optimal dispatch problem in the IEGS. This letter proposes an improved Benders decomposition (IBD) algorithm catering to a commonly used steady-state dispatch model of the IEGS. This IBD algorithm leverages a refined decomposition structure where the subproblems become linear and ready to be solved in parallel. We analytically compare our IBD algorithm with an existing Benders decomposition algorithm and a typical piecewise linearization method. Case studies have substantiated the higher computational efficiency of our IBD algorithm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 210,117 |
2303.13703 | End-to-End Diffusion Latent Optimization Improves Classifier Guidance | Classifier guidance -- using the gradients of an image classifier to steer the generations of a diffusion model -- has the potential to dramatically expand the creative control over image generation and editing. However, currently classifier guidance requires either training new noise-aware models to obtain accurate gradients or using a one-step denoising approximation of the final generation, which leads to misaligned gradients and sub-optimal control. We highlight this approximation's shortcomings and propose a novel guidance method: Direct Optimization of Diffusion Latents (DOODL), which enables plug-and-play guidance by optimizing diffusion latents w.r.t. the gradients of a pre-trained classifier on the true generated pixels, using an invertible diffusion process to achieve memory-efficient backpropagation. Showcasing the potential of more precise guidance, DOODL outperforms one-step classifier guidance on computational and human evaluation metrics across different forms of guidance: using CLIP guidance to improve generations of complex prompts from DrawBench, using fine-grained visual classifiers to expand the vocabulary of Stable Diffusion, enabling image-conditioned generation with a CLIP visual encoder, and improving image aesthetics using an aesthetic scoring network. Code at https://github.com/salesforce/DOODL. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 353,784 |
2104.14133 | Text-to-Text Multi-view Learning for Passage Re-ranking | Recently, much progress in natural language processing has been driven by deep contextualized representations pretrained on large corpora. Typically, the fine-tuning on these pretrained models for a specific downstream task is based on single-view learning, which is however inadequate as a sentence can be interpreted differently from different perspectives. Therefore, in this work, we propose a text-to-text multi-view learning framework by incorporating an additional view -- the text generation view -- into a typical single-view passage ranking model. Empirically, the proposed approach is of help to the ranking performance compared to its single-view counterpart. Ablation studies are also reported in the paper. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 232,730 |
2005.02068 | Establishing Baselines for Text Classification in Low-Resource Languages | While transformer-based finetuning techniques have proven effective in tasks that involve low-resource, low-data environments, a lack of properly established baselines and benchmark datasets make it hard to compare different approaches that are aimed at tackling the low-resource setting. In this work, we provide three contributions. First, we introduce two previously unreleased datasets as benchmark datasets for text classification and low-resource multilabel text classification for the low-resource language Filipino. Second, we pretrain better BERT and DistilBERT models for use within the Filipino setting. Third, we introduce a simple degradation test that benchmarks a model's resistance to performance degradation as the number of training samples are reduced. We analyze our pretrained model's degradation speeds and look towards the use of this method for comparing models aimed at operating within the low-resource setting. We release all our models and datasets for the research community to use. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 175,748 |
1707.01219 | Like What You Like: Knowledge Distill via Neuron Selectivity Transfer | Although deep neural networks have demonstrated extraordinary power in various applications, their superior performance comes at the expense of high storage and computational costs. Consequently, the acceleration and compression of neural networks have attracted much attention recently. Knowledge Transfer (KT), which aims at training a smaller student network by transferring knowledge from a larger teacher model, is one of the popular solutions. In this paper, we propose a novel knowledge transfer method by treating it as a distribution matching problem. Particularly, we match the distributions of neuron selectivity patterns between teacher and student networks. To achieve this goal, we devise a new KT loss function by minimizing the Maximum Mean Discrepancy (MMD) metric between these distributions. Combined with the original loss function, our method can significantly improve the performance of student networks. We validate the effectiveness of our method across several datasets, and further combine it with other KT methods to explore the best possible results. Last but not least, we fine-tune the model to other tasks such as object detection. The results are also encouraging, which confirm the transferability of the learned features. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 76,497 |
2109.08913 | Intelligent Reflecting Surface Aided MIMO with Cascaded LoS Links: Channel Modelling and Full Multiplexing Region | This work studies the modelling and the optimization of intelligent reflecting surface (IRS) assisted multiple-input multiple-output (MIMO) systems through cascaded line-of-sight (LoS) links. In Part I of this work, we build up a new IRS-aided MIMO channel model, named the cascaded LoS MIMO channel. The proposed channel model consists of a transmitter (Tx) and a receiver (Rx) both equipped with uniform linear arrays, and an IRS is used to enable communications between the transmitter and the receiver through the LoS links seen by the IRS. When modeling the reflection of electromagnetic waves at the IRS, we take into account the curvature of the wavefront on different reflecting elements. Based on the established model, we study the spatial multiplexing capability of the cascaded LoS MIMO system. We introduce the notion of full multiplexing region (FMR) for the cascaded LoS MIMO channel, where the FMR is the union of Tx-IRS and IRS-Rx distance pairs that enable full multiplexing communication. Under a special passive beamforming strategy named reflective focusing, we derive an inner bound of the FMR, and provide the corresponding orientation settings of the antenna arrays that enable full multiplexing. Based on the proposed channel model and reflective focusing, the mutual information maximization problem is discussed in Part II. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 256,072 |
2102.00457 | MultiRocket: Multiple pooling operators and transformations for fast and effective time series classification | We propose MultiRocket, a fast time series classification (TSC) algorithm that achieves state-of-the-art performance with a tiny fraction of the time and without the complex ensembling structure of many state-of-the-art methods. MultiRocket improves on MiniRocket, one of the fastest TSC algorithms to date, by adding multiple pooling operators and transformations to improve the diversity of the features generated. In addition to processing the raw input series, MultiRocket also applies first order differences to transform the original series. Convolutions are applied to both representations, and four pooling operators are applied to the convolution outputs. When benchmarked using the University of California Riverside TSC benchmark datasets, MultiRocket is significantly more accurate than MiniRocket, and competitive with the best ranked current method in terms of accuracy, HIVE-COTE 2.0, while being orders of magnitude faster. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 217,789 |
2302.13770 | Mask Reference Image Quality Assessment | Understanding semantic information is an essential step in knowing what is being learned in both full-reference (FR) and no-reference (NR) image quality assessment (IQA) methods. However, especially for many severely distorted images, even if there is an undistorted image as a reference (FR-IQA), it is difficult to perceive the lost semantic and texture information of distorted images directly. In this paper, we propose a Mask Reference IQA (MR-IQA) method that masks specific patches of a distorted image and supplements missing patches with the reference image patches. In this way, our model only needs to input the reconstructed image for quality assessment. First, we design a mask generator to select the best candidate patches from reference images and supplement the lost semantic information in distorted images, thus providing more reference for quality assessment; in addition, the different masked patches imply different data augmentations, which favors model training and reduces overfitting. Second, we provide a Mask Reference Network (MRNet): the dedicated modules can prevent disturbances due to masked patches and help eliminate the patch discontinuity in the reconstructed image. Our method achieves state-of-the-art performances on the benchmark KADID-10k, LIVE and CSIQ datasets and has better generalization performance across datasets. The code and results are available in the supplementary material. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 348,053 |
2307.06108 | Fast Decoding of Lifted Interleaved Linearized Reed-Solomon Codes for Multishot Network Coding | Martínez-Peñas and Kschischang (IEEE Trans. Inf. Theory, 2019) proposed lifted linearized Reed-Solomon codes as suitable codes for error control in multishot network coding. We show how to construct and decode lifted interleaved linearized Reed-Solomon (LILRS) codes. Compared to the construction by Martínez-Peñas and Kschischang, interleaving allows one to increase the decoding region significantly and decreases the overhead due to the lifting (i.e., increases the code rate), at the cost of an increased packet size. We propose two decoding schemes for LILRS codes that are both capable of correcting insertions and deletions beyond half the minimum distance of the code by either allowing a list or a small decoding failure probability. We propose a probabilistic unique Loidreau-Overbeck-like decoder for LILRS codes and an efficient interpolation-based decoding scheme that can be either used as a list decoder (with exponential worst-case list size) or as a probabilistic unique decoder. We derive upper bounds on the decoding failure probability of the probabilistic-unique decoders which show that the decoding failure probability is very small for most channel realizations up to the maximal decoding radius. The tightness of the bounds is verified by Monte Carlo simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 378,970 |
2307.05241 | Does pre-training on brain-related tasks results in better deep-learning-based brain age biomarkers? | Brain age prediction using neuroimaging data has shown great potential as an indicator of overall brain health and successful aging, as well as a disease biomarker. Deep learning models have been established as reliable and efficient brain age estimators, being trained to predict the chronological age of healthy subjects. In this paper, we investigate the impact of a pre-training step on deep learning models for brain age prediction. More precisely, instead of the common approach of pre-training on natural imaging classification, we propose pre-training the models on brain-related tasks, which led to state-of-the-art results in our experiments on ADNI data. Furthermore, we validate the resulting brain age biomarker on images of patients with mild cognitive impairment and Alzheimer's disease. Interestingly, our results indicate that better-performing deep learning models in terms of brain age prediction on healthy patients do not result in more reliable biomarkers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 378,668 |
2409.18866 | MCUBench: A Benchmark of Tiny Object Detectors on MCUs | We introduce MCUBench, a benchmark featuring over 100 YOLO-based object detection models evaluated on the VOC dataset across seven different MCUs. This benchmark provides detailed data on average precision, latency, RAM, and Flash usage for various input resolutions and YOLO-based one-stage detectors. By conducting a controlled comparison with a fixed training pipeline, we collect comprehensive performance metrics. Our Pareto-optimal analysis shows that integrating modern detection heads and training techniques allows various YOLO architectures, including legacy models like YOLOv3, to achieve a highly efficient tradeoff between mean Average Precision (mAP) and latency. MCUBench serves as a valuable tool for benchmarking the MCU performance of contemporary object detectors and aids in model selection based on specific constraints. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 492,436 |
2202.06374 | Holdouts set for safe predictive model updating | Predictive risk scores for adverse outcomes are increasingly crucial in guiding health interventions. Such scores may need to be periodically updated due to change in the distributions they model. However, directly updating risk scores used to guide intervention can lead to biased risk estimates. To address this, we propose updating using a `holdout set' - a subset of the population that does not receive interventions guided by the risk score. Balancing the holdout set size is essential to ensure good performance of the updated risk score whilst minimising the number of held out samples. We prove that this approach reduces adverse outcome frequency to an asymptotically optimal level and argue that often there is no competitive alternative. We describe conditions under which an optimal holdout size (OHS) can be readily identified, and introduce parametric and semi-parametric algorithms for OHS estimation. We apply our methods to the ASPRE risk score for pre-eclampsia to recommend a plan for updating it in the presence of change in the underlying data distribution. We show that, in order to minimise the number of pre-eclampsia cases over time, this is best achieved using a holdout set of around 10,000 individuals. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,199 |
1707.06334 | Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach | Learning cooperative policies for multi-agent systems is often challenged by partial observability and a lack of coordination. In some settings, the structure of a problem allows a distributed solution with limited communication. Here, we consider a scenario where no communication is available, and instead we learn local policies for all agents that collectively mimic the solution to a centralized multi-agent static optimization problem. Our main contribution is an information theoretic framework based on rate distortion theory which facilitates analysis of how well the resulting fully decentralized policies are able to reconstruct the optimal solution. Moreover, this framework provides a natural extension that addresses which nodes an agent should communicate with to improve the performance of its individual policy. | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | false | false | 77,400 |
1802.02721 | Deep Image Super Resolution via Natural Image Priors | Single image super-resolution (SR) via deep learning has recently gained significant attention in the literature. Convolutional neural networks (CNNs) are typically learned to represent the mapping between low-resolution (LR) and high-resolution (HR) images/patches with the help of training examples. Most existing deep networks for SR produce high quality results when training data is abundant. However, their performance degrades sharply when training is limited. We propose to regularize deep structures with prior knowledge about the images so that they can capture more structural information from the same limited data. In particular, we incorporate in a tractable fashion within the CNN framework, natural image priors which have shown to have much recent success in imaging and vision inverse problems. Experimental results show that the proposed deep network with natural image priors is particularly effective in training starved regimes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,831 |
2412.12583 | Process-Supervised Reward Models for Verifying Clinical Note Generation: A Scalable Approach Guided by Domain Expertise | Process-supervised reward models (PRMs), which verify large language model (LLM) outputs step-by-step, have achieved significant success in mathematical and coding problems. However, their application to other domains remains largely unexplored. In this work, we train a PRM to provide step-level reward signals for clinical notes generated by LLMs from patient-doctor dialogues. Guided by real-world clinician expertise, we carefully designed step definitions for clinical notes and utilized Gemini-Pro 1.5 to automatically generate process supervision data at scale. Our proposed PRM, trained on the LLaMA-3.1 8B instruct model, outperformed both Gemini-Pro 1.5 and the vanilla outcome-supervised reward model (ORM) in two key evaluations: (1) selecting gold-reference samples from error-containing ones, achieving 98.8% accuracy (versus 70.0% for the vanilla ORM and 93.8% for Gemini-Pro 1.5), and (2) selecting physician-preferred notes, achieving 56.2% accuracy (compared to 37.5% for the vanilla ORM and 50.0% for Gemini-Pro 1.5). Additionally, we conducted ablation studies to determine optimal loss functions and data selection strategies, along with physician reader studies to explore predictors of downstream Best-of-N performance. Our promising results suggest the potential of PRMs to extend beyond the clinical domain, offering a scalable and effective solution for diverse generative tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 517,933 |
1302.3564 | Tail Sensitivity Analysis in Bayesian Networks | The paper presents an efficient method for simulating the tails of a target variable Z=h(X) which depends on a set of basic variables X=(X_1, ..., X_n). To this aim, variables X_i, i=1, ..., n are sequentially simulated in such a manner that Z=h(x_1, ..., x_i-1, X_i, ..., X_n) is guaranteed to be in the tail of Z. When this method is difficult to apply, an alternative method is proposed, which leads to a low rejection proportion of sample values, when compared with the Monte Carlo method. Both methods are shown to be very useful to perform a sensitivity analysis of Bayesian networks, when very large confidence intervals for the marginal/conditional probabilities are required, as in reliability or risk analysis. The methods are shown to behave best when all scores coincide. The required modifications for this to occur are discussed. The methods are illustrated with several examples and one example of application to a real case is used to illustrate the whole process. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,030 |
2109.07727 | Secure Transmission for Hierarchical Information Accessibility in Downlink MU-MIMO | Physical layer security is a useful tool to prevent confidential information from wiretapping. In this paper, we consider a generalized model of conventional physical layer security, referred to as hierarchical information accessibility (HIA). A main feature of the HIA model is that a network has a hierarchy in information accessibility, wherein decoding feasibility is determined by a priority of users. Under this HIA model, we formulate a sum secrecy rate maximization problem with regard to precoding vectors. This problem is challenging since multiple non-smooth functions are involved into the secrecy rate to fulfill the HIA conditions and also the problem is non-convex. To address the challenges, we approximate the minimum function by using the LogSumExp technique, thereafter obtain the first-order optimality condition. One key observation is that the derived condition is cast as a functional eigenvalue problem, where the eigenvalue is equivalent to the approximated objective function of the formulated problem. Accordingly, we show that finding a principal eigenvector is equivalent to finding a local optimal solution. To this end, we develop a novel method called generalized power iteration for HIA (GPI-HIA). Simulations demonstrate that the GPI-HIA significantly outperforms other baseline methods in terms of the secrecy rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 255,628 |
1912.05184 | Variational Learning with Disentanglement-PyTorch | Unsupervised learning of disentangled representations is an open problem in machine learning. The Disentanglement-PyTorch library is developed to facilitate research, implementation, and testing of new variational algorithms. In this modular library, neural architectures, dimensionality of the latent space, and the training algorithms are fully decoupled, allowing for independent and consistent experiments across variational methods. The library handles the training scheduling, logging, and visualizations of reconstructions and latent space traversals. It also evaluates the encodings based on various disentanglement metrics. The library, so far, includes implementations of the following unsupervised algorithms VAE, Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE, as well as conditional approaches such as CVAE and IFCVAE. The library is compatible with the Disentanglement Challenge of NeurIPS 2019, hosted on AICrowd, and achieved the 3rd rank in both the first and second stages of the challenge. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,044 |
2211.11690 | Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions | The Concept Bottleneck Models (CBMs) of Koh et al. [2020] provide a means to ensure that a neural network based classifier bases its predictions solely on human understandable concepts. The concept labels, or rationales as we refer to them, are learned by the concept labeling component of the CBM. Another component learns to predict the target classification label from these predicted concept labels. Unfortunately, these models are heavily reliant on human provided concept labels for each datapoint. To enable CBMs to behave robustly when these labels are not readily available, we show how to equip them with the ability to abstain from predicting concepts when the concept labeling component is uncertain. In other words, our model learns to provide rationales for its predictions, but only whenever it is sure the rationale is correct. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 331,834 |
2012.15534 | HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions | Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is of great challenge for open-domain Question Answering (QA). Especially, for multi-hop open-domain QA, scattered evidence pieces are required to be gathered together to support the answer extraction. In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as the mention embedding which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as the document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 213,827 |
2112.09598 | Towards Deep Learning-based 6D Bin Pose Estimation in 3D Scans | An automated robotic system needs to be as robust as possible and fail-safe in general while having relatively high precision and repeatability. Although deep learning-based methods are becoming research standard on how to approach 3D scan and image processing tasks, the industry standard for processing this data is still analytically-based. Our paper claims that analytical methods are less robust and harder for testing, updating, and maintaining. This paper focuses on a specific task of 6D pose estimation of a bin in 3D scans. Therefore, we present a high-quality dataset composed of synthetic data and real scans captured by a structured-light scanner with precise annotations. Additionally, we propose two different methods for 6D bin pose estimation, an analytical method as the industrial standard and a baseline data-driven method. Both approaches are cross-evaluated, and our experiments show that augmenting the training on real scans with synthetic data improves our proposed data-driven neural model. This position paper is preliminary, as proposed methods are trained and evaluated on a relatively small initial dataset which we plan to extend in the future. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 272,196 |
2405.09477 | Harmonizing Human Insights and AI Precision: Hand in Hand for Advancing Knowledge Graph Task | Knowledge graph embedding (KGE) has caught significant interest for its effectiveness in knowledge graph completion (KGC), specifically link prediction (LP), with recent KGE models cracking the LP benchmarks. Despite the rapidly growing literature, insufficient attention has been paid to the cooperation between humans and AI on KG. However, humans' capability to analyze graphs conceptually may further improve the efficacy of KGE models with semantic information. To this effect, we carefully designed a human-AI team (HAIT) system dubbed KG-HAIT, which harnesses the human insights on KG by leveraging fully human-designed ad-hoc dynamic programming (DP) on KG to produce human insightful feature (HIF) vectors that capture the subgraph structural feature and semantic similarities. By integrating HIF vectors into the training of KGE models, notable improvements are observed across various benchmarks and metrics, accompanied by accelerated model convergence. Our results underscore the effectiveness of human-designed DP in the task of LP, emphasizing the pivotal role of collaboration between humans and AI on KG. We open avenues for further exploration and innovation through KG-HAIT, paving the way towards more effective and insightful KG analysis techniques. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 454,412 |
2007.10668 | An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models | The use of sophisticated machine learning models for critical decision making is faced with a challenge that these models are often applied as a "black-box". This has led to an increased interest in interpretable machine learning, where post hoc interpretation presents a useful mechanism for generating interpretations of complex learning models. In this paper, we propose a novel approach underpinned by an extended framework of Bayesian networks for generating post hoc interpretations of a black-box predictive model. The framework supports extracting a Bayesian network as an approximation of the black-box model for a specific prediction. Compared to the existing post hoc interpretation methods, the contribution of our approach is three-fold. Firstly, the extracted Bayesian network, as a probabilistic graphical model, can provide interpretations about not only what input features but also why these features contributed to a prediction. Secondly, for complex decision problems with many features, a Markov blanket can be generated from the extracted Bayesian network to provide interpretations with a focused view on those input features that directly contributed to a prediction. Thirdly, the extracted Bayesian network enables the identification of four different rules which can inform the decision-maker about the confidence level in a prediction, thus helping the decision-maker assess the reliability of predictions learned by a black-box model. We implemented the proposed approach, applied it in the context of two well-known public datasets and analysed the results, which are made available in an open-source repository. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 188,338 |
1911.12060 | Reviewing and Improving the Gaussian Mechanism for Differential Privacy | Differential privacy provides a rigorous framework to quantify data privacy, and has received considerable interest recently. A randomized mechanism satisfying $(\epsilon, \delta)$-differential privacy (DP) roughly means that, except with a small probability $\delta$, altering a record in a dataset cannot change the probability that an output is seen by more than a multiplicative factor $e^{\epsilon} $. A well-known solution to $(\epsilon, \delta)$-DP is the Gaussian mechanism initiated by Dwork et al. [1] in 2006 with an improvement by Dwork and Roth [2] in 2014, where a Gaussian noise amount $\sqrt{2\ln \frac{2}{\delta}} \times \frac{\Delta}{\epsilon}$ of [1] or $\sqrt{2\ln \frac{1.25}{\delta}} \times \frac{\Delta}{\epsilon}$ of [2] is added independently to each dimension of the query result, for a query with $\ell_2$-sensitivity $\Delta$. Although both classical Gaussian mechanisms [1,2] assume $0 < \epsilon \leq 1$, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of $\epsilon$ and $\delta$ where the added noise amounts of [1,2] do not achieve $(\epsilon,\delta)$-DP. We obtain such result by analyzing the optimal noise amount $\sigma_{DP-OPT}$ for $(\epsilon,\delta)$-DP and identifying $\epsilon$ and $\delta$ where the noise amounts of classical mechanisms are even less than $\sigma_{DP-OPT}$. Since $\sigma_{DP-OPT}$ has no closed-form expression and needs to be approximated in an iterative manner, we propose Gaussian mechanisms by deriving closed-form upper bounds for $\sigma_{DP-OPT}$. Our mechanisms achieve $(\epsilon,\delta)$-DP for any $\epsilon$, while the classical mechanisms [1,2] do not achieve $(\epsilon,\delta)$-DP for large $\epsilon$ given $\delta$. 
Moreover, the utilities of our mechanisms improve those of [1,2] and are close to that of the optimal yet more computationally expensive Gaussian mechanism. | false | false | false | false | true | false | true | false | false | false | false | false | true | true | false | false | true | false | 155,304 |
2305.19650 | Adverbs, Surprisingly | This paper begins with the premise that adverbs are neglected in computational linguistics. This view derives from two analyses: a literature review and a novel adverb dataset to probe a state-of-the-art language model, thereby uncovering systematic gaps in accounts for adverb meaning. We suggest that using Frame Semantics for characterizing word meaning, as in FrameNet, provides a promising approach to adverb analysis, given its ability to describe ambiguity, semantic roles, and null instantiation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 369,625 |
2202.02540 | Science Facing Interoperability as a Necessary Condition of Success and
Evil | Artificial intelligence (AI) systems, such as machine learning algorithms, have allowed scientists, marketers and governments to shed light on correlations that remained invisible until now. Previously, the dots we had to connect in order to form new knowledge were either too numerous, too sparse, or not even detected. Sometimes, the information was not stored in the same data lake or format and could not be exchanged. But in creating new bridges with AI, many problems appeared, such as bias reproduction, unfair inferences or mass surveillance. Our aim is to show that, on the one hand, AI's deep ethical problem lies essentially in these new connections made possible by systems interoperability. In connecting the spheres of our life, these systems undermine the notion of justice particular to each of them, because the new interactions create dominances of social goods from one sphere to another. These systems therefore make spheres permeable to one another and, in doing so, they open the door to progress as well as to tyranny. On the other hand, however, we would like to emphasize that the act of connecting what used to seem a priori disjoint is a necessary move of knowledge and scientific progress. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 278,852
1505.03991 | Nonlinear-Programming-Based Model of Power System Marginal States:
Theoretical Substantiation | In order to maintain power system security at an appropriate level and at low cost, it is essential to accurately assess the steady-state stability limits and power flow feasibility boundaries, i.e., the power system marginal states (MS). This paper is devoted to the creation and theoretical substantiation of an MS model based on nonlinear programming (the NLP-MS model), to its study in order to reveal MS properties that promote a better understanding of MS, to the evolution of the theory of power systems and MS, and to the elaboration of more effective algorithms for solving the MS problem. The proposed NLP-MS model is universal and allows one to determine and take into account various MS, including the MS in a given direction of power change, the closest MS, moving the power system state into the power flow feasibility region, etc. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 43,130
2203.05813 | Averaging Spatio-temporal Signals using Optimal Transport and Soft
Alignments | Several fields in science, from genomics to neuroimaging, require monitoring populations (measures) that evolve with time. These complex datasets, describing dynamics with both time and spatial components, pose new challenges for data analysis. We propose in this work a new framework to carry out averaging of these datasets, with the goal of synthesizing a representative template trajectory from multiple trajectories. We show that this requires addressing three sources of invariance: shifts in time, space, and total population size (or mass/amplitude). Here we draw inspiration from dynamic time warping (DTW), optimal transport (OT) theory and its unbalanced extension (UOT) to propose a criterion that can address all three issues. This proposal leverages a smooth formulation of DTW (Soft-DTW) that is shown to capture temporal shifts, and UOT to handle both variations in space and size. Our proposed loss can be used to define spatio-temporal barycenters as Fr\'echet means. Using Fenchel duality, we show how these barycenters can be computed efficiently, in parallel, via a novel variant of entropy-regularized debiased UOT. Experiments on handwritten letters and brain imaging data confirm our theoretical findings and illustrate the effectiveness of the proposed loss for spatio-temporal data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 284,922 |
1903.09950 | Periphery-Fovea Multi-Resolution Driving Model guided by Human Attention | Inspired by human vision, we propose a new periphery-fovea multi-resolution driving model that predicts vehicle speed from dash camera videos. The peripheral vision module of the model processes the full video frames in low resolution. Its foveal vision module selects sub-regions and uses high-resolution input from those regions to improve its driving performance. We train the fovea selection module with supervision from driver gaze. We show that adding high-resolution input from predicted human driver gaze locations significantly improves the driving accuracy of the model. Our periphery-fovea multi-resolution model outperforms a uni-resolution periphery-only model that has the same amount of floating-point operations. More importantly, we demonstrate that our driving model achieves a significantly higher performance gain in pedestrian-involved critical situations than in other non-critical situations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,172 |
2203.11633 | Semi-Targeted Model Poisoning Attack on Federated Learning via Backward
Error Analysis | Model poisoning attacks on federated learning (FL) intrude into the entire system via compromising an edge model, resulting in malfunctioning of machine learning models. Such compromised models are tampered with to perform adversary-desired behaviors. In particular, we considered a semi-targeted situation where the source class is predetermined, whereas the target class is not. The goal is to cause the global classifier to misclassify data of the source class. Though approaches such as label flipping have been adopted to inject poisoned parameters into FL, it has been shown that their performance is usually class-sensitive, varying with the target class applied. Typically, an attack can become less effective when shifting to a different target class. To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space. Moreover, we studied a more challenging situation where an adversary has limited prior knowledge about a client's data. To tackle this problem, ADA deduces pair-wise distances between different classes in the latent feature space from shared model parameters based on backward error analysis. We performed extensive empirical evaluations of ADA by varying the attacking frequency in three different image classification tasks. As a result, ADA succeeded in increasing the attack performance by 1.8 times in the most challenging case with an attacking frequency of 0.01. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 286,981
1303.2446 | Broadening the Scope of Nanopublications | In this paper, we present an approach for extending the existing concept of nanopublications --- tiny entities of scientific results in RDF representation --- to broaden their application range. The proposed extension uses English sentences to represent informal and underspecified scientific claims. These sentences follow a syntactic and semantic scheme that we call AIDA (Atomic, Independent, Declarative, Absolute), which provides a uniform and succinct representation of scientific assertions. Such AIDA nanopublications are compatible with the existing nanopublication concept and enjoy most of its advantages such as information sharing, interlinking of scientific findings, and detailed attribution, while being more flexible and applicable to a much wider range of scientific results. We show that users are able to create AIDA sentences for given scientific results quickly and at high quality, and that it is feasible to automatically extract and interlink AIDA nanopublications from existing unstructured data sources. To demonstrate our approach, a web-based interface is introduced, which also exemplifies the use of nanopublications for non-scientific content, including meta-nanopublications that describe other nanopublications. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 22,836 |
2111.07341 | On Optimizing Rate Splitting in Laser-based Optical Wireless Networks | Optical wireless communication (OWC) is a promising technology that has the potential to provide Tb/s aggregate rates. In this paper, interference management is studied in a laser-based optical wireless network where vertical-cavity surface-emitting lasers (VCSELs) are used for data transmission. In particular, rate splitting (RS) and hierarchical rate splitting (HRS) are proposed to align multi-user interference, while maximizing the multiplexing gain of the network. Basically, RS serves multiple users simultaneously by splitting a user's message into common and private messages, allocating a certain power level to each message, while users decode their messages following a specific methodology. The performance of the conventional RS scheme is limited in high-density wireless networks. Therefore, the HRS scheme is developed, aiming to achieve high rates, in which users are divided into multiple groups and a new message, called the outer common message, is used for managing inter-group interference. We formulate an optimization problem that addresses power allocation among the messages of the HRS scheme to further enhance the performance of the network. The results show that the proposed approach provides high achievable rates compared with the conventional RS and HRS schemes in different scenarios. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 266,336
2203.00648 | Measuring the Impact of Individual Domain Factors in Self-Supervised
Pre-Training | Human speech data comprises a rich set of domain factors such as accent, syntactic and semantic variety, or acoustic environment. Previous work explores the effect of domain mismatch in automatic speech recognition between pre-training and fine-tuning as a whole but does not dissect the contribution of individual factors. In this paper, we present a controlled study to better understand the effect of such factors on the performance of pre-trained representations on automatic speech recognition. To do so, we pre-train models either on modified natural speech or synthesized audio, with a single domain factor modified, and then measure performance after fine-tuning. Results show that phonetic domain factors play an important role during pre-training while grammatical and syntactic factors are far less important. To our knowledge, this is the first study to better understand the domain characteristics of pre-trained sets in self-supervised pre-training for speech. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 283,076 |
2404.11889 | Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement
from CT Scans | X-ray images play a vital role in intraoperative processes due to their high resolution and fast imaging speed, and they greatly facilitate subsequent segmentation, registration and reconstruction. However, excessive X-ray doses pose potential risks to human health. Data-driven algorithms from volume scans to X-ray images are restricted by the scarcity of paired X-ray and volume data. Existing methods are mainly realized by modelling the whole X-ray imaging procedure. In this study, we propose a learning-based approach termed CT2X-GAN to synthesize X-ray images in an end-to-end manner using content and style disentanglement from three different image domains. Our method decouples anatomical structure information from CT scans and style information from unpaired real X-ray images/digital reconstructed radiography (DRR) images via a series of decoupling encoders. Additionally, we introduce a novel consistency regularization term to improve the stylistic resemblance between synthesized X-ray images and real X-ray images. Meanwhile, we also impose a supervised process by computing the similarity between real DRR and synthesized DRR images. We further develop a pose attention module to fully strengthen the comprehensive information in the decoupled content code from CT scans, facilitating high-quality multi-view image synthesis in the lower 2D space. Extensive experiments were conducted on the publicly available CTSpine1K dataset, achieving 97.8350, 0.0842 and 3.0938 in terms of FID, KID and a user-scored X-ray similarity metric, respectively. In comparison with 3D-aware methods ($\pi$-GAN, EG3D), CT2X-GAN is superior in synthesis quality and realism with respect to real X-ray images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 447,651
2208.06110 | Automatically Creating a Large Number of New Bilingual Dictionaries | This paper proposes approaches to automatically create a large number of new bilingual dictionaries for low-resource languages, especially resource-poor and endangered languages, from a single input bilingual dictionary. Our algorithms produce translations of words in a source language to plentiful target languages using available Wordnets and a machine translator (MT). Since our approaches rely on just one input dictionary, available Wordnets and an MT, they are applicable to any bilingual dictionary as long as one of the two languages is English or has a Wordnet linked to the Princeton Wordnet. Starting with 5 available bilingual dictionaries, we create 48 new bilingual dictionaries. Of these, 30 pairs of languages are not supported by the popular MTs: Google and Bing. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 312,601 |
2110.12984 | Generative Residual Attention Network for Disease Detection | Accurate identification and localization of abnormalities from radiology images play a critical role in computer-aided diagnosis (CAD) systems. Building a highly generalizable system usually requires a large amount of data with high-quality annotations, including disease-specific global and localization information. However, in medical images, only a limited number of high-quality images and annotations are available due to annotation expenses. In this paper, we explore this problem by presenting a novel approach for disease generation in X-rays using conditional generative adversarial learning. Specifically, given a chest X-ray image from a source domain, we generate a corresponding radiology image in a target domain while preserving the identity of the patient. We then use the generated X-ray image in the target domain to augment our training to improve the detection performance. We also present a unified framework that simultaneously performs disease generation and localization. We evaluate the proposed approach on the X-ray image dataset provided by the Radiological Society of North America (RSNA), surpassing the state-of-the-art baseline detection algorithms. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 263,035
1904.02892 | WaveCycleGAN2: Time-domain Neural Post-filter for Speech Waveform
Generation | WaveCycleGAN has recently been proposed to bridge the gap between natural and synthesized speech waveforms in statistical parametric speech synthesis and provides fast inference with a moving average model rather than an autoregressive model and high-quality speech synthesis with the adversarial training. However, the human ear can still distinguish the processed speech waveforms from natural ones. One possible cause of this distinguishability is the aliasing observed in the processed speech waveform via down/up-sampling modules. To solve the aliasing and provide higher quality speech synthesis, we propose WaveCycleGAN2, which 1) uses generators without down/up-sampling modules and 2) combines discriminators of the waveform domain and acoustic parameter domain. The results show that the proposed method 1) alleviates the aliasing well, 2) is useful for both speech waveforms generated by analysis-and-synthesis and statistical parametric speech synthesis, and 3) achieves a mean opinion score comparable to those of natural speech and speech synthesized by WaveNet (open WaveNet) and WaveGlow while processing speech samples at a rate of more than 150 kHz on an NVIDIA Tesla P100. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 126,560 |
2103.03773 | A Geometric Algebra Solution to Wahba's Problem | We retrace Davenport's solution to Wahba's classic problem of aligning two pointclouds using the formalism of Geometric Algebra (GA). GA proves to be a natural backdrop for this problem involving three-dimensional rotations due to the isomorphism between unit-length quaternions and rotors. While the solution to this problem is not a new result, it is hoped that its treatment in GA will have tutorial value as well as open the door to addressing more complex problems in a similar way. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 223,410 |
2011.06058 | Forecasting Emergency Department Capacity Constraints for COVID
Isolation Beds | Predicting patient volumes in a hospital setting is a well-studied application of time series forecasting. Existing tools usually make forecasts at the daily or weekly level to assist in planning for staffing requirements. Prompted by new COVID-related capacity constraints placed on our pediatric hospital's emergency department, we developed an hourly forecasting tool to make predictions over a 24 hour window. These forecasts would give our hospital sufficient time to be able to marshal resources towards expanding capacity and augmenting staff (e.g. transforming wards or bringing in physicians on call). Using Gaussian Process Regressions (GPRs), we obtain strong performance for both point predictions (average R-squared: 82%) as well as classification accuracy when predicting the ordinal tiers of our hospital's capacity (average precision/recall: 82%/74%). Compared to traditional regression approaches, GPRs not only obtain consistently higher performance, but are also robust to the dataset shifts that have occurred throughout 2020. Hospital stakeholders are encouraged by the strength of our results, and we are currently working on moving our tool to a real-time setting with the goal of augmenting the capabilities of our healthcare workers. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 206,123
2112.14084 | Embodied Learning for Lifelong Visual Perception | We study lifelong visual perception in an embodied setup, where we develop new models and compare various agents that navigate in buildings and occasionally request annotations which, in turn, are used to refine their visual perception capabilities. The purpose of the agents is to recognize objects and other semantic classes in the whole building at the end of a process that combines exploration and active visual learning. As we study this task in a lifelong learning context, the agents should use knowledge gained in earlier visited environments in order to guide their exploration and active learning strategy in successively visited buildings. We use the semantic segmentation performance as a proxy for general visual perception and study this novel task for several exploration and annotation methods, ranging from frontier exploration baselines which use heuristic active learning, to a fully learnable approach. For the latter, we introduce a deep reinforcement learning (RL) based agent which jointly learns both navigation and active learning. A point goal navigation formulation, coupled with a global planner which supplies goals, is integrated into the RL model in order to provide further incentives for systematic exploration of novel scenes. By performing extensive experiments on the Matterport3D dataset, we show how the proposed agents can utilize knowledge from previously explored scenes when exploring new ones, e.g. through less granular exploration and less frequent requests for annotations. The results also suggest that a learning-based agent is able to use its prior visual knowledge more effectively than heuristic alternatives. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 273,445 |
2411.13284 | DATTA: Domain-Adversarial Test-Time Adaptation for Cross-Domain
WiFi-Based Human Activity Recognition | Cross-domain generalization is an open problem in WiFi-based sensing due to variations in environments, devices, and subjects, causing domain shifts in channel state information. To address this, we propose Domain-Adversarial Test-Time Adaptation (DATTA), a novel framework combining domain-adversarial training (DAT), test-time adaptation (TTA), and weight resetting to facilitate adaptation to unseen target domains and to prevent catastrophic forgetting. DATTA is integrated into a lightweight, flexible architecture optimized for speed. We conduct a comprehensive evaluation of DATTA, including an ablation study on all key components using publicly available data, and verify its suitability for real-time applications such as human activity recognition. When combining a SotA video-based variant of TTA with WiFi-based DAT and comparing it to DATTA, our method achieves an 8.1% higher F1-Score. The PyTorch implementation of DATTA is publicly available at: https://github.com/StrohmayerJ/DATTA. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 509,742 |
2206.13959 | Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts | Dealing with uncertain, contradicting, and ambiguous information is still a central issue in Artificial Intelligence (AI). As a result, many formalisms have been proposed or adapted so as to consider non-monotonicity, with only a limited number of works and researchers performing any sort of comparison among them. A non-monotonic formalism is one that allows the retraction of previous conclusions or claims, from premises, in light of new evidence, offering some desirable flexibility when dealing with uncertainty. This research article focuses on evaluating the inferential capacity of defeasible argumentation, a formalism particularly envisioned for modelling non-monotonic reasoning. In addition to this, fuzzy reasoning and expert systems, extended for handling non-monotonicity of reasoning, are selected and employed as baselines, due to their vast and accepted use within the AI community. Computational trust was selected as the domain of application of such models. Trust is an ill-defined construct, hence, reasoning applied to the inference of trust can be seen as non-monotonic. Inference models were designed to assign trust scalars to editors of the Wikipedia project. In particular, argument-based models demonstrated more robustness than those built upon the baselines despite the knowledge bases or datasets employed. This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches. The practical use of such approaches coupled with a modular design that facilitates similar experiments was exemplified and their respective implementations made publicly available on GitHub [120, 121]. This work adds to previous works, empirically enhancing the generalisability of defeasible argumentation as a compelling approach to reason with quantitative data and uncertain knowledge. 
| false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 305,131 |
2202.03843 | A Unified Multi-Task Learning Framework of Real-Time Drone Supervision
for Crowd Counting | In this paper, a novel Unified Multi-Task Learning Framework of Real-Time Drone Supervision for Crowd Counting (MFCC) is proposed, which utilizes an image fusion network architecture to fuse visible and thermal infrared images, and a crowd counting network architecture to estimate the density map. The purpose of our framework is to fuse two modalities, visible and thermal infrared images captured by drones in real time, exploiting their complementary information to accurately count the dense population and then automatically guide the flight of the drone to supervise the dense crowd. To this end, we propose the unified multi-task learning framework for crowd counting for the first time and redesign the unified training loss functions to align the image fusion network and crowd counting network. We also design the Assisted Learning Module (ALM) to fuse the density map feature into the image fusion encoding process for learning the counting features. To improve the accuracy, we propose the Extensive Context Extraction Module (ECEM), which is based on a dense connection architecture to encode multi-receptive-field contextual information, and apply the Multi-domain Attention Block (MAB) to attend to the head region in the drone view. Finally, we apply the prediction map to automatically guide the drones to supervise the dense crowd. The experimental results on the DroneRGBT dataset show that, compared with the existing methods, ours has comparable results on objective evaluations and an easier training process. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 279,360
2407.08909 | KGpose: Keypoint-Graph Driven End-to-End Multi-Object 6D Pose Estimation
via Point-Wise Pose Voting | This letter presents KGpose, a novel end-to-end framework for 6D pose estimation of multiple objects. Our approach combines keypoint-based method with learnable pose regression through `keypoint-graph', which is a graph representation of the keypoints. KGpose first estimates 3D keypoints for each object using an attentional multi-modal feature fusion of RGB and point cloud features. These keypoints are estimated from each point of point cloud and converted into a graph representation. The network directly regresses 6D pose parameters for each point through a sequence of keypoint-graph embedding and local graph embedding which are designed with graph convolutions, followed by rotation and translation heads. The final pose for each object is selected from the candidates of point-wise predictions. The method achieves competitive results on the benchmark dataset, demonstrating the effectiveness of our model. KGpose enables multi-object pose estimation without requiring an extra localization step, offering a unified and efficient solution for understanding geometric contexts in complex scenes for robotic applications. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 472,355 |
2404.19296 | Octopus v4: Graph of language models | Language models have been effective in a wide range of applications, yet the most sophisticated models are often proprietary. For example, GPT-4 by OpenAI and various models by Anthropic are expensive and consume substantial energy. In contrast, the open-source community has produced competitive models, like Llama3. Furthermore, niche-specific smaller language models, such as those tailored for legal, medical or financial tasks, have outperformed their proprietary counterparts. This paper introduces a novel approach that employs \textit{functional tokens} to integrate \textbf{multiple open-source models}, each optimized for particular tasks. Our newly developed Octopus v4 model leverages \textit{functional tokens} to intelligently direct user queries to the most appropriate vertical model and reformat the query to achieve the best performance. Octopus v4, an evolution of the Octopus v1, v2, and v3 models, excels in selection and parameter understanding and reformatting. Additionally, we explore the use of graphs as a versatile data structure that effectively coordinates multiple open-source models by harnessing the capabilities of the Octopus model and \textit{functional tokens}. Use our open-sourced GitHub (\url{https://www.nexa4ai.com/}) to try Octopus v4 models (\url{https://huggingface.co/NexaAIDev/Octopus-v4}), and contribute to a larger graph of language models. By activating models with less than 10B parameters, we achieved a SOTA MMLU score of 74.8 among models of the same level. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 450,587
2412.10749 | Patch-level Sounding Object Tracking for Audio-Visual Question Answering | Answering questions related to audio-visual scenes, i.e., the AVQA task, is becoming increasingly popular. A critical challenge is accurately identifying and tracking sounding objects related to the question along the timeline. In this paper, we present a new Patch-level Sounding Object Tracking (PSOT) method. It begins with a Motion-driven Key Patch Tracking (M-KPT) module, which relies on visual motion information to identify salient visual patches with significant movements that are more likely to relate to sounding objects and questions. We measure the patch-wise motion intensity map between neighboring video frames and utilize it to construct and guide a motion-driven graph network. Meanwhile, we design a Sound-driven KPT (S-KPT) module to explicitly track sounding patches. This module also involves a graph network, with the adjacency matrix regularized by the audio-visual correspondence map. The M-KPT and S-KPT modules are performed in parallel for each temporal segment, allowing balanced tracking of salient and sounding objects. Based on the tracked patches, we further propose a Question-driven KPT (Q-KPT) module to retain patches highly relevant to the question, ensuring the model focuses on the most informative clues. The audio-visual-question features are updated during the processing of these modules, which are then aggregated for final answer prediction. Extensive experiments on standard datasets demonstrate the effectiveness of our method, achieving competitive performance even compared to recent large-scale pretraining-based approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 517,086 |
1905.11475 | GAT: Generative Adversarial Training for Adversarial Example Detection
and Robust Classification | The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks is proven to be challenging, and the methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper we propose a principled adversarial example detection method that can withstand norm-constrained white-box attacks. Inspired by one-versus-the-rest classification, in a K class classification problem, we train K binary classifiers where the i-th binary classifier is used to distinguish between clean data of class i and adversarially perturbed samples of other classes. At test time, we first use a trained classifier to get the predicted label (say k) of the input, and then use the k-th binary classifier to determine whether the input is a clean sample (of class k) or an adversarially perturbed example (of other classes). We further devise a generative approach to detecting/classifying adversarial examples by interpreting each binary classifier as an unnormalized density model of the class-conditional data. We provide comprehensive evaluation of the above adversarial example detection/classification methods, and demonstrate their competitive performances and compelling properties. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 132,430 |
1906.09551 | Confidence Calibration for Convolutional Neural Networks Using
Structured Dropout | In classification applications, we often want probabilistic predictions to reflect confidence or uncertainty. Dropout, a commonly used training technique, has recently been linked to Bayesian inference, yielding an efficient way to quantify uncertainty in neural network models. However, as previously demonstrated, confidence estimates computed with a naive implementation of dropout can be poorly calibrated, particularly when using convolutional networks. In this paper, through the lens of ensemble learning, we associate calibration error with the correlation between the models sampled with dropout. Motivated by this, we explore the use of structured dropout to promote model diversity and improve confidence calibration. We use the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques. We also show the merit of structured dropout in a Bayesian active learning application. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 136,203 |
2004.14254 | Hierarchical Reinforcement Learning for Automatic Disease Diagnosis | Motivation: Disease diagnosis oriented dialogue systems model the interactive consultation procedure as a Markov Decision Process, and reinforcement learning algorithms are used to solve the problem. Existing approaches usually employ a flat policy structure that treats all symptoms and diseases equally for action making. This strategy works well in the simple scenario when the action space is small; however, its efficiency will be challenged in the real environment. Inspired by the offline consultation process, we propose to integrate a hierarchical policy structure of two levels into the dialogue system for policy learning. The high-level policy consists of a master model that is responsible for triggering a low-level model; the low-level policy consists of several symptom checkers and a disease classifier. The proposed policy structure is capable of dealing with diagnosis problems involving a large number of diseases and symptoms. Results: Experimental results on three real-world datasets and a synthetic dataset demonstrate that our hierarchical framework achieves higher accuracy and symptom recall in disease diagnosis compared with existing systems. We construct a benchmark including datasets and implementations of existing algorithms to encourage follow-up research. Availability: The code and data are available from https://github.com/FudanDISC/DISCOpen-MedBox-DialoDiagnosis Contact: 21210980124@m.fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 174,823
2407.10807 | Employing Sentence Space Embedding for Classification of Data Stream
from Fake News Domain | Tabular data is considered the last unconquered castle of deep learning, yet the task of data stream classification is stated to be an equally important and demanding research area. Due to the temporal constraints, it is assumed that deep learning methods are not the optimal solution for application in this field. However, excluding the entire -- and prevalent -- group of methods seems rather rash given the progress that has been made in recent years in its development. For this reason, the following paper is the first to present an approach to natural language data stream classification using the sentence space method, which allows for encoding text into the form of a discrete digital signal. This allows the use of convolutional deep networks dedicated to image classification to solve the task of recognizing fake news based on text data. Based on the real-life Fakeddit dataset, the proposed approach was compared with state-of-the-art algorithms for data stream classification based on generalization ability and time complexity. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 473,137 |
2402.09587 | DeepATLAS: One-Shot Localization for Biomedical Data | This paper introduces the DeepATLAS foundational model for localization tasks in the domain of high-dimensional biomedical data. Upon convergence of the proposed self-supervised objective, a pretrained model maps an input to an anatomically-consistent embedding from which any point or set of points (e.g., boxes or segmentations) may be identified in a one-shot or few-shot approach. As a representative benchmark, a DeepATLAS model pretrained on a comprehensive cohort of 51,000+ unlabeled 3D computed tomography exams yields high one-shot segmentation performance on over 50 anatomic structures across four different external test sets, either matching or exceeding the performance of a standard supervised learning model. Further improvements in accuracy can be achieved by adding a small amount of labeled data using either a semisupervised or more conventional fine-tuning strategy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 429,586 |
1803.10336 | Graph Convolutions on Spectral Embeddings: Learning of Cortical Surface
Data | Neuronal cell bodies mostly reside in the cerebral cortex. The study of this thin and highly convoluted surface is essential for understanding how the brain works. The analysis of surface data is, however, challenging due to the high variability of the cortical geometry. This paper presents a novel approach for learning and exploiting surface data directly across surface domains. Current approaches rely on geometrical simplifications, such as spherical inflations, a popular but costly process. For instance, the widely used FreeSurfer takes about 3 hours to parcellate brain surfaces on a standard machine. Direct learning of surface data via graph convolutions would provide a new family of fast algorithms for processing brain surfaces. However, the current limitation of existing state-of-the-art approaches is their inability to compare surface data across different surface domains. Surface bases are indeed incompatible between brain geometries. This paper leverages recent advances in spectral graph matching to transfer surface data across aligned spectral domains. This novel approach enables a direct learning of surface data across compatible surface bases. It exploits spectral filters over intrinsic representations of surface neighborhoods. We illustrate the benefits of this approach with an application to brain parcellation. We validate the algorithm over 101 manually labeled brain surfaces. The results show a significant improvement in labeling accuracy over recent Euclidean approaches, while gaining a drastic speed improvement over conventional methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,676 |
2004.02178 | FastBERT: a Self-distilling BERT with Adaptive Inference Time | Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, for such heavy models can hardly be readily implemented with limited resources. To improve their efficiency with an assured model performance, we propose a novel speed-tunable FastBERT with adaptive inference time. The speed at inference can be flexibly adjusted under varying demands, while redundant calculation of samples is avoided. Moreover, this model adopts a unique self-distillation mechanism at fine-tuning, further enabling a greater computational efficacy with minimal loss in performance. Our model achieves promising results on twelve English and Chinese datasets. It is able to speed up inference by 1 to 12 times relative to BERT, given different speedup thresholds to make a speed-performance tradeoff. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 171,147
2210.10041 | Hidden State Variability of Pretrained Language Models Can Guide
Computation Reduction for Transfer Learning | While transferring a pretrained language model, common approaches conventionally attach their task-specific classifiers to the top layer and adapt all the pretrained layers. We investigate whether one could make a task-specific selection on which subset of the layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g. fine-tuning or adapter-tuning) without sacrificing its performance. We propose to select layers based on the variability of their hidden states given a task-specific corpus. We say a layer is already "well-specialized" in a task if the within-class variability of its hidden states is low relative to the between-class variability. Our variability metric is cheap to compute and doesn't need any training or hyperparameter tuning. It is robust to data imbalance and data scarcity. Extensive experiments on the GLUE benchmark demonstrate that selecting layers based on our metric can yield significantly stronger performance than using the same number of top layers and often match the performance of fine-tuning or adapter-tuning the entire language model. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 324,768 |
1911.01608 | AReN: Assured ReLU NN Architecture for Model Predictive Control of LTI
Systems | In this paper, we consider the problem of automatically designing a Rectified Linear Unit (ReLU) Neural Network (NN) architecture that is sufficient to implement the optimal Model Predictive Control (MPC) strategy for an LTI system with quadratic cost. Specifically, we propose AReN, an algorithm to generate Assured ReLU Architectures. AReN takes as input an LTI system with quadratic cost specification, and outputs a ReLU NN architecture with the assurance that there exist network weights that exactly implement the associated MPC controller. AReN thus offers new insight into the design of ReLU NN architectures for the control of LTI systems: instead of training a heuristically chosen NN architecture on data -- or iterating over many architectures until a suitable one is found -- AReN can suggest an adequate NN architecture before training begins. While several previous works were inspired by the fact that both ReLU NN controllers and optimal MPC controllers are Continuous, Piecewise-Linear (CPWL) functions, exploiting this similarity to design NN architectures with correctness guarantees has remained elusive. AReN achieves this using two novel features. First, we reinterpret a recent result about the implementation of CPWL functions via ReLU NNs to show that a CPWL function may be implemented by a ReLU architecture that is determined by the number of distinct affine regions in the function. Second, we show that we can efficiently over-approximate the number of affine regions in the optimal MPC controller without solving the MPC problem exactly. Together, these results connect the MPC problem to a ReLU NN implementation without explicitly solving the MPC and directly translate this feature to a ReLU NN architecture that comes with the assurance that it can implement the MPC controller. We show through numerical results the effectiveness of AReN in designing an NN architecture. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 152,150
1712.02917 | Using Intermittent Synchronization to Compensate for Rhythmic Body
Motion During Autonomous Surgical Cutting and Debridement | Anatomical structures are rarely static during a surgical procedure due to breathing, heartbeats, and peristaltic movements. Inspired by observing an expert surgeon, we propose an intermittent synchronization with the extrema of the rhythmic motion (i.e., the lowest velocity windows). We performed 2 experiments: (1) pattern cutting, and (2) debridement. In (1), we found that the intermittent synchronization approach, while 1.8x slower than tracking motion, was significantly more robust to noise and control latency, and it reduced the max cutting error by 2.6x. In (2), a baseline approach with no synchronization achieves a 62% success rate for each removal, while intermittent synchronization achieves 80%. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 86,366
2405.03969 | Speak the Same Language: Global LiDAR Registration on BIM Using Pose
Hough Transform | The construction and robotic sensing data originate from disparate sources and are associated with distinct frames of reference. The primary objective of this study is to align LiDAR point clouds with building information modeling (BIM) using a global point cloud registration approach, aimed at establishing a shared understanding between the two modalities, i.e., ``speak the same language''. To achieve this, we design a cross-modality registration method spanning from the front end to the back end. At the front end, we extract descriptors by identifying walls and capturing the intersected corners. Subsequently, for the back-end pose estimation, we employ the Hough transform and estimate multiple pose candidates. The final pose is verified by wall-pixel correlation. To evaluate the effectiveness of our method, we conducted real-world multi-session experiments in a large-scale university building, involving two different types of LiDAR sensors. We also report our findings and plan to make our collected dataset open-sourced. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 452,385
2009.11050 | Robust and efficient post-processing for video object detection | Object recognition in video is an important task for plenty of applications, including autonomous driving perception, surveillance tasks, wearable devices or IoT networks. Object recognition using video data is more challenging than using still images due to blur, occlusions or rare object poses. Specific video detectors with high computational cost, or standard image detectors together with a fast post-processing algorithm, achieve the current state-of-the-art. This work introduces a novel post-processing pipeline that overcomes some of the limitations of previous post-processing methods by introducing a learning-based similarity evaluation between detections across frames. Our method improves the results of state-of-the-art specific video detectors, especially regarding fast-moving objects, and presents low resource requirements. Applied to efficient still-image detectors, such as YOLO, it provides results comparable to much more computationally intensive detectors. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 197,057
2312.00506 | Generative artificial intelligence enhances creativity but reduces the
diversity of novel content | Creativity is core to being human. Generative artificial intelligence (GenAI) holds promise for humans to be more creative by offering new ideas, or less creative by anchoring on GenAI ideas. We study the causal impact of GenAI on the production of a creative output in an online experimental study where some writers could obtain ideas for a story from a GenAI platform. Access to GenAI ideas causes an increase in the writer's creativity, with stories being evaluated as better written and more enjoyable, especially among less creative writers. However, GenAI-enabled stories are more similar to each other than stories by humans alone. Our results have implications for researchers, policy-makers and practitioners interested in bolstering creativity, but point to potential downstream consequences from over-reliance. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 412,073
2405.15992 | Data Complexity Estimates for Operator Learning | Operator learning has emerged as a new paradigm for the data-driven approximation of nonlinear operators. Despite its empirical success, the theoretical underpinnings governing the conditions for efficient operator learning remain incomplete. The present work develops theory to study the data complexity of operator learning, complementing existing research on the parametric complexity. We investigate the fundamental question: How many input/output samples are needed in operator learning to achieve a desired accuracy $\epsilon$? This question is addressed from the point of view of $n$-widths, and this work makes two key contributions. The first contribution is to derive lower bounds on $n$-widths for general classes of Lipschitz and Fr\'echet differentiable operators. These bounds rigorously demonstrate a ``curse of data-complexity'', revealing that learning on such general classes requires a sample size exponential in the inverse of the desired accuracy $\epsilon$. The second contribution of this work is to show that ``parametric efficiency'' implies ``data efficiency''; using the Fourier neural operator (FNO) as a case study, we show rigorously that on a narrower class of operators, efficiently approximated by FNO in terms of the number of tunable parameters, efficient operator learning is attainable in data complexity as well. Specifically, we show that if only an algebraically increasing number of tunable parameters is needed to reach a desired approximation accuracy, then an algebraically bounded number of data samples is also sufficient to achieve the same accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 457,199 |
1608.04467 | Operations- and Uncertainty-Aware Installation of FACTS Devices in a
Large Transmission System | Decentralized electricity markets and more integration of renewables demand expansion of the existing transmission infrastructure to accommodate the resulting variability in power flows. However, such expansion is severely limited in many countries because of political and environmental issues. Furthermore, high renewables integration requires additional reactive power support, which forces the transmission system operators to utilize the existing grid creatively, e.g., take advantage of new technologies, such as flexible alternating current transmission system (FACTS) devices. We formulate, analyze and solve the challenging investment planning problem of installing, in an existing large-scale transmission grid, multiple FACTS devices of two types (series capacitors and static VAR compensators). We account for details of the AC character of the power flows, probabilistic modeling of multiple-load scenarios, FACTS device flexibility in terms of adjustments within the capacity constraints, and long-term practical tradeoffs between capital and operational expenditures (CAPEX vs OPEX). It is demonstrated that proper installation of the devices allows both extending or improving the feasibility domain of the system and decreasing the long-term power generation cost (making cheaper generation available). The nonlinear, nonconvex, and multiple-scenario-aware optimization is resolved through an efficient heuristic algorithm consisting of a sequence of quadratic programs solved by CPLEX, combined with exact AC PF resolution for each scenario to maintain feasible operational states during iterations. Efficiency and scalability of the approach are illustrated on the IEEE 30-bus model and the 2736-bus Polish model from Matpower. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 59,834
2101.10451 | Evolution of Small Cell from 4G to 6G: Past, Present, and Future | To boost the capacity of the cellular system, operators started to reuse the same licensed spectrum by deploying 4G LTE small cells (Femto Cells) in the past. But over time, this small-cell licensed spectrum is not sufficient to satisfy future applications like augmented reality (AR) and virtual reality (VR). Hence, cellular operators look for alternate unlicensed spectrum in the Wi-Fi 5 GHz band, which 3GPP later named LTE Licensed Assisted Access (LAA). The recent and current rollout of LAA deployments (in developed nations like the US) provides an opportunity to understand the profound ground truth of coexistence. This paper gives a high-level overview of my past, present, and future research works in the direction of small cell benefits. In the future, we shift the focus onto the latest unlicensed band: 6 GHz, where the latest Wi-Fi version, 802.11ax, will coexist with the latest cellular technology, 5G New Radio (NR), in unlicensed spectrum. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 216,937
2206.14076 | Reasoning about Moving Target Defense in Attack Modeling Formalisms | Since 2009, Moving Target Defense (MTD) has become a new paradigm of defensive mechanism that frequently changes the state of the target system to confuse the attacker. This frequent change is costly and leads to a trade-off between misleading the attacker and disrupting the quality of service. Optimizing the MTD activation frequency is necessary to develop this defense mechanism when facing realistic, multi-step attack scenarios. Attack modeling formalisms based on DAG are prominently used to specify these scenarios. Our contribution is a new DAG-based formalism for MTDs and its translation into a Price Timed Markov Decision Process to find the best activation frequencies against the attacker's time/cost-optimal strategies. For the first time, MTD activation frequencies are analyzed in a state-of-the-art DAG-based representation. Moreover, this is the first paper that considers the specificity of MTDs in the automatic analysis of attack modeling formalisms. Finally, we present some experimental results using Uppaal Stratego to demonstrate its applicability and relevance. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | 305,169 |
2403.01529 | Deep Incremental Model Based Reinforcement Learning: A One-Step Lookback
Approach for Continuous Robotics Control | Model-based reinforcement learning (MBRL) attempts to use an available or a learned model to improve the data efficiency of reinforcement learning. This work proposes a one-step lookback approach that jointly learns the latent-space model and the policy to realize sample-efficient continuous robotic control, wherein control-theoretical knowledge is utilized to decrease the model learning difficulty. Specifically, the so-called one-step backward data is utilized to facilitate the incremental evolution model, an alternative structured representation of the robotics evolution model in the MBRL field. The incremental evolution model accurately predicts the robotics movement but with low sample complexity. This is because the formulated incremental evolution model reduces the model learning difficulty to a parametric matrix learning problem, which is especially favourable to high-dimensional robotics applications. The imagined data from the learned incremental evolution model is used to supplement training data to enhance the sample efficiency. Comparative numerical simulations on benchmark continuous robotics control problems are conducted to validate the efficiency of our proposed one-step lookback approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 434,465
2204.10818 | Convergence of the Riemannian Langevin Algorithm | We study the Riemannian Langevin Algorithm for the problem of sampling from a distribution with density $\nu$ with respect to the natural measure on a manifold with metric $g$. We assume that the target density satisfies a log-Sobolev inequality with respect to the metric and prove that the manifold generalization of the Unadjusted Langevin Algorithm converges rapidly to $\nu$ for Hessian manifolds. This allows us to reduce the problem of sampling non-smooth (constrained) densities in ${\bf R}^n$ to sampling smooth densities over appropriate manifolds, while needing access only to the gradient of the log-density, and this, in turn, to sampling from the natural Brownian motion on the manifold. Our main analytic tools are (1) an extension of self-concordance to manifolds, and (2) a stochastic approach to bounding smoothness on manifolds. A special case of our approach is sampling isoperimetric densities restricted to polytopes by using the metric defined by the logarithmic barrier. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 292,927 |
0903.3114 | Markov Random Field Segmentation of Brain MR Images | We describe a fully-automatic 3D-segmentation technique for brain MR images. Using Markov random fields the segmentation algorithm captures three important MR features, i.e. non-parametric distributions of tissue intensities, neighborhood correlations and signal inhomogeneities. Detailed simulations and real MR images demonstrate the performance of the segmentation algorithm. The impact of noise, inhomogeneity, smoothing and structure thickness is analyzed quantitatively. Even single echo MR images are well classified into gray matter, white matter, cerebrospinal fluid, scalp-bone and background. A simulated annealing and an iterated conditional modes implementation are presented. Keywords: Magnetic Resonance Imaging, Segmentation, Markov Random Fields | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 3,376 |
1909.06154 | A Swash Mass Unmanned Aerial Vehicle: Design, Modeling and Control | In this paper, a new unmanned aerial vehicle (UAV) structure, referred to as the swash mass UAV, is presented. It consists of a double-blade coaxial shaft rotor and four swash masses that allow changing the orientation and maneuvering the UAV. The dynamical system model is derived from Newton's laws. The rotational behavior of the UAV is discussed as a function of the design parameters. Given the uniqueness and the form of the obtained non-linear dynamical system model, a back-stepping control mechanism is proposed. It is obtained following Lyapunov's control approach in each iteration step. Numerical results show that the swash mass UAV can be maneuvered with the proposed control algorithm so that linear and aggressive trajectories can be accurately tracked. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 145,306
2409.07904 | FACT: Feature Adaptive Continual-learning Tracker for Multiple Object
Tracking | Multiple object tracking (MOT) involves identifying multiple targets and assigning them corresponding IDs within a video sequence, where occlusions are often encountered. Recent methods address occlusions using appearance cues through online learning techniques to improve adaptivity or offline learning techniques to utilize temporal information from videos. However, most existing online learning-based MOT methods are unable to learn from all past tracking information to improve adaptivity on long-term occlusions while maintaining real-time tracking speed. On the other hand, temporal information-based offline learning methods maintain a long-term memory to store past tracking information, but this approach restricts them to use only local past information during tracking. To address these challenges, we propose a new MOT framework called the Feature Adaptive Continual-learning Tracker (FACT), which enables real-time tracking and feature learning for targets by utilizing all past tracking information. We demonstrate that the framework can be integrated with various state-of-the-art feature-based trackers, thereby improving their tracking ability. Specifically, we develop the feature adaptive continual-learning (FAC) module, a neural network that can be trained online to learn features adaptively using all past tracking information during tracking. Moreover, we also introduce a two-stage association module specifically designed for the proposed continual learning-based tracking. Extensive experiment results demonstrate that the proposed method achieves state-of-the-art online tracking performance on MOT17 and MOT20 benchmarks. The code will be released upon acceptance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 487,702 |
2209.12243 | Safety-compliant Generative Adversarial Networks for Human Trajectory Forecasting | Human trajectory forecasting in crowds presents the challenges of modelling social interactions and outputting collision-free multimodal distribution. Following the success of Social Generative Adversarial Networks (SGAN), recent works propose various GAN-based designs to better model human motion in crowds. Despite superior performance in reducing distance-based metrics, current networks fail to output socially acceptable trajectories, as evidenced by high collisions in model predictions. To counter this, we introduce SGANv2: an improved safety-compliant SGAN architecture equipped with spatio-temporal interaction modelling and a transformer-based discriminator. The spatio-temporal modelling ability helps to learn the human social interactions better while the transformer-based discriminator design improves temporal sequence modelling. Additionally, SGANv2 utilizes the learned discriminator even at test-time via a collaborative sampling strategy that not only refines the colliding trajectories but also prevents mode collapse, a common phenomenon in GAN training. Through extensive experimentation on multiple real-world and synthetic datasets, we demonstrate the efficacy of SGANv2 to provide socially-compliant multimodal trajectories. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 319,464 |
1810.09732 | A Generalization of Smillie's Theorem on Strongly Cooperative Tridiagonal Systems | Smillie (1984) proved an interesting result on the stability of nonlinear, time-invariant, strongly cooperative, and tridiagonal dynamical systems. This result has found many applications in models from various fields including biology, ecology, and chemistry. Smith (1991) has extended Smillie's result and proved entrainment in the case where the vector field is time-varying and periodic. We use the theory of linear totally nonnegative differential systems developed by Schwarz (1970) to give a generalization of these two results. This is based on weakening the requirement for strong cooperativity to cooperativity, and adding an additional observability-type condition. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 111,118 |
2203.17105 | A Computationally Informed Realisation Algorithm for Lithium-Ion Batteries Implemented with LiiBRA.jl | Real-time battery modelling advancements have quickly become a requirement as the adoption of battery electric vehicles (BEVs) has rapidly increased. In this paper an open-source, improved discrete realisation algorithm, implemented in Julia for creation and simulation of reduced-order, real-time capable physics-based models is presented. This work reduces the Doyle-Fuller-Newman electrochemical model into continuous-form transfer functions and introduces a computationally informed discrete realisation algorithm (CI-DRA) to generate the reduced-order models. Further improvements in conventional offline model creation are obtained as well as achieving in-vehicle capable model creation for ARM based computing architectures. Furthermore, a sensitivity analysis on the resultant computational time is completed as well as experimental validation of a worldwide harmonised light vehicle test procedure (WLTP) for a LG Chem. M50 21700 parameterisation. A performance comparison to the conventional Matlab implemented discrete realisation algorithm (DRA) is completed showcasing a mean computational time improvement of 88%. Finally, an ARM based compilation is investigated for in-vehicle model generation and shows a modest performance reduction of 43% when compared to the x86 implementation while still generating accurate models within 5.5 seconds. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 289,041 |
2210.00923 | Masked Supervised Learning for Semantic Segmentation | Self-attention is of vital importance in semantic segmentation as it enables modeling of long-range context, which translates into improved performance. We argue that it is equally important to model short-range context, especially to tackle cases where not only the regions of interest are small and ambiguous, but also when there exists an imbalance between the semantic classes. To this end, we propose Masked Supervised Learning (MaskSup), an effective single-stage learning paradigm that models both short- and long-range context, capturing the contextual relationships between pixels via random masking. Experimental results demonstrate the competitive performance of MaskSup against strong baselines in both binary and multi-class segmentation tasks on three standard benchmark datasets, particularly at handling ambiguous regions and retaining better segmentation of minority classes with no added inference cost. In addition to segmenting target regions even when large portions of the input are masked, MaskSup is also generic and can be easily integrated into a variety of semantic segmentation methods. We also show that the proposed method is computationally efficient, yielding an improved performance by 10\% on the mean intersection-over-union (mIoU) while requiring $3\times$ fewer learnable parameters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 321,059 |
2312.03041 | Transformer-Based Deep Learning Model for Bored Pile Load-Deformation Prediction in Bangkok Subsoil | This paper presents a novel deep learning model based on the transformer architecture to predict the load-deformation behavior of large bored piles in Bangkok subsoil. The model encodes the soil profile and pile features as tokenization input, and generates the load-deformation curve as output. The model also incorporates the previous sequential data of the load-deformation curve into the decoder to improve the prediction accuracy. The model shows a satisfactory accuracy and generalization ability for the load-deformation curve prediction, with a mean absolute error of 5.72% for the test data. The model could also be used for parametric analysis and design optimization of piles under different soil conditions, pile cross sections, pile lengths, and pile types. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 413,113 |
2406.02197 | A Pipelined Memristive Neural Network Analog-to-Digital Converter | With the advent of high-speed, high-precision, and low-power mixed-signal systems, there is an ever-growing demand for accurate, fast, and energy-efficient analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Unfortunately, with the downscaling of CMOS technology, modern ADCs trade off speed, power and accuracy. Recently, memristive neuromorphic architectures of four-bit ADC/DAC have been proposed. Such converters can be trained in real-time using machine learning algorithms, to break through the speed-power-accuracy trade-off while optimizing the conversion performance for different applications. However, scaling such architectures above four bits is challenging. This paper proposes a scalable and modular neural network ADC architecture based on a pipeline of four-bit converters, preserving their inherent advantages in application reconfiguration, mismatch self-calibration, noise tolerance, and power optimization, while approaching higher resolution and throughput at the penalty of latency. SPICE evaluation shows that an 8-bit pipelined ADC achieves 0.18 LSB INL, 0.20 LSB DNL, 7.6 ENOB, and 0.97 fJ/conv FOM. This work presents a significant step towards the realization of large-scale neuromorphic data converters. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | 460,662 |
2202.11133 | Continual Auxiliary Task Learning | Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behavior to gather useful data for those off-policy predictions. In this work, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behavior policy learning to take actions to improve those auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both prediction learners and the behavior learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and prove the separation into learning successor features and rewards provides convergence rate improvements. We conduct an in-depth study into the resulting multi-prediction learning system. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 281,775 |
1909.09096 | Vision-Based Proprioceptive Sensing for Soft Inflatable Actuators | This paper presents a vision-based sensing approach for a soft linear actuator, which is equipped with an integrated camera. The proposed vision-based sensing pipeline predicts the three-dimensional position of a point of interest on the actuator. To train and evaluate the algorithm, predictions are compared to ground truth data from an external motion capture system. An off-the-shelf distance sensor is integrated in a similar actuator and its performance is used as a baseline for comparison. The resulting sensing pipeline runs at 40 Hz in real-time on a standard laptop and is additionally used for closed loop elongation control of the actuator. It is shown that the approach can achieve comparable accuracy to the distance sensor. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 146,149 |
2003.13827 | Co-occurrence of deep convolutional features for image search | Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNN). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences from deep convolutional features to extract additional relevant information from this last convolutional layer. Combining this co-occurrence map with the feature map, we achieve an improved image representation. We present two different methods to get the co-occurrence representation, the first one based on direct aggregation of activations, and the second one based on a trainable co-occurrence representation. The image descriptors derived from our methodology improve the performance on well-known image retrieval datasets, as we prove in the experiments. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 170,316 |