Columns: id (string, length 9–16) · title (string, length 4–278) · abstract (string, length 3–4.08k) · 18 category label columns, each a bool with 2 classes: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other · __index_level_0__ (int64, 0–541k)
2408.02298
Backward Compatibility in Attributive Explanation and Enhanced Model Training Method
Model update is a crucial process in the operation of ML/AI systems. While updating a model generally enhances the average prediction performance, it also significantly impacts the explanations of predictions. In real-world applications, even minor changes in explanations can have detrimental consequences. To tackle this issue, this paper introduces BCX, a quantitative metric that evaluates the backward compatibility of feature attribution explanations between pre- and post-update models. BCX utilizes practical agreement metrics to calculate the average agreement between the explanations of pre- and post-update models, specifically among samples on which both models predict accurately. In addition, we propose BCXR, a BCX-aware model training method that designs surrogate losses which theoretically lower-bound the agreement scores. Furthermore, we present a universal variant of BCXR that improves all agreement metrics by utilizing the L2 distance between the explanations of the models. To validate our approach, we conducted experiments on eight real-world datasets, demonstrating that BCXR achieves superior trade-offs between predictive performance and BCX scores and showcasing the effectiveness of our BCXR methods.
Categories: cs.LG · __index_level_0__: 478,586
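The agreement idea described in the abstract can be illustrated with a small sketch: a hypothetical top-k agreement metric between the feature attributions of a pre- and post-update model. The function name, toy attribution values, and choice of k are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

def topk_agreement(attr_a, attr_b, k=3):
    """Fraction of overlap between the top-k most important features
    of two attribution vectors (importance = absolute attribution)."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Attributions for one sample from a pre-update and a post-update model.
pre  = np.array([0.9, -0.5, 0.1, 0.05, 0.3])
post = np.array([0.8, -0.1, 0.2, 0.02, 0.4])

print(topk_agreement(pre, post, k=3))  # → 0.6666666666666666
```

Averaging such a score over the samples that both models classify correctly gives a backward-compatibility number in the spirit of BCX.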
2107.01656
IITP at WAT 2021: System description for English-Hindi Multimodal Translation Task
Neural Machine Translation (NMT) is a predominant machine translation technology nowadays because of its end-to-end trainable flexibility. However, NMT still struggles to translate properly in low-resource settings, specifically on distant language pairs. One way to overcome this is to use information from other modalities, if available. The idea is that despite differences in languages, both the source and target language speakers see the same thing, and the visual representation of both the source and target is the same, which can positively assist the system. Multimodal information can help the NMT system improve the translation by removing ambiguity in some phrases or words. We participated in the 8th Workshop on Asian Translation (WAT 2021) English-Hindi multimodal translation task and achieved 42.47 and 37.50 BLEU points for the Evaluation and Challenge subsets, respectively.
Categories: cs.AI, cs.CL · __index_level_0__: 244,554
2309.16882
Message Propagation Through Time: An Algorithm for Sequence Dependency Retention in Time Series Modeling
Time series modeling, a crucial area in science, often encounters challenges when training Machine Learning (ML) models like Recurrent Neural Networks (RNNs) using the conventional mini-batch training strategy that assumes independent and identically distributed (IID) samples and initializes RNNs with zero hidden states. The IID assumption ignores temporal dependencies among samples, resulting in poor performance. This paper proposes the Message Propagation Through Time (MPTT) algorithm to effectively incorporate long temporal dependencies while preserving faster training times relative to the stateful solutions. MPTT utilizes two memory modules to asynchronously manage initial hidden states for RNNs, fostering seamless information exchange between samples and allowing diverse mini-batches throughout epochs. MPTT further implements three policies to filter outdated and preserve essential information in the hidden states to generate informative initial hidden states for RNNs, facilitating robust training. Experimental results demonstrate that MPTT outperforms seven strategies on four climate datasets with varying levels of temporal dependencies.
Categories: cs.LG · __index_level_0__: 395,536
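As a rough sketch of the state-carrying idea in the abstract above (not the actual MPTT algorithm — the memory-dict design, dimensions, and weights here are illustrative assumptions), initial hidden states can be reused across mini-batches instead of being reset to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden weights (toy)
U = rng.normal(size=(2, 4)) * 0.1   # input-to-hidden weights (toy)

def rnn_step(h, x):
    """One vanilla-RNN transition."""
    return np.tanh(h @ W + x @ U)

# Memory module: per-sequence hidden states, carried across mini-batches
# instead of re-initializing the RNN with zeros every batch.
memory = {}

def process_batch(seq_id, batch):
    h = memory.get(seq_id, np.zeros(4))   # reuse the stored state
    for x in batch:
        h = rnn_step(h, x)
    memory[seq_id] = h                    # persist for the next batch
    return h
```

Carrying the state this way makes two consecutive mini-batches of one sequence equivalent to processing the full sequence at once, which is exactly the temporal dependency that zero-initialization under the IID assumption discards.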
2404.14949
Multi-Modal Prompt Learning on Blind Image Quality Assessment
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly. Currently, leveraging semantic information to enhance IQA is a crucial research direction. Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness. However, the generalist nature of these pre-trained Vision-Language (VL) models often renders them suboptimal for IQA-specific tasks. Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings. Existing prompt-based VL models overly focus on incremental semantic information from text, neglecting the rich insights available from visual data analysis. This imbalance limits their performance improvements in IQA tasks. This paper introduces an innovative multi-modal prompt-based methodology for IQA. Our approach employs carefully crafted prompts that synergistically mine incremental semantic information from both visual and linguistic data. Specifically, in the visual branch, we introduce a multi-layer prompt structure to enhance the VL model's adaptability. In the text branch, we deploy a dual-prompt scheme that steers the model to recognize and differentiate between scene category and distortion type, thereby refining the model's capacity to assess image quality. Our experimental findings underscore the effectiveness of our method over existing Blind Image Quality Assessment (BIQA) approaches. Notably, it demonstrates competitive performance across various datasets. Our method achieves Spearman Rank Correlation Coefficient (SRCC) values of 0.961 (surpassing 0.946 in CSIQ) and 0.941 (exceeding 0.930 in KADID), illustrating its robustness and accuracy in diverse contexts.
Categories: cs.CV · __index_level_0__: 448,879
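Since the abstract above reports results as SRCC values, here is a minimal reference computation of Spearman's rank correlation (the Pearson correlation of the ranks); the argsort-based ranking assumes no tied values:

```python
import numpy as np

def srcc(x, y):
    """Spearman rank correlation coefficient, assuming no tied values:
    rank both series, then take the Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Perfectly monotone predictions give SRCC = 1 even if the scale differs.
mos  = np.array([2.0, 3.5, 1.0, 4.5])   # ground-truth quality scores (toy)
pred = np.array([0.2, 0.6, 0.1, 0.9])   # model predictions (toy)
print(srcc(mos, pred))                   # → 1.0
```

Because only ranks matter, SRCC rewards a model that orders images by quality correctly, regardless of the absolute scale of its scores.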
2002.12830
A.I. based Embedded Speech to Text Using Deepspeech
Deepspeech is very useful for developing IoT devices that need voice recognition. One such voice recognition system is Deepspeech from Mozilla, an open-source voice recognition system that uses a neural network to convert a speech spectrogram into a text transcript. This paper shows the implementation process of speech recognition on a low-end computational device. English-language speech recognition, for which many datasets are available, is a good starting point. The models used are the pre-trained models provided with each version of Deepspeech, without any changes to the released models. A further benefit is using a Raspberry Pi as an end-to-end speech recognition device: users can change and modify the speech recognition, and Deepspeech can run as a standalone device without needing a continuous internet connection to process speech recognition. This paper also shows that TensorFlow Lite can make a significant difference in inference with Deepspeech compared to non-Lite TensorFlow. The experiments use Deepspeech versions 0.1.0, 0.1.1, and 0.6.0; version 0.6.0 shows some improvement, processing speech-to-text faster on the older Raspberry Pi 3 B+ hardware.
Categories: cs.SD, cs.LG · __index_level_0__: 166,145
2312.01274
Learning to Compose SuperWeights for Neural Parameter Allocation Search
Neural parameter allocation search (NPAS) automates parameter sharing by obtaining weights for a network given an arbitrary, fixed parameter budget. Prior work has two major drawbacks we aim to address. First, there is a disconnect in the sharing pattern between the search and training steps, where weights are warped for layers of different sizes during the search to measure similarity, but not during training, resulting in reduced performance. To address this, we generate layer weights by learning to compose sets of SuperWeights, which represent a group of trainable parameters. These SuperWeights are created to be large enough so they can be used to represent any layer in the network, but small enough that they are computationally efficient. The second drawback we address is the method of measuring similarity between shared parameters. Whereas prior work compared the weights themselves, we argue this does not take into account the amount of conflict between the shared weights. Instead, we use gradient information to identify layers with shared weights that wish to diverge from each other. We demonstrate that our SuperWeight Networks consistently boost performance over the state-of-the-art on the ImageNet and CIFAR datasets in the NPAS setting. We further show that our approach can generate parameters for many network architectures using the same set of weights. This enables us to support tasks like efficient ensembling and anytime prediction, outperforming fully-parameterized ensembles with 17% fewer parameters.
Categories: cs.CV · __index_level_0__: 412,382
2105.03358
Soft-Attention Improves Skin Cancer Classification Performance
In clinical applications, neural networks must focus on and highlight the most important parts of an input image. The Soft-Attention mechanism enables a neural network to achieve this goal. This paper investigates the effectiveness of Soft-Attention in deep neural architectures. The central aim of Soft-Attention is to boost the value of important features and suppress the noise-inducing features. We compare the performance of VGG, ResNet, InceptionResNetv2 and DenseNet architectures with and without the Soft-Attention mechanism, while classifying skin lesions. The original network, when coupled with Soft-Attention, outperforms the baseline [16] by 4.7% while achieving a precision of 93.7% on the HAM10000 dataset [25]. Additionally, Soft-Attention coupling improves the sensitivity score by 3.8% compared to the baseline [31] and achieves 91.6% on the ISIC-2017 dataset [2]. The code is publicly available at GitHub.
Categories: cs.LG, cs.CV · __index_level_0__: 234,128
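A minimal sketch of the soft-attention idea in the abstract above — an illustrative gating over spatial locations, not the paper's exact layer; the mean-pooled scoring and feature-map shapes are assumptions:

```python
import numpy as np

def soft_attention(feature_map):
    """Score each spatial location, softmax the scores into an attention
    map that sums to 1, and rescale the features so salient locations
    are boosted and less informative ones are suppressed."""
    scores = feature_map.mean(axis=-1)          # (h, w) location scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over locations
    return feature_map * weights[..., None]     # gated feature map

fmap = np.random.default_rng(0).normal(size=(7, 7, 16))
out = soft_attention(fmap)
print(out.shape)  # → (7, 7, 16)
```

In a real network the scoring would be a learned convolution and the gated map would be fused back into the backbone, but the boost-and-suppress mechanism is the same.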
2411.02753
Label Critic: Design Data Before Models
As medical datasets rapidly expand, creating detailed annotations of different body structures becomes increasingly expensive and time-consuming. We consider that requesting radiologists to create detailed annotations is unnecessarily burdensome and that pre-existing AI models can largely automate this process. Following the spirit of "don't use a sledgehammer on a nut", we find that, rather than creating annotations from scratch, radiologists only have to review and edit errors if the Best-AI Labels have mistakes. To obtain the Best-AI Labels among multiple AI Labels, we developed an automatic tool, called Label Critic, that can assess label quality through tireless pairwise comparisons. Extensive experiments demonstrate that, when incorporated with our developed Image-Prompt pairs, pre-existing Large Vision-Language Models (LVLM), trained on natural images and texts, achieve 96.5% accuracy when choosing the best label in a pair-wise comparison, without extra fine-tuning. By transforming the manual annotation task (30-60 min/scan) into an automatic comparison task (15 sec/scan), we effectively reduce the manual efforts required from radiologists by an order of magnitude. When the Best-AI Labels are sufficiently accurate (81% depending on body structures), they will be directly adopted as the gold-standard annotations for the dataset, with lower-quality AI Labels automatically discarded. Label Critic can also check the label quality of a single AI Label with 71.8% accuracy when no alternatives are available for comparison, prompting radiologists to review and edit if the estimated quality is low (19% depending on body structures).
Categories: cs.CV · __index_level_0__: 505,643
2006.00680
Learning Distributed Controllers for V-Formation
We show how a high-performing, fully distributed and symmetric neural V-formation controller can be synthesized from a Centralized MPC (Model Predictive Control) controller using Deep Learning. This result is significant as we also establish that under very reasonable conditions, it is impossible to achieve V-formation using a deterministic, distributed, and symmetric controller. The learning process we use for the neural V-formation controller is significantly enhanced by CEGkR, a Counterexample-Guided k-fold Retraining technique we introduce, which extends prior work in this direction in important ways. Our experimental results show that our neural V-formation controller generalizes to a significantly larger number of agents than for which it was trained (from 7 to 15), and exhibits substantial speedup over the MPC-based controller. We use a form of statistical model checking to compute confidence intervals for our neural V-formation controller's convergence rate and time to convergence.
Categories: cs.MA · __index_level_0__: 179,540
1910.08832
Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model
Natural question generation (QG) aims to generate questions from a passage and an answer. In this paper, we propose a novel reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator where a novel Bidirectional Gated Graph Neural Network is proposed to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. The proposed model outperforms previous state-of-the-art methods by a large margin on the SQuAD dataset.
Categories: cs.CL · __index_level_0__: 149,984
2105.12545
Successive Convex Approximation Based Off-Policy Optimization for Constrained Reinforcement Learning
We propose a successive convex approximation based off-policy optimization (SCAOPO) algorithm to solve the general constrained reinforcement learning problem, which is formulated as a constrained Markov decision process (CMDP) in the context of average cost. The SCAOPO is based on solving a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problems with convex surrogate functions. At each iteration, the convex surrogate problem can be efficiently solved by the Lagrange dual method, even when the policy is parameterized by a high-dimensional function. Moreover, the SCAOPO enables the reuse of old experiences from previous updates, thereby significantly reducing the implementation cost when deployed in real-world engineering systems that need to learn the environment online. In spite of the time-varying state distribution and the stochastic bias incurred by off-policy learning, the SCAOPO with a feasible initial point can still provably converge to a Karush-Kuhn-Tucker (KKT) point of the original problem almost surely.
Categories: cs.LG · __index_level_0__: 237,038
2501.04281
Cluster & Disperse: a general air conflict resolution heuristic using unsupervised learning
We provide a general and malleable heuristic for the air conflict resolution problem. This heuristic is based on a new neighborhood structure for searching the solution space of trajectories and flight levels. Using unsupervised learning, the core idea of our heuristic is to cluster the conflict points and disperse them across various flight levels. Our first algorithm is called Cluster & Disperse; in each iteration it assigns the most problematic flights in each cluster to another flight level. In effect, we shuffle them between the flight levels until we achieve a well-balanced configuration. The Cluster & Disperse algorithm then uses any horizontal plane conflict resolution algorithm as a subroutine to solve these well-balanced instances. In addition, we develop a novel algorithm for the horizontal plane based on a similar idea: we cluster and disperse the conflict points spatially within the same flight level using gradient descent and a social force. We use a novel maneuver, based on the aviation routine of Radius to Fix legs, that makes flights travel on an arc instead of a straight path. Our algorithms can handle a high density of flights within a reasonable computation time. We put their performance in context with some notable algorithms from the literature. Being a general framework, a particular strength of Cluster & Disperse is its malleability, allowing various constraints regarding the aircraft or the environment to be integrated with ease. This is in contrast to models based, for instance, on mixed integer programming.
Categories: cs.LG, cs.RO · __index_level_0__: 523,161
2407.05426
AI in Manufacturing: Market Analysis and Opportunities
In this paper, we explore the transformative impact of Artificial Intelligence (AI) in the manufacturing sector, highlighting its potential to revolutionize industry practices and enhance operational efficiency. We delve into various applications of AI in manufacturing, with a particular emphasis on human-machine interfaces (HMI) and AI-powered milling machines, showcasing how these technologies contribute to more intuitive operations and precision in production processes. Through rigorous market analysis, the paper presents insightful data on AI adoption rates among German manufacturers, comparing these figures with global trends and exploring the specific uses of AI in production, maintenance, customer service, and more. In addition, the paper examines the emerging field of Generative AI and the potential applications of large language models in manufacturing processes. The findings indicate a significant increase in AI adoption from 6% in 2020 to 13.3% in 2023 among German companies, with a projection of substantial economic impact by 2030. The study also addresses the challenges faced by companies, such as data quality and integration hurdles, providing a balanced view of the opportunities and obstacles in AI implementation.
Categories: cs.AI, cs.LG, cs.CY · __index_level_0__: 470,975
2010.03373
Predictive Capability Maturity Quantification using Bayesian Network
In nuclear engineering, modeling and simulations (M&Ss) are widely applied to support risk-informed safety analysis. Since nuclear safety analysis has important implications, a convincing validation process is needed to assess simulation adequacy, i.e., the degree to which M&S tools can adequately represent the system quantities of interest. However, due to data gaps, validation becomes a decision-making process under uncertainties. Expert knowledge and judgments are required to collect, choose, characterize, and integrate evidence toward the final adequacy decision. However, in validation frameworks such as CSAU (Code Scaling, Applicability, and Uncertainty; NUREG/CR-5249) and EMDAP (Evaluation Model Development and Assessment Process; RG 1.203), such a decision-making process is largely implicit and obscure. When scenarios are complex, knowledge biases and unreliable judgments can be overlooked, which could increase uncertainty in the simulation adequacy result and the corresponding risks. Therefore, a framework is required to formalize the decision-making process for simulation adequacy in a practical, transparent, and consistent manner. This paper suggests Predictive Capability Maturity Quantification using Bayesian Network (PCMQBN), a quantified framework for assessing simulation adequacy based on information collected from validation activities. A case study is prepared for evaluating the adequacy of a Smoothed Particle Hydrodynamic simulation in predicting the hydrodynamic forces onto static structures during an external flooding scenario. Compared to the qualitative and implicit adequacy assessment, PCMQBN is able to improve confidence in the simulation adequacy result and to reduce expected loss in the risk-informed safety analysis.
Categories: cs.AI · __index_level_0__: 199,386
1404.0904
On a correlational clustering of integers
Correlation clustering is a concept of machine learning. The ultimate goal of such a clustering is to find a partition with minimal conflicts. In this paper we investigate a correlation clustering of integers, based upon the greatest common divisor.
Categories: cs.AI · __index_level_0__: 32,061
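The abstract's notion of "minimal conflicts" can be made concrete with a toy sketch. The gcd-based similarity rule follows the abstract above; the example partition and the exact counting convention (a conflict is a coprime pair placed together, or a non-coprime pair split apart) are illustrative assumptions:

```python
from math import gcd
from itertools import combinations

def conflicts(partition):
    """Count conflicts for a clustering of integers where two integers
    are 'similar' iff gcd > 1: a conflict is a dissimilar pair inside
    one cluster, or a similar pair split across two clusters."""
    bad = 0
    for cluster in partition:
        for a, b in combinations(cluster, 2):
            if gcd(a, b) == 1:          # coprime pair grouped together
                bad += 1
    for c1, c2 in combinations(partition, 2):
        for a in c1:
            for b in c2:
                if gcd(a, b) > 1:       # similar pair split apart
                    bad += 1
    return bad

# {5, 7} are coprime (1 conflict); 6 shares factors with 3 and 9
# across clusters (2 conflicts).
print(conflicts([[2, 4, 6], [3, 9], [5, 7]]))  # → 3
```

Correlation clustering then searches for the partition minimizing this count.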
0905.2501
Macrodynamics of users' behavior in Information Retrieval
We present a method to geometrize massive data sets from search engines query logs. For this purpose, a macrodynamic-like quantitative model of the Information Retrieval (IR) process is developed, whose paradigm is inspired by basic constructions of Einstein's general relativity theory in which all IR objects are uniformly placed in a common Room. The Room has a structure similar to Einsteinian spacetime, namely that of a smooth manifold. Documents and queries are treated as matter objects and sources of material fields. Relevance, the central notion of IR, becomes a dynamical issue controlled by both gravitation (or, more precisely, as the motion in a curved spacetime) and forces originating from the interactions of matter fields. The spatio-temporal description ascribes dynamics to any document or query, thus providing a uniform description for documents of both initially static and dynamical nature. Within the IR context, the techniques presented are based on two ideas. The first is the placement of all objects participating in IR into a common continuous space. The second idea is the `objectivization' of the IR process; instead of expressing users' wishes, we consider the overall IR as an objective physical process, representing the IR process in terms of motion in a given external-fields configuration. Various semantic environments are treated as various IR universes.
Categories: cs.IR · __index_level_0__: 3,701
2301.04352
Graph based Environment Representation for Vision-and-Language Navigation in Continuous Environments
Vision-and-Language Navigation in Continuous Environments (VLN-CE) is a navigation task that requires an agent to follow a language instruction in a realistic environment. Understanding the environment is a crucial part of the VLN-CE task, but existing methods are relatively simple and direct in understanding the environment, without delving into the relationship between language instructions and visual environments. Therefore, we propose a new environment representation to solve the above problems. First, we propose an Environment Representation Graph (ERG) built through object detection to express the environment at the semantic level. This operation enhances the relationship between language and environment. Then, the relational representations of object-object and object-agent pairs in the ERG are learned through a GCN, so as to obtain a continuous expression of the ERG. Subsequently, we combine the ERG expression with object label embeddings to obtain the environment representation. Finally, a new cross-modal attention navigation framework is proposed, incorporating our environment representation and a special loss function dedicated to training the ERG. Experimental results show that our method achieves satisfactory performance in terms of success rate on VLN-CE tasks. Further analysis explains that our method attains better cross-modal matching and strong generalization ability.
Categories: cs.AI, cs.RO, cs.CV · __index_level_0__: 340,036
2207.14298
Learning Personalized Representations using Graph Convolutional Network
Generating representations that precisely reflect customers' behavior is an important task for providing a personalized skill routing experience in Alexa. Currently, the Dynamic Routing (DR) team, which is responsible for routing Alexa traffic to providers or skills, relies on two features to serve as personal signals: the absolute traffic count and the normalized traffic count of every skill usage per customer. Neither of them considers the network-based structure of interactions between customers and skills, which contains richer information about customer preferences. In this work, we first build a heterogeneous edge-attributed graph based on customers' past interactions with the invoked skills, in which the user requests (utterances) are modeled as edges. Then we propose a graph convolutional network (GCN) based model, namely the Personalized Dynamic Routing Feature Encoder (PDRFE), that generates personalized customer representations learned from the built graph. Compared with existing models, PDRFE is able to further capture contextual information in the graph convolutional function. The performance of our proposed model is evaluated on a downstream task, defect prediction, which predicts the defect label from the learned embeddings of customers and their triggered skills. We observe up to 41% improvement on the cross entropy metric for our proposed models compared to the baselines.
Categories: cs.AI, cs.LG · __index_level_0__: 310,542
2205.13832
Counterfactual Analysis in Dynamic Latent State Models
We provide an optimization-based framework to perform counterfactual analysis in a dynamic model with hidden states. Our framework is grounded in the ``abduction, action, and prediction'' approach to answer counterfactual queries and handles two key challenges where (1) the states are hidden and (2) the model is dynamic. Recognizing the lack of knowledge on the underlying causal mechanism and the possibility of infinitely many such mechanisms, we optimize over this space and compute upper and lower bounds on the counterfactual quantity of interest. Our work brings together ideas from causality, state-space models, simulation, and optimization, and we apply it on a breast cancer case study. To the best of our knowledge, we are the first to compute lower and upper bounds on a counterfactual query in a dynamic latent-state model.
Categories: cs.LG · __index_level_0__: 299,107
1909.04556
Human Languages in Source Code: Auto-Translation for Localized Instruction
Computer science education has promised open access around the world, but access is largely determined by what human language you speak. As younger students learn computer science it is less appropriate to assume that they should learn English beforehand. To that end we present CodeInternational, the first tool to translate code between human languages. To develop a theory of non-English code, and inform our translation decisions, we conduct a study of public code repositories on GitHub. The study is to the best of our knowledge the first on human-language in code and covers 2.9 million Java repositories. To demonstrate CodeInternational's educational utility, we build an interactive version of the popular English-language Karel reader and translate it into 100 spoken languages. Our translations have already been used in classrooms around the world, and represent a first step in an important open CS-education problem.
Categories: cs.CL · __index_level_0__: 144,837
2211.07004
Advancing Learned Video Compression with In-loop Frame Prediction
Recent years have witnessed an increasing interest in end-to-end learned video compression. Most previous works explore temporal redundancy by detecting and compressing a motion map to warp the reference frame towards the target frame. Yet, these methods fail to adequately take advantage of the historical priors in the sequential reference frames. In this paper, we propose an Advanced Learned Video Compression (ALVC) approach with an in-loop frame prediction module, which is able to effectively predict the target frame from the previously compressed frames, without consuming any bit-rate. The predicted frame can serve as a better reference than the previously compressed frame, and therefore it benefits the compression performance. The proposed in-loop prediction module is a part of the end-to-end video compression and is jointly optimized in the whole framework. We propose the recurrent and the bi-directional in-loop prediction modules for compressing P-frames and B-frames, respectively. The experiments show the state-of-the-art performance of our ALVC approach in learned video compression. We also outperform the default hierarchical B mode of x265 in terms of PSNR and beat the slowest mode of the SSIM-tuned x265 on MS-SSIM. The project page: https://github.com/RenYang-home/ALVC.
Categories: cs.CV · __index_level_0__: 330,095
2307.05182
CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery
Medical students and junior surgeons often rely on senior surgeons and specialists to answer their questions when learning surgery. However, experts are often busy with clinical and academic work, and have little time to give guidance. Meanwhile, existing deep learning (DL)-based surgical Visual Question Answering (VQA) systems can only provide simple answers without the location of the answers. In addition, vision-language (ViL) embedding is still a less explored research in these kinds of tasks. Therefore, a surgical Visual Question Localized-Answering (VQLA) system would be helpful for medical students and junior surgeons to learn and understand from recorded surgical videos. We propose an end-to-end Transformer with the Co-Attention gaTed Vision-Language (CAT-ViL) embedding for VQLA in surgical scenarios, which does not require feature extraction through detection models. The CAT-ViL embedding module is designed to fuse multimodal features from visual and textual sources. The fused embedding will feed a standard Data-Efficient Image Transformer (DeiT) module, before the parallel classifier and detector for joint prediction. We conduct the experimental validation on public surgical videos from MICCAI EndoVis Challenge 2017 and 2018. The experimental results highlight the superior performance and robustness of our proposed model compared to the state-of-the-art approaches. Ablation studies further prove the outstanding performance of all the proposed components. The proposed method provides a promising solution for surgical scene understanding, and opens up a primary step in the Artificial Intelligence (AI)-based VQLA system for surgical training. Our code is publicly available.
Categories: cs.AI, cs.RO, cs.CV · __index_level_0__: 378,653
2306.08915
Prompt Performance Prediction for Image Generation
The ability to predict the performance of a query before results are returned has been a longstanding challenge in Information Retrieval (IR) systems. Inspired by this task, we introduce, in this paper, a novel task called "Prompt Performance Prediction" (PPP) that aims to predict the performance of a prompt before obtaining the actual generated images. We demonstrate the plausibility of our task by measuring the correlation coefficient between predicted and actual performance scores across three datasets containing pairs of prompts and generated images, and three art-domain datasets of real images and real user appreciation ratings. Our results show promising performance prediction capabilities, suggesting potential applications for optimizing user prompts.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
373,600
2004.09205
A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT
Recently, multilingual BERT works remarkably well on cross-lingual transfer tasks, superior to static non-contextualized word embeddings. In this work, we provide an in-depth experimental study to supplement the existing literature of cross-lingual ability. We compare the cross-lingual ability of non-contextualized and contextualized representation models with the same data. We found that data size and context window size are crucial factors for transferability. We also observe the language-specific information in multilingual BERT. By manipulating the latent representations, we can control the output languages of multilingual BERT, and achieve unsupervised token translation. We further show that based on the observation, there is a computationally cheap but effective approach to improve the cross-lingual ability of multilingual BERT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
173,283
1907.01041
Natural Language Understanding with the Quora Question Pairs Dataset
This paper explores the task Natural Language Understanding (NLU) by looking at duplicate question detection in the Quora dataset. We conducted extensive exploration of the dataset and used various machine learning models, including linear and tree-based models. Our final finding was that a simple Continuous Bag of Words neural network model had the best performance, outdoing more complicated recurrent and attention based models. We also conducted error analysis and found some subjectivity in the labeling of the dataset.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
137,203
2407.05328
Blind Bistatic Radar Parameter Estimation for AFDM Systems in Doubly-Dispersive Channels
We propose a novel method for blind bistatic radar parameter estimation (RPE), which enables integrated sensing and communications (ISAC) by allowing passive (receive) base stations (BSs) to extract radar parameters (ranges and velocities of targets), without requiring knowledge of the information sent by an active (transmit) BS to its users. The contributed method is formulated with basis on the covariance of received signals, and under a generalized doubly-dispersive channel model compatible with most of the waveforms typically considered for ISAC, such as orthogonal frequency division multiplexing (OFDM), orthogonal time frequency space (OTFS) and affine frequency division multiplexing (AFDM). The original non-convex problem, which includes an $\ell_0$-norm regularization term in order to mitigate clutter, is solved not by relaxation to an $\ell_1$-norm, but by introducing an arbitrarily-tight approximation then relaxed via fractional programming (FP). Simulation results show that the performance of the proposed method approaches that of an ideal system with perfect knowledge of the transmit signal covariance with an increasing number of transmit frames.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
470,925
2202.10898
Relational Algebra and Calculus with SQL Null Values
The logic of nulls in databases has been subject of investigation since their introduction in Codd's Relational Model, which is the foundation of the SQL standard. We show a logical characterisation of a first-order fragment of SQL with null values, by first focussing on a simple extension with null values of standard relational algebra, which captures exactly the SQL fragment, and then proposing two different domain relational calculi, in which the null value is a term of the language but it does not appear as an element of the semantic interpretation domain of the logics. In one calculus, a relation can be seen as a set of partial tuples, while in the other (equivalent) calculus, a relation is horizontally decomposed as a set of relations each one holding regular total tuples. We extend Codd's theorem by proving the equivalence of the relational algebra with both domain relational calculi in presence of SQL null values.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
true
281,703
2412.19237
SeaMo: A Multi-Seasonal and Multimodal Remote Sensing Foundation Model
Remote Sensing (RS) data contains a wealth of multi-dimensional information crucial for Earth observation. Owing to its vast volume, diverse sources, and temporal properties, RS data is highly suitable for the development of large Visual Foundation Models (VFMs). VFMs act as robust feature extractors, learning from extensive RS data, and are subsequently fine-tuned for deployment in various geoscientific tasks. However, current VFMs in the RS domain are predominantly pretrained and tailored exclusively for specific characteristics of RS imagery, neglecting the potential of utilizing the multi-dimensional properties of RS data. Therefore, in this work, we propose SeaMo, a pioneering visual foundation model that integrates multi-seasonal and multimodal information in the RS field. SeaMo is designed to harness multiple properties of RS data. Within the masked image modeling framework, we employ non-aligned cropping techniques to extract spatial properties, use multi-source inputs for multimodal integration, and incorporate temporal-multimodal fusion blocks for effective assimilation of multi-seasonal data. SeaMo explicitly models the multi-dimensional properties of RS data, making the model more comprehensive, robust, and versatile. We applied SeaMo to several downstream geoscience tasks, which demonstrated exceptional performance. Extensive ablation studies were conducted to validate the model's superiority.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
520,770
1907.02323
DeePCCI: Deep Learning-based Passive Congestion Control Identification
Transport protocols use congestion control to avoid overloading a network. Nowadays, different congestion control variants exist that influence performance. Studying their use is thus relevant, but it is hard to identify which variant is used. While passive identification approaches exist, these require detailed domain knowledge and often also rely on outdated assumptions about how congestion control operates and what data is accessible. We present DeePCCI, a passive, deep learning-based congestion control identification approach which does not need any domain knowledge other than training traffic of a congestion control variant. By only using packet arrival data, it is also directly applicable to encrypted (transport header) traffic. DeePCCI is therefore more easily extendable and can also be used with QUIC.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
137,587
2206.03291
GAAF: Searching Activation Functions for Binary Neural Networks through Genetic Algorithm
Binary neural networks (BNNs) show promising utilization in cost and power-restricted domains such as edge devices and mobile systems. This is due to its significantly less computation and storage demand, but at the cost of degraded performance. To close the accuracy gap, in this paper we propose to add a complementary activation function (AF) ahead of the sign based binarization, and rely on the genetic algorithm (GA) to automatically search for the ideal AFs. These AFs can help extract extra information from the input data in the forward pass, while allowing improved gradient approximation in the backward pass. Fifteen novel AFs are identified through our GA-based search, while most of them show improved performance (up to 2.54% on ImageNet) when testing on different datasets and network models. Our method offers a novel approach for designing general and application-specific BNN architecture. Our code is available at http://github.com/flying-Yan/GAAF.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
301,217
2407.13146
PG-Rainbow: Using Distributional Reinforcement Learning in Policy Gradient Methods
This paper introduces PG-Rainbow, a novel algorithm that incorporates a distributional reinforcement learning framework with a policy gradient algorithm. Existing policy gradient methods are sample inefficient and rely on the mean of returns when calculating the state-action value function, neglecting the distributional nature of returns in reinforcement learning tasks. To address this issue, we use an Implicit Quantile Network that provides the quantile information of the distribution of rewards to the critic network of the Proximal Policy Optimization algorithm. We show empirical results that through the integration of reward distribution information into the policy network, the policy agent acquires enhanced capabilities to comprehensively evaluate the consequences of potential actions in a given state, facilitating more sophisticated and informed decision-making processes. We evaluate the performance of the proposed algorithm in the Atari-2600 game suite, simulated via the Arcade Learning Environment (ALE).
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
474,253
2004.14619
The 4th AI City Challenge
The AI City Challenge was created to accelerate intelligent video analysis that helps make cities smarter and safer. Transportation is one of the largest segments that can benefit from actionable insights derived from data captured by sensors, where computer vision and deep learning have shown promise in achieving large-scale practical deployment. The 4th annual edition of the AI City Challenge has attracted 315 participating teams across 37 countries, who leveraged city-scale real traffic data and high-quality synthetic data to compete in four challenge tracks. Track 1 addressed video-based automatic vehicle counting, where the evaluation is conducted on both algorithmic effectiveness and computational efficiency. Track 2 addressed city-scale vehicle re-identification with augmented synthetic data to substantially increase the training set for the task. Track 3 addressed city-scale multi-target multi-camera vehicle tracking. Track 4 addressed traffic anomaly detection. The evaluation system shows two leader boards, in which a general leader board shows all submitted results, and a public leader board shows results limited to our contest participation rules, that teams are not allowed to use external data in their work. The public leader board shows results more close to real-world situations where annotated data are limited. Our results show promise that AI technology can enable smarter and safer transportation systems.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
true
false
174,955
1806.06887
The Minimax Learning Rates of Normal and Ising Undirected Graphical Models
Let $G$ be an undirected graph with $m$ edges and $d$ vertices. We show that $d$-dimensional Ising models on $G$ can be learned from $n$ i.i.d. samples within expected total variation distance some constant factor of $\min\{1, \sqrt{(m + d)/n}\}$, and that this rate is optimal. We show that the same rate holds for the class of $d$-dimensional multivariate normal undirected graphical models with respect to $G$. We also identify the optimal rate of $\min\{1, \sqrt{m/n}\}$ for Ising models with no external magnetic field.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
100,779
2201.05666
Reliable Causal Discovery with Improved Exact Search and Weaker Assumptions
Many of the causal discovery methods rely on the faithfulness assumption to guarantee asymptotic correctness. However, the assumption can be approximately violated in many ways, leading to sub-optimal solutions. Although there is a line of research in Bayesian network structure learning that focuses on weakening the assumption, such as exact search methods with well-defined score functions, they do not scale well to large graphs. In this work, we introduce several strategies to improve the scalability of exact score-based methods in the linear Gaussian setting. In particular, we develop a super-structure estimation method based on the support of inverse covariance matrix which requires assumptions that are strictly weaker than faithfulness, and apply it to restrict the search space of exact search. We also propose a local search strategy that performs exact search on the local clusters formed by each variable and its neighbors within two hops in the super-structure. Numerical experiments validate the efficacy of the proposed procedure, and demonstrate that it scales up to hundreds of nodes with a high accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
275,450
2404.09690
Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration
The emergence of Large Multimodal Models (LMMs) marks a significant milestone in the development of artificial intelligence. Insurance, as a vast and complex discipline, involves a wide variety of data forms in its operational processes, including text, images, and videos, thereby giving rise to diverse multimodal tasks. Despite this, there has been limited systematic exploration of multimodal tasks specific to insurance, nor a thorough investigation into how LMMs can address these challenges. In this paper, we explore GPT-4V's capabilities in the insurance domain. We categorize multimodal tasks by focusing primarily on visual aspects based on types of insurance (e.g., auto, household/commercial property, health, and agricultural insurance) and insurance stages (e.g., risk assessment, risk monitoring, and claims processing). Our experiment reveals that GPT-4V exhibits remarkable abilities in insurance-related tasks, demonstrating not only a robust understanding of multimodal content in the insurance domain but also a comprehensive knowledge of insurance scenarios. However, there are notable shortcomings: GPT-4V struggles with detailed risk rating and loss assessment, suffers from hallucination in image understanding, and shows variable support for different languages. Through this work, we aim to bridge the insurance domain with cutting-edge LMM technology, facilitate interdisciplinary exchange and development, and provide a foundation for the continued advancement and evolution of future research endeavors.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
446,793
2210.10403
Segmentation-free Direct Iris Localization Networks
This paper proposes an efficient iris localization method without using iris segmentation and circle fitting. Conventional iris localization methods first extract iris regions by using semantic segmentation methods such as U-Net. Afterward, the inner and outer iris circles are localized using the traditional circle fitting algorithm. However, this approach requires high-resolution encoder-decoder networks for iris segmentation, so it causes computational costs to be high. In addition, traditional circle fitting tends to be sensitive to noise in input images and fitting parameters, causing the iris recognition performance to be poor. To solve these problems, we propose an iris localization network (ILN), that can directly localize pupil and iris circles with eyelid points from a low-resolution iris image. We also introduce a pupil refinement network (PRN) to improve the accuracy of pupil localization. Experimental results show that the combination of ILN and PRN works in 34.5 ms for one iris image on a CPU, and its localization performance outperforms conventional iris segmentation methods. In addition, generalized evaluation results show that the proposed method has higher robustness for datasets in different domains than other segmentation methods. Furthermore, we also confirm that the proposed ILN and PRN improve the iris recognition accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,904
2411.08010
ExpressivityArena: Can LLMs Express Information Implicitly?
While Large Language Models (LLMs) have demonstrated remarkable performance in certain dimensions, their ability to express implicit language cues that human use for effective communication remains unclear. This paper presents ExpressivityArena, a Python library for measuring the implicit communication abilities of LLMs. We provide a comprehensive framework to evaluate expressivity of arbitrary LLMs and explore its practical implications. To this end, we refine the definition and measurements of ``expressivity,'' and use our framework in a set of small experiments. These experiments test LLMs in creative and logical tasks such as poetry, coding, and emotion-based responses. They are then evaluated by an automated grader, through ExpressivityArena, which we verify to be the most pragmatic for testing expressivity. Building on these experiments, we deepen our understanding of the expressivity of LLMs by assessing their ability to remain expressive in conversations. Our findings indicate that LLMs are capable of generating and understanding expressive content, however, with some limitations. These insights will inform the future development and deployment of expressive LLMs. We provide the code for ExpressivityArena alongside our paper.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
507,746
2008.01705
Faded-Experience Trust Region Policy Optimization for Model-Free Power Allocation in Interference Channel
Policy gradient reinforcement learning techniques enable an agent to directly learn an optimal action policy through the interactions with the environment. Nevertheless, despite its advantages, it sometimes suffers from slow convergence speed. Inspired by human decision making approach, we work toward enhancing its convergence speed by augmenting the agent to memorize and use the recently learned policies. We apply our method to the trust-region policy optimization (TRPO), primarily developed for locomotion tasks, and propose faded-experience (FE) TRPO. To substantiate its effectiveness, we adopt it to learn continuous power control in an interference channel when only noisy location information of devices is available. Results indicate that with FE-TRPO it is possible to almost double the learning speed compared to TRPO. Importantly, our method neither increases the learning complexity nor imposes performance loss.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
190,416
2312.15855
Geometric-Aware Low-Light Image and Video Enhancement via Depth Guidance
Low-Light Enhancement (LLE) is aimed at improving the quality of photos/videos captured under low-light conditions. It is worth noting that most existing LLE methods do not take advantage of geometric modeling. We believe that incorporating geometric information can enhance LLE performance, as it provides insights into the physical structure of the scene that influences illumination conditions. To address this, we propose a Geometry-Guided Low-Light Enhancement Refine Framework (GG-LLERF) designed to assist low-light enhancement models in learning improved features for LLE by integrating geometric priors into the feature representation space. In this paper, we employ depth priors as the geometric representation. Our approach focuses on the integration of depth priors into various LLE frameworks using a unified methodology. This methodology comprises two key novel modules. First, a depth-aware feature extraction module is designed to inject depth priors into the image representation. Then, Hierarchical Depth-Guided Feature Fusion Module (HDGFFM) is formulated with a cross-domain attention mechanism, which combines depth-aware features with the original image features within the LLE model. We conducted extensive experiments on public low-light image and video enhancement benchmarks. The results illustrate that our designed framework significantly enhances existing LLE methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
418,160
2004.04645
Query-Focused EHR Summarization to Aid Imaging Diagnosis
Electronic Health Records (EHRs) provide vital contextual information to radiologists and other physicians when making a diagnosis. Unfortunately, because a given patient's record may contain hundreds of notes and reports, identifying relevant information within these in the short time typically allotted to a case is very difficult. We propose and evaluate models that extract relevant text snippets from patient records to provide a rough case summary intended to aid physicians considering one or more diagnoses. This is hard because direct supervision (i.e., physician annotations of snippets relevant to specific diagnoses in medical records) is prohibitively expensive to collect at scale. We propose a distantly supervised strategy in which we use groups of International Classification of Diseases (ICD) codes observed in 'future' records as noisy proxies for 'downstream' diagnoses. Using this we train a transformer-based neural model to perform extractive summarization conditioned on potential diagnoses. This model defines an attention mechanism that is conditioned on potential diagnoses (queries) provided by the diagnosing physician. We train (via distant supervision) and evaluate variants of this model on EHR data from Brigham and Women's Hospital in Boston and MIMIC-III (the latter to facilitate reproducibility). Evaluations performed by radiologists demonstrate that these distantly supervised models yield better extractive summaries than do unsupervised approaches. Such models may aid diagnosis by identifying sentences in past patient reports that are clinically relevant to a potential diagnosis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
171,944
2401.00955
Learning Long Sequences in Spiking Neural Networks
Spiking neural networks (SNNs) take inspiration from the brain to enable energy-efficient computations. Since the advent of Transformers, SNNs have struggled to compete with artificial networks on modern sequential tasks, as they inherit limitations from recurrent neural networks (RNNs), with the added challenge of training with non-differentiable binary spiking activations. However, a recent renewed interest in efficient alternatives to Transformers has given rise to state-of-the-art recurrent architectures named state space models (SSMs). This work systematically investigates, for the first time, the intersection of state-of-the-art SSMs with SNNs for long-range sequence modelling. Results suggest that SSM-based SNNs can outperform the Transformer on all tasks of a well-established long-range sequence modelling benchmark. It is also shown that SSM-based SNNs can outperform current state-of-the-art SNNs with fewer parameters on sequential image classification. Finally, a novel feature mixing layer is introduced, improving SNN accuracy while challenging assumptions about the role of binary activations in SNNs. This work paves the way for deploying powerful SSM-based architectures, such as large language models, to neuromorphic hardware for energy-efficient long-range sequence modelling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
419,167
1912.08057
A Novel Self-Organizing PID Approach for Controlling Mobile Robot Locomotion
A novel self-organizing fuzzy proportional-integral-derivative (SOF-PID) control system is proposed in this paper. The proposed system consists of a pair of control and reference models, both of which are implemented by a first-order autonomous learning multiple model (ALMMo) neuro-fuzzy system. The SOF-PID controller self-organizes and self-updates the structures and meta-parameters of both the control and reference models during the control process "on the fly". This gives the SOF-PID control system the capability of quickly adapting to entirely new operating environments without a full re-training. Moreover, the SOF-PID control system is free from user- and problem-specific parameters, and the uniform stability of the SOF-PID control system is theoretically guaranteed. Simulations and real-world experiments with mobile robots demonstrate the effectiveness and validity of the proposed SOF-PID control system.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
157,748
1906.11981
Convolution Based Spectral Partitioning Architecture for Hyperspectral Image Classification
Hyperspectral images (HSIs) can distinguish materials with a high number of spectral bands, which is widely adopted in remote sensing applications and benefits high-accuracy land cover classification. However, HSI processing is entangled with the problem of high dimensionality and a limited amount of labelled data. To address these challenges, this paper proposes a deep learning architecture using three-dimensional convolutional neural networks with spectral partitioning to perform effective feature extraction. We conduct experiments using Indian Pines and Salinas scenes acquired by the NASA Airborne Visible/Infra-Red Imaging Spectrometer. In comparison to prior results, our architecture shows competitive performance for classification results over current methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
136,813
2406.16018
Comprehensive characterization of three-qubit Grover search algorithm on IBM's 127-qubit superconducting quantum computers
The Grover search algorithm is a pivotal advancement in quantum computing, promising a remarkable speedup over classical algorithms in searching unstructured large databases. Here, we report results for the implementation and characterization of a three-qubit Grover search algorithm using the state-of-the-art scalable quantum computing technology of superconducting quantum architectures. To delve into the algorithm's scalability and performance metrics, our investigation spans the execution of the algorithm across all eight conceivable single-result oracles, alongside nine two-result oracles, employing IBM Quantum's 127-qubit quantum computers. Moreover, we conduct five quantum state tomography experiments to precisely gauge the behavior and efficiency of our implemented algorithm under diverse conditions; ranging from noisy, noise-free environments to the complexities of real-world quantum hardware. By connecting theoretical concepts with real-world experiments, this study not only shed light on the potential of NISQ (Noisy Intermediate-Scale Quantum) computers in facilitating large-scale database searches but also offer valuable insights into the practical application of the Grover search algorithm in real-world quantum computing applications.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
true
466,963
1403.4224
Learning Negative Mixture Models by Tensor Decompositions
This work considers the problem of estimating the parameters of negative mixture models, i.e. mixture models that possibly involve negative weights. The contributions of this paper are as follows. (i) We show that every rational probability distributions on strings, a representation which occurs naturally in spectral learning, can be computed by a negative mixture of at most two probabilistic automata (or HMMs). (ii) We propose a method to estimate the parameters of negative mixture models having a specific tensor structure in their low order observable moments. Building upon a recent paper on tensor decompositions for learning latent variable models, we extend this work to the broader setting of tensors having a symmetric decomposition with positive and negative weights. We introduce a generalization of the tensor power method for complex valued tensors, and establish theoretical convergence guarantees. (iii) We show how our approach applies to negative Gaussian mixture models, for which we provide some experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
31,632
2203.05117
Optimal Methods for Convex Risk Averse Distributed Optimization
This paper studies the communication complexity of convex risk-averse optimization over a network. The problem generalizes the well-studied risk-neutral finite-sum distributed optimization problem and its importance stems from the need to handle risk in an uncertain environment. For algorithms in the literature, there exists a gap in communication complexities for solving risk-averse and risk-neutral problems. We propose two distributed algorithms, namely the distributed risk averse optimization (DRAO) method and the distributed risk averse optimization with sliding (DRAO-S) method, to close the gap. Specifically, the DRAO method achieves the optimal communication complexity by assuming a certain saddle point subproblem can be easily solved in the server node. The DRAO-S method removes the strong assumption by introducing a novel saddle point sliding subroutine which only requires the projection over the ambiguity set $P$. We observe that the number of $P$-projections performed by DRAO-S is optimal. Moreover, we develop matching lower complexity bounds to show the communication complexities of both DRAO and DRAO-S to be improvable. Numerical experiments are conducted to demonstrate the encouraging empirical performance of the DRAO-S method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
284,707
2303.01614
STEP: Stochastic Traversability Evaluation and Planning for Risk-Aware Off-road Navigation; Results from the DARPA Subterranean Challenge
Although autonomy has gained widespread usage in structured and controlled environments, robotic autonomy in unknown and off-road terrain remains a difficult problem. Extreme, off-road, and unstructured environments such as undeveloped wilderness, caves, rubble, and other post-disaster sites pose unique and challenging problems for autonomous navigation. Based on our participation in the DARPA Subterranean Challenge, we propose an approach to improve autonomous traversal of robots in subterranean environments that are perceptually degraded and completely unknown through a traversability and planning framework called STEP (Stochastic Traversability Evaluation and Planning). We present 1) rapid uncertainty-aware mapping and traversability evaluation, 2) tail risk assessment using the Conditional Value-at-Risk (CVaR), 3) efficient risk and constraint-aware kinodynamic motion planning using sequential quadratic programming-based (SQP) model predictive control (MPC), 4) fast recovery behaviors to account for unexpected scenarios that may cause failure, and 5) risk-based gait adaptation for quadrupedal robots. We illustrate and validate extensive results from our experiments on wheeled and legged robotic platforms in field studies at the Valentine Cave, CA (cave environment), Kentucky Underground, KY (mine environment), and Louisville Mega Cavern, KY (final competition site for the DARPA Subterranean Challenge with tunnel, urban, and cave environments).
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
349,036
1702.08574
Millimeter Wave Beam-Selection Using Out-of-Band Spatial Information
Millimeter wave (mmWave) communication is one feasible solution for high data-rate applications like vehicular-to-everything communication and next generation cellular communication. Configuring mmWave links, which can be done through channel estimation or beam-selection, however, is a source of significant overhead. In this paper, we propose to use spatial information extracted at sub-6 GHz to help establish the mmWave link. First, we review the prior work on frequency dependent channel behavior and outline a simulation strategy to generate multi-band frequency dependent channels. Second, assuming: (i) narrowband channels and a fully digital architecture at sub-6 GHz; and (ii) wideband frequency selective channels, OFDM signaling, and an analog architecture at mmWave, we outline strategies to incorporate sub-6 GHz spatial information in mmWave compressed beam selection. We formulate compressed beam-selection as a weighted sparse signal recovery problem, and obtain the weighting information from sub-6 GHz channels. In addition, we outline a structured precoder/combiner design to tailor the training to out-of-band information. We also extend the proposed out-of-band aided compressed beam-selection approach to leverage information from all active OFDM subcarriers. The simulation results for achievable rate show that out-of-band aided beam-selection can reduce the training overhead of in-band only beam-selection by 4x.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
69,015
2212.11325
Bent functions and strongly regular graphs
The family of bent functions is a known class of Boolean functions, which have great importance in cryptography. The Cayley graph defined on $\mathbb{Z}_{2}^{n}$ by the support of a bent function is a strongly regular graph $srg(v,k,\lambda,\mu)$ with $\lambda=\mu$. In this note we list the parameters of such Cayley graphs. Moreover, we give a condition on $(n,m)$-bent functions $F=(f_1,\ldots,f_m)$, involving the support of their components $f_i$, and their $n$-ary symmetric differences.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
337,756
1603.06159
Fast Incremental Method for Nonconvex Optimization
We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form $\min_x \sum_i f_i(x)$. Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss Polyak's special class of nonconvex problems, for which SAGA converges at a linear rate to the global optimum. Finally, we analyze the practically valuable regularized and minibatch variants of SAGA. To our knowledge, this paper presents the first analysis of fast convergence for an incremental aggregated gradient method for nonconvex problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
53,453
2307.06699
Parmesan: mathematical concept extraction for education
Mathematics is a highly specialized domain with its own unique set of challenges that has seen limited study in natural language processing. However, mathematics is used in a wide variety of fields and multidisciplinary research in many different domains often relies on an understanding of mathematical concepts. To aid researchers coming from other fields, we develop a prototype system for searching for and defining mathematical concepts in context, focusing on the field of category theory. This system, Parmesan, depends on natural language processing components including concept extraction, relation extraction, definition extraction, and entity linking. In developing this system, we show that existing techniques cannot be applied directly to the category theory domain, and suggest hybrid techniques that do perform well, though we expect the system to evolve over time. We also provide two cleaned mathematical corpora that power the prototype system, which are based on journal articles and wiki pages, respectively. The corpora have been annotated with dependency trees, lemmas, and part-of-speech tags.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
379,151
2410.01669
Sparse Covariance Neural Networks
Covariance Neural Networks (VNNs) perform graph convolutions on the covariance matrix of tabular data and achieve success in a variety of applications. However, the empirical covariance matrix on which the VNNs operate may contain many spurious correlations, making VNNs' performance inconsistent due to these noisy estimates and decreasing their computational efficiency. To tackle this issue, we put forth Sparse coVariance Neural Networks (S-VNNs), a framework that applies sparsification techniques on the sample covariance matrix before convolution. When the true covariance matrix is sparse, we propose hard and soft thresholding to improve covariance estimation and reduce computational cost. Instead, when the true covariance is dense, we propose stochastic sparsification where data correlations are dropped in probability according to principled strategies. We show that S-VNNs are more stable than nominal VNNs as well as sparse principal component analysis. By analyzing the impact of sparsification on their behavior, we provide novel connections between S-VNN stability and data distribution. We support our theoretical findings with experimental results on various application scenarios, ranging from brain data to human action recognition, and show an improved task performance, stability, and computational efficiency of S-VNNs compared with nominal VNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
493,879
1905.07089
Exact-K Recommendation via Maximal Clique Optimization
This paper targets a novel but practical recommendation problem named exact-K recommendation. It differs from traditional top-K recommendation, as it focuses more on (constrained) combinatorial optimization, which optimizes to recommend a whole set of K items called a card, rather than ranking optimization, which assumes that "better" items should be put into top positions. Thus we take the first step of giving a formal problem definition, and innovatively reduce it to Maximum Clique Optimization on a graph. To tackle this specific combinatorial optimization problem, which is NP-hard, we propose Graph Attention Networks (GAttN) with a multi-head self-attention encoder and a decoder with an attention mechanism. It can learn the joint distribution of the K items end-to-end and generate an optimal card, rather than rank individual items by prediction scores. Then we propose Reinforcement Learning from Demonstrations (RLfD), which combines the advantages of behavior cloning and reinforcement learning, making training the model both sufficient and efficient. Extensive experiments on three datasets demonstrate the effectiveness of our proposed GAttN with RLfD method: it outperforms several strong baselines with relative improvements of 7.7% and 4.7% on average in Precision and Hit Ratio respectively, and achieves state-of-the-art (SOTA) performance for the exact-K recommendation problem.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
131,146
1903.09203
Rate-Flexible Fast Polar Decoders
Polar codes have gained extensive attention during the past few years and recently they have been selected for the next generation of wireless communications standards (5G). Successive-cancellation-based (SC-based) decoders, such as SC list (SCL) and SC flip (SCF), provide a reasonable error performance for polar codes at the cost of low decoding speed. Fast SC-based decoders, such as Fast-SSC, Fast-SSCL, and Fast-SSCF, identify the special constituent codes in a polar code graph off-line, produce a list of operations, store the list in memory, and feed the list to the decoder to decode the constituent codes in order efficiently, thus increasing the decoding speed. However, the list of operations is dependent on the code rate and as the rate changes, a new list is produced, making fast SC-based decoders not rate-flexible. In this paper, we propose a completely rate-flexible fast SC-based decoder by creating the list of operations directly in hardware, with low implementation complexity. We further propose a hardware architecture implementing the proposed method and show that the area occupation of the rate-flexible fast SC-based decoder in this paper is only $38\%$ of the total area of the memory-based base-line decoder when 5G code rates are supported.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
124,996
2502.07631
Divide and Merge: Motion and Semantic Learning in End-to-End Autonomous Driving
Perceiving the environment and its changes over time corresponds to two fundamental yet heterogeneous types of information: semantics and motion. Previous end-to-end autonomous driving works represent both types of information in a single feature vector. However, including motion tasks, such as prediction and planning, always impairs detection and tracking performance, a phenomenon known as negative transfer in multi-task learning. To address this issue, we propose Neural-Bayes motion decoding, a novel parallel detection, tracking, and prediction method separating semantic and motion learning, similar to the Bayes filter. Specifically, we employ a set of learned motion queries that operate in parallel with the detection and tracking queries, sharing a unified set of recursively updated reference points. Moreover, we employ interactive semantic decoding to enhance information exchange in semantic tasks, promoting positive transfer. Experiments on the nuScenes dataset show improvements of 5% in detection and 11% in tracking. Our method achieves state-of-the-art collision rates in open-loop planning evaluation without any modifications to the planning module.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
532,692
2302.08100
Deep Reinforcement Learning Based Tracking Control of an Autonomous Surface Vessel in Natural Waters
Accurate control of autonomous marine robots still poses challenges due to the complex dynamics of the environment. In this paper, we propose a Deep Reinforcement Learning (DRL) approach to train a controller for autonomous surface vessel (ASV) trajectory tracking and compare its performance with an advanced nonlinear model predictive controller (NMPC) in real environments. Taking into account environmental disturbances (e.g., wind, waves, and currents), noisy measurements, and non-ideal actuators presented in the physical ASV, several effective reward functions for DRL tracking control policies are carefully designed. The control policies were trained in a simulation environment with diverse tracking trajectories and disturbances. The performance of the DRL controller has been verified and compared with the NMPC in both simulations with model-based environmental disturbances and in natural waters. Simulations show that the DRL controller has 53.33% lower tracking error than that of NMPC. Experimental results further show that, compared to NMPC, the DRL controller has 35.51% lower tracking error, indicating that DRL controllers offer better disturbance rejection in river environments than NMPC.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
345,941
2102.12035
Multiple Access Channel Simulation
We study the problem of simulating a two-user multiple-access channel (MAC) over a multiple access network of noiseless links. Two encoders observe independent and identically distributed (i.i.d.) copies of a source random variable each, while a decoder observes i.i.d. copies of a side-information random variable. There are rate-limited noiseless communication links between each encoder and the decoder, and there is independent pairwise shared randomness between all the three possible pairs of nodes. The decoder has to output approximately i.i.d. copies of another random variable jointly distributed with the two sources and the side information. We are interested in the rate tuples which permit this simulation. This setting can be thought of as a multi-terminal generalization of the point-to-point channel simulation problem studied by Bennett et al. (2002) and Cuff (2013). When the pairwise shared randomness between the encoders is absent, the setting reduces to a special case of MAC simulation using another MAC studied by Haddadpour et al. (2013). We establish that the presence of encoder shared randomness can strictly improve the communication rate requirements. We first show that the inner bound derived from Haddadpour et al. (2013) is tight when the sources at the encoders are conditionally independent given the side-information at the decoder. This result recovers the existing results on point-to-point channel simulation and function computation over such multi-terminal networks. We then explicitly compute the communication rate regions for an example both with and without the encoder shared randomness and demonstrate that its presence strictly reduces the communication rates. Inner and outer bounds for the general case are also obtained.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
221,588
1911.08478
Sibling Neural Estimators: Improving Iterative Image Decoding with Gradient Communication
For lossy image compression, we develop a neural-based system which learns a nonlinear estimator for decoding from quantized representations. The system links two recurrent networks that "help" each other reconstruct the same target image patches using complementary portions of spatial context that communicate via gradient signals. This dual agent system builds upon prior work that proposed the iterative refinement algorithm for recurrent neural network (RNN)-based decoding, which improved image reconstruction compared to standard decoding techniques. Our approach, which works with any encoder, neural or non-neural, progressively reduces image patch reconstruction error over a fixed number of steps. Experiments with variants of RNN memory cells, with and without future information, find that our model consistently creates lower distortion images of higher perceptual quality compared to other approaches. Specifically, on the Kodak Lossless True Color Image Suite, we observe as much as a 1.64 decibel (dB) gain over JPEG, a 1.46 dB gain over JPEG 2000, a 1.34 dB gain over the GOOG neural baseline, 0.36 dB over E2E (a modern competitive neural compression model), and 0.37 dB over a single iterative neural decoder.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
154,198
1906.03950
Domain-Specific Batch Normalization for Unsupervised Domain Adaptation
We propose a novel unsupervised domain adaptation framework based on domain-specific batch normalization in deep neural networks. We aim to adapt to both domains by specializing batch normalization layers in convolutional neural networks while allowing them to share all other model parameters, which is realized by a two-stage algorithm. In the first stage, we estimate pseudo-labels for the examples in the target domain using an external unsupervised domain adaptation algorithm---for example, MSTN or CPUA---integrating the proposed domain-specific batch normalization. The second stage learns the final models using a multi-task classification loss for the source and target domains. Note that the two domains have separate batch normalization layers in both stages. Our framework can be easily incorporated into domain adaptation techniques based on deep neural networks with batch normalization layers. We also show that our approach can be extended to the problem with multiple source domains. The proposed algorithm is evaluated on multiple benchmark datasets and achieves state-of-the-art accuracy in the standard setting and the multi-source domain adaptation scenario.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
134,546
1707.03164
Experimental comparison of single-pixel imaging algorithms
Single-pixel imaging (SPI) is a novel technique capturing 2D images using a photodiode, instead of conventional 2D array sensors. SPI owns high signal-to-noise ratio, wide spectrum range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including the linear correlation methods, the alternating projection method (AP), and the compressive sensing based methods. However, there has been no comprehensive review discussing respective advantages, which is important for SPI's further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms including a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the least measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; the TV and AP methods are the most robust to measurement noise. In a word, there are trade-offs between capture efficiency, computational complexity and robustness to noise among different SPI algorithms. We have released our source code for non-commercial use.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
76,811
2308.00942
On the use of deep learning for phase recovery
Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. As exemplified from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR from the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and outlook on how to better use DL to improve the reliability and efficiency in PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
383,085
1804.04318
Cross-Modal Retrieval with Implicit Concept Association
Traditional cross-modal retrieval assumes explicit association of concepts across modalities, where there is no ambiguity in how the concepts are linked to each other, e.g., when we do the image search with a query "dogs", we expect to see dog images. In this paper, we consider a different setting for cross-modal retrieval where data from different modalities are implicitly linked via concepts that must be inferred by high-level reasoning; we call this setting implicit concept association. To foster future research in this setting, we present a new dataset containing 47K pairs of animated GIFs and sentences crawled from the web, in which the GIFs depict physical or emotional reactions to the scenarios described in the text (called "reaction GIFs"). We report on a user study showing that, despite the presence of implicit concept association, humans are able to identify video-sentence pairs with matching concepts, suggesting the feasibility of our task. Furthermore, we propose a novel visual-semantic embedding network based on multiple instance learning. Unlike traditional approaches, we compute multiple embeddings from each modality, each representing different concepts, and measure their similarity by considering all possible combinations of visual-semantic embeddings in the framework of multiple instance learning. We evaluate our approach on two video-sentence datasets with explicit and implicit concept association and report competitive results compared to existing approaches on cross-modal retrieval.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
94,813
2212.09525
FreeEnricher: Enriching Face Landmarks without Additional Cost
Recent years have witnessed significant growth of face alignment. Though dense facial landmarks are highly demanded in various scenarios, e.g., cosmetic medicine and facial beautification, most works only consider sparse face alignment. To address this problem, we present a framework that can enrich landmark density using existing sparse landmark datasets, e.g., 300W with 68 points and WFLW with 98 points. Firstly, we observe that the local patches along each semantic contour are highly similar in appearance. Then, we propose a weakly-supervised idea of learning the refinement ability on original sparse landmarks and adapting this ability to enriched dense landmarks. Meanwhile, several operators are devised and organized together to implement the idea. Finally, the trained model is applied as a plug-and-play module to existing face alignment networks. To evaluate our method, we manually label the dense landmarks on the 300W testset. Our method yields state-of-the-art accuracy not only on the newly-constructed dense 300W testset but also on the original sparse 300W and WFLW testsets, without additional cost.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
337,142
1807.06574
Jensen: An Easily-Extensible C++ Toolkit for Production-Level Machine Learning and Convex Optimization
This paper introduces Jensen, an easily extensible and scalable toolkit for production-level machine learning and convex optimization. Jensen implements a framework of convex (or loss) functions, convex optimization algorithms (including Gradient Descent, L-BFGS, Stochastic Gradient Descent, Conjugate Gradient, etc.), and a family of machine learning classifiers and regressors (Logistic Regression, SVMs, Least Square Regression, etc.). This framework makes it possible to deploy and train models with a few lines of code, and also extend and build upon this by integrating new loss functions and optimization algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
103,147
1701.03264
On the Successive Cancellation Decoding of Polar Codes with Arbitrary Linear Binary Kernels
A method for efficient successive cancellation (SC) decoding of polar codes with high-dimensional linear binary kernels (HDLBK) is presented and analyzed. We devise a $l$-expressions method which can obtain simplified recursive formulas of the SC decoder in likelihood ratio form for arbitrary linear binary kernels to reduce the complexity of the corresponding SC decoder. By considering the bit-channel transition probabilities $W_{G}^{(\cdot)}(\cdot|0)$ and $W_{G}^{(\cdot)}(\cdot|1)$ separately, a $W$-expressions method is proposed to further reduce the complexity of the HDLBK-based SC decoder. For a $m\times m$ binary kernel, the complexity of the straightforward SC decoder is $O(2^{m}N\log N)$. With $W$-expressions, we reduce the complexity of the straightforward SC decoder to $O(m^{2}N\log N)$ when $m\leq 16$. Simulation results show that $16\times16$ kernel polar codes offer significant advantages in terms of error performance compared with $2\times2$ kernel polar codes under SC and list SC decoders.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
66,678
2309.01246
Towards Generic Image Manipulation Detection with Weakly-Supervised Self-Consistency Learning
As advanced image manipulation techniques emerge, detecting the manipulation becomes increasingly important. Despite the success of recent learning-based approaches for image manipulation detection, they typically require expensive pixel-level annotations to train, while exhibiting degraded performance when testing on images that are differently manipulated compared with training images. To address these limitations, we propose weakly-supervised image manipulation detection, such that only binary image-level labels (authentic or tampered with) are required for training purposes. Such a weakly-supervised setting can leverage more training images and has the potential to adapt quickly to new manipulation techniques. To improve the generalization ability, we propose weakly-supervised self-consistency learning (WSCL) to leverage the weakly annotated images. Specifically, two consistency properties are learned: multi-source consistency (MSC) and inter-patch consistency (IPC). MSC exploits different content-agnostic information and enables cross-source learning via an online pseudo label generation and refinement process. IPC performs global pair-wise patch-patch relationship reasoning to discover a complete region of manipulation. Extensive experiments validate that our WSCL, even though it is weakly supervised, exhibits competitive performance compared with its fully-supervised counterpart under both in-distribution and out-of-distribution evaluations, as well as reasonable manipulation localization ability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,608
2201.01089
A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks
Deployment of modern TinyML tasks on small battery-constrained IoT devices requires high computational energy efficiency. Analog In-Memory Computing (IMC) using non-volatile memory (NVM) promises major efficiency improvements in deep neural network (DNN) inference and serves as on-chip memory storage for DNN weights. However, IMC's functional flexibility limitations and their impact on performance, energy, and area efficiency are not yet fully understood at the system level. To target practical end-to-end IoT applications, IMC arrays must be enclosed in heterogeneous programmable systems, introducing new system-level challenges which we aim at addressing in this work. We present a heterogeneous tightly-coupled clustered architecture integrating 8 RISC-V cores, an in-memory computing accelerator (IMA), and digital accelerators. We benchmark the system on a highly heterogeneous workload such as the Bottleneck layer from a MobileNetV2, showing 11.5x performance and 9.5x energy efficiency improvements, compared to highly optimized parallel execution on the cores. Furthermore, we explore the requirements for end-to-end inference of a full mobile-grade DNN (MobileNetV2) in terms of IMC array resources, by scaling up our heterogeneous architecture to a multi-array accelerator. Our results show that our solution, on the end-to-end inference of the MobileNetV2, is one order of magnitude better in terms of execution latency than existing programmable architectures and two orders of magnitude better than state-of-the-art heterogeneous solutions integrating in-memory computing analog cores.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
274,149
2409.11078
MonoKAN: Certified Monotonic Kolmogorov-Arnold Network
Artificial Neural Networks (ANNs) have significantly advanced various fields by effectively recognizing patterns and solving complex problems. Despite these advancements, their interpretability remains a critical challenge, especially in applications where transparency and accountability are essential. To address this, explainable AI (XAI) has made progress in demystifying ANNs, yet interpretability alone is often insufficient. In certain applications, model predictions must align with expert-imposed requirements, sometimes exemplified by partial monotonicity constraints. While monotonic approaches are found in the literature for traditional Multi-layer Perceptrons (MLPs), they still face difficulties in achieving both interpretability and certified partial monotonicity. Recently, the Kolmogorov-Arnold Network (KAN) architecture, based on learnable activation functions parametrized as splines, has been proposed as a more interpretable alternative to MLPs. Building on this, we introduce a novel ANN architecture called MonoKAN, which is based on the KAN architecture and achieves certified partial monotonicity while enhancing interpretability. To achieve this, we employ cubic Hermite splines, which guarantee monotonicity through a set of straightforward conditions. Additionally, by using positive weights in the linear combinations of these splines, we ensure that the network preserves the monotonic relationships between input and output. Our experiments demonstrate that MonoKAN not only enhances interpretability but also improves predictive performance across the majority of benchmarks, outperforming state-of-the-art monotonic MLP approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
489,002
2306.13769
Functional-Group-Based Diffusion for Pocket-Specific Molecule Generation and Elaboration
In recent years, AI-assisted drug design methods have been proposed to generate molecules given the pockets' structures of target proteins. Most of them are atom-level-based methods, which consider atoms as basic components and generate atom positions and types. In this way, however, it is hard to generate realistic fragments with complicated structures. To solve this, we propose D3FG, a functional-group-based diffusion model for pocket-specific molecule generation and elaboration. D3FG decomposes molecules into two categories of components: functional groups defined as rigid bodies and linkers as mass points. And the two kinds of components can together form complicated fragments that enhance ligand-protein interactions. To be specific, in the diffusion process, D3FG diffuses the data distribution of the positions, orientations, and types of the components into a prior distribution; In the generative process, the noise is gradually removed from the three variables by denoisers parameterized with designed equivariant graph neural networks. In the experiments, our method can generate molecules with more realistic 3D structures, competitive affinities toward the protein targets, and better drug properties. Besides, D3FG as a solution to a new task of molecule elaboration, could generate molecules with high affinities based on existing ligands and the hotspots of target proteins.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
375,389
1805.09277
WisenetMD: Motion Detection Using Dynamic Background Region Analysis
Motion detection algorithms that can be applied to surveillance cameras such as CCTV (Closed Circuit Television) have been studied extensively. Motion detection is mostly based on background subtraction. One main issue in this technique is that false positives on dynamic backgrounds, such as wind-shaken trees and flowing rivers, might occur. In this paper, we propose a method to search for dynamic background regions by analyzing the video and removing false positives by re-checking them. The proposed method was evaluated on the CDnet 2012/2014 dataset obtained from the "changedetection.net" site. We also compared its processing speed with other algorithms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
98,384
2406.05784
Optimizing Multi-Stuttered Speech Classification: Leveraging Whisper's Encoder for Efficient Parameter Reduction in Automated Assessment
The automated classification of stuttered speech has significant implications for timely assessments providing assistance to speech language pathologists. Despite notable advancements in the field, the cases in which multiple disfluencies occur in speech require attention. We have taken a progressive approach to fill this gap by classifying multi-stuttered speech more efficiently. The problem has been addressed by firstly curating a dataset of multi-stuttered disfluencies from the open-source SEP-28k audio clips. Secondly, Whisper, a state-of-the-art speech recognition model, has been leveraged by using its encoder and treating the problem as multi-label classification. Thirdly, using a 6-encoder-layer Whisper and experimenting with various layer freezing strategies, a computationally efficient configuration of the model was identified. The proposed configuration achieved micro, macro, and weighted F1-scores of 0.88, 0.85, and 0.87, respectively, on an external test dataset, i.e., Fluency-Bank. In addition, through layer freezing strategies, we were able to achieve the aforementioned results by fine-tuning a single encoder layer, consequently reducing the model's trainable parameters from 20.27 million to 3.29 million. This research study unveils the contribution of the last encoder layer in the identification of disfluencies in stuttered speech. Consequently, it has led to a computationally efficient approach, with 83.7% fewer parameters to train, making the proposed approach more adaptable to various dialects and languages.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
462,289
2406.18555
Using a Convolutional Neural Network and Explainable AI to Diagnose Dementia Based on MRI Scans
As the number of dementia patients rises, the need for accurate diagnostic procedures rises as well. Current methods, like using an MRI scan, rely on human input, which can be inaccurate. However, the decision logic behind machine learning algorithms and their outputs cannot be explained, as most operate in black-box models. Therefore, to increase the accuracy of diagnosing dementia through MRIs, a convolution neural network has been developed and trained using an open-source database of 6400 MRI scans divided into 4 dementia classes. The model, which attained a 98 percent validation accuracy, was shown to be well fit and able to generalize to new data. Furthermore, to aid in the visualization of the model output, an explainable AI algorithm was developed by visualizing the outputs of individual filters in each convolution layer, which highlighted regions of interest in the scan. These outputs do a great job of identifying the image features that contribute most to the model classification, thus allowing users to visualize and understand the results. Altogether, this combination of the convolution neural network and explainable AI algorithm creates a system that can be used in the medical field to not only aid in the proper classification of dementia but also allow everyone involved to visualize and understand the results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
468,064
1710.07903
The Complexity of Graph-Based Reductions for Reachability in Markov Decision Processes
We study the never-worse relation (NWR) for Markov decision processes with an infinite-horizon reachability objective. A state q is never worse than a state p if the maximal probability of reaching the target set of states from p is at most the same value from q, regardless of the probabilities labelling the transitions. Extremal-probability states, end components, and essential states are all special cases of the equivalence relation induced by the NWR. Using the NWR, states in the same equivalence class can be collapsed. Then, actions leading to suboptimal states can be removed. We show the natural decision problem associated to computing the NWR is coNP-complete. Finally, we extend a previously known incomplete polynomial-time iterative algorithm to under-approximate the NWR.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
83,012
2309.11166
Are Large Language Models Really Robust to Word-Level Perturbations?
The swift advancement in the scales and capabilities of Large Language Models (LLMs) positions them as promising tools for a variety of downstream tasks. In addition to the pursuit of better performance and the avoidance of violent feedback on a certain prompt, to ensure the responsibility of the LLM, much attention is drawn to the robustness of LLMs. However, existing evaluation methods mostly rely on traditional question answering datasets with predefined supervised labels, which do not align with the superior generation capabilities of contemporary LLMs. To address this issue, we propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools to evaluate the longer conversation generated from more challenging open questions by LLMs, which we refer to as the Reward Model for Reasonable Robustness Evaluation (TREvaL). Longer conversations manifest the comprehensive grasp of language models in terms of their proficiency in understanding questions, a capability not entirely encompassed by individual words or letters, which may exhibit oversimplification and inherent biases. Our extensive empirical experiments demonstrate that TREvaL provides an innovative method for evaluating the robustness of an LLM. Furthermore, our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage. Notably, we are surprised to discover that robustness tends to decrease as fine-tuning (SFT and RLHF) is conducted. The code of TREval is available in https://github.com/Harry-mic/TREvaL.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
393,312
2009.09654
Generative Imagination Elevates Machine Translation
There are common semantics shared across text and images. Given a sentence in a source language, whether depicting the visual scene helps translation into a target language? Existing multimodal neural machine translation methods (MNMT) require triplets of bilingual sentence - image for training and tuples of source sentence - image for inference. In this paper, we propose ImagiT, a novel machine translation method via visual imagination. ImagiT first learns to generate visual representation from the source sentence, and then utilizes both source sentence and the "imagined representation" to produce a target translation. Unlike previous methods, it only needs the source sentence at the inference time. Experiments demonstrate that ImagiT benefits from visual imagination and significantly outperforms the text-only neural machine translation baselines. Further analysis reveals that the imagination process in ImagiT helps fill in missing information when performing the degradation strategy.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
196,646
2501.10152
Quantum Advantage in Private Multiple Hypothesis Testing
For multiple hypothesis testing based on classical data samples, we demonstrate a quantum advantage in the optimal privacy-utility trade-off (PUT), where the privacy and utility measures are set to (quantum) local differential privacy and the pairwise-minimum Chernoff information, respectively. To show the quantum advantage, we consider some class of hypotheses that we coin smoothed point masses. For such hypotheses, we derive an upper bound of the optimal PUT achieved by classical mechanisms, which is tight for some cases, and propose a certain quantum mechanism which achieves a better PUT than the upper bound. The proposed quantum mechanism consists of a classical-quantum channel whose outputs are pure states corresponding to a symmetric informationally complete positive operator-valued measure (SIC-POVM), and a depolarizing channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
525,414
2310.19053
Datasets and Benchmarks for Nanophotonic Structure and Parametric Design Simulations
Nanophotonic structures have versatile applications including solar cells, anti-reflective coatings, electromagnetic interference shielding, optical filters, and light emitting diodes. To design and understand these nanophotonic structures, electrodynamic simulations are essential. These simulations enable us to model electromagnetic fields over time and calculate optical properties. In this work, we introduce frameworks and benchmarks to evaluate nanophotonic structures in the context of parametric structure design problems. The benchmarks are instrumental in assessing the performance of optimization algorithms and identifying an optimal structure based on target optical properties. Moreover, we explore the impact of varying grid sizes in electrodynamic simulations, shedding light on how evaluation fidelity can be strategically leveraged in enhancing structure designs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
403,826
2212.12675
Iterative regularization in classification via hinge loss diagonal descent
Iterative regularization is a classic idea in regularization theory, that has recently become popular in machine learning. On the one hand, it allows to design efficient algorithms controlling at the same time numerical and statistical accuracy. On the other hand it allows to shed light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of linear inverse problems, we develop an iterative regularization approach based on the use of the hinge loss function. More precisely we consider a diagonal approach for a family of algorithms for which we prove convergence as well as rates of convergence and stability results for a suitable classification noise model. Our approach compares favorably with other alternatives, as confirmed by numerical simulations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,101
2312.01964
Semantics-aware Motion Retargeting with Vision-Language Models
Capturing and preserving motion semantics is essential to motion retargeting between animation characters. However, most of the previous works neglect the semantic information or rely on human-designed joint-level representations. Here, we present a novel Semantics-aware Motion reTargeting (SMT) method with the advantage of vision-language models to extract and maintain meaningful motion semantics. We utilize a differentiable module to render 3D motions. Then the high-level motion semantics are incorporated into the motion retargeting process by feeding the vision-language model with the rendered images and aligning the extracted semantic embeddings. To ensure the preservation of fine-grained motion details and high-level semantics, we adopt a two-stage pipeline consisting of skeleton-aware pre-training and fine-tuning with semantics and geometry constraints. Experimental results show the effectiveness of the proposed method in producing high-quality motion retargeting results while accurately preserving motion semantics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
412,644
1908.09101
Where Is My Mirror?
Mirrors are everywhere in our daily lives. Existing computer vision systems do not consider mirrors, and hence may get confused by the reflected content inside a mirror, resulting in a severe performance degradation. However, separating the real content outside a mirror from the reflected content inside it is non-trivial. The key challenge is that mirrors typically reflect contents similar to their surroundings, making it very difficult to differentiate the two. In this paper, we present a novel method to segment mirrors from an input image. To the best of our knowledge, this is the first work to address the mirror segmentation problem with a computational approach. We make the following contributions. First, we construct a large-scale mirror dataset that contains mirror images with corresponding manually annotated masks. This dataset covers a variety of daily life scenes, and will be made publicly available for future research. Second, we propose a novel network, called MirrorNet, for mirror segmentation, by modeling both semantical and low-level color/texture discontinuities between the contents inside and outside of the mirrors. Third, we conduct extensive experiments to evaluate the proposed method, and show that it outperforms the carefully chosen baselines from the state-of-the-art detection and segmentation methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
142,755
1507.07492
Symmetric Canonical Quincunx Tight Framelets with High Vanishing Moments and Smoothness
We propose an approach to construct a family of two-dimensional compactly supported real-valued symmetric quincunx tight framelets $\{\phi; \psi_1,\psi_2,\psi_3\}$ in $L_2(R^2)$ with arbitrarily high orders of vanishing moments. Such symmetric quincunx tight framelets are associated with quincunx tight framelet filter banks $\{a;b_1,b_2,b_3\}$ having increasing orders of vanishing moments and enjoying the additional double canonical properties: \[ b_1(k_1,k_2)=(-1)^{1+k_1+k_2} a(1-k_1,-k_2), b_3(k_1,k_2)=(-1)^{1+k_1+k_2} b_2(1-k_1,-k_2). \] For a low-pass filter $a$ which is not a quincunx orthonormal wavelet filter, we show that a quincunx tight framelet filter bank $\{a;b_1,\ldots,b_L\}$ with $b_1$ taking the above canonical form must have $L\ge 3$ high-pass filters. Thus, our family of symmetric double canonical quincunx tight framelets has the minimum number of generators. Numerical calculation indicates that this family of symmetric double canonical quincunx tight framelets can be arbitrarily smooth. Using one-dimensional filters having linear-phase moments, in this paper we also provide a second approach to construct multiple canonical quincunx tight framelets with symmetry. In particular, the second approach yields a family of $6$-multiple canonical real-valued quincunx tight framelets in $L_2(R^2)$ and a family of double canonical complex-valued quincunx tight framelets in $L_2(R^2)$ such that both of them have symmetry and arbitrarily increasing orders of smoothness and vanishing moments. Several examples are provided to illustrate our general construction and theoretical results on canonical quincunx tight framelets in $L_2(R^2)$ with symmetry, high vanishing moments, and smoothness. Symmetric quincunx tight framelets constructed by both approaches in this paper are of particular interest for their applications in computer graphics and image processing.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
45,481
2208.03812
When can I Speak? Predicting initiation points for spoken dialogue agents
Current spoken dialogue systems initiate their turns after a long period of silence (700-1000ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow. Humans typically respond within 200ms and successfully predicting initiation points in advance would allow spoken dialogue agents to do the same. In this work, we predict the lead-time to initiation using prosodic features from a pre-trained speech representation model (wav2vec 1.0) operating on user audio and word features from a pre-trained language model (GPT-2) operating on incremental transcriptions. To evaluate errors, we propose two metrics w.r.t. predicted and true lead times. We train and evaluate the models on the Switchboard Corpus and find that our method outperforms features from prior work on both metrics and vastly outperforms the common approach of waiting for 700ms of silence.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
311,906
2301.12537
Non-Asymptotic State-Space Identification of Closed-Loop Stochastic Linear Systems using Instrumental Variables
The paper suggests a generalization of the Sign-Perturbed Sums (SPS) finite sample system identification method for the identification of closed-loop observable stochastic linear systems in state-space form. The solution builds on the theory of matrix-variate regression and instrumental variable methods to construct distribution-free confidence regions for the state-space matrices. Both direct and indirect identification are studied, and the exactness as well as the strong consistency of the construction are proved. Furthermore, a new, computationally efficient ellipsoidal outer-approximation algorithm for the confidence regions is proposed. The new construction results in a semidefinite optimization problem which has an order-of-magnitude smaller number of constraints, as if one applied the ellipsoidal outer-approximation after vectorization. The effectiveness of the approach is also demonstrated empirically via a series of numerical experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
342,570
2410.07746
Benign Overfitting in Single-Head Attention
The phenomenon of benign overfitting, where a trained neural network perfectly fits noisy training data but still achieves near-optimal test performance, has been extensively studied in recent years for linear models and fully-connected/convolutional networks. In this work, we study benign overfitting in a single-head softmax attention model, which is the fundamental building block of Transformers. We prove that under appropriate conditions, the model exhibits benign overfitting in a classification setting already after two steps of gradient descent. Moreover, we show conditions where a minimum-norm/maximum-margin interpolator exhibits benign overfitting. We study how the overfitting behavior depends on the signal-to-noise ratio (SNR) of the data distribution, namely, the ratio between norms of signal and noise tokens, and prove that a sufficiently large SNR is both necessary and sufficient for benign overfitting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
496,780
2006.08878
CNN Acceleration by Low-rank Approximation with Quantized Factors
The modern convolutional neural networks although achieve great results in solving complex computer vision tasks still cannot be effectively used in mobile and embedded devices due to the strict requirements for computational complexity, memory and power consumption. The CNNs have to be compressed and accelerated before deployment. In order to solve this problem the novel approach combining two known methods, low-rank tensor approximation in Tucker format and quantization of weights and feature maps (activations), is proposed. The greedy one-step and multi-step algorithms for the task of multilinear rank selection are proposed. The approach for quality restoration after applying Tucker decomposition and quantization is developed. The efficiency of our method is demonstrated for ResNet18 and ResNet34 on CIFAR-10, CIFAR-100 and Imagenet classification tasks. As a result of comparative analysis performed for other methods for compression and acceleration our approach showed its promising features.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
182,342
1405.0312
Microsoft COCO: Common Objects in Context
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
32,758
2203.07814
Competition-Level Code Generation with AlphaCode
Programming is a powerful and ubiquitous problem-solving tool. Developing systems that can assist programmers or even generate programs independently could make programming more productive and accessible, yet so far incorporating innovations in AI has proven challenging. Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code. For example, competitive programming problems which require an understanding of algorithms and complex natural language remain extremely challenging. To address this gap, we introduce AlphaCode, a system for code generation that can create novel solutions to these problems that require deeper reasoning. In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3% in competitions with more than 5,000 participants. We found that three key components were critical to achieve good and reliable performance: (1) an extensive and clean competitive programming dataset for training and evaluation, (2) large and efficient-to-sample transformer-based architectures, and (3) large-scale model sampling to explore the search space, followed by filtering based on program behavior to a small set of submissions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
285,571
2310.16937
Learning Transfers over Several Programming Languages
Large language models (LLMs) have become remarkably good at improving developer productivity for high-resource programming languages. These models use two kinds of data: large amounts of unlabeled code samples for pre-training and relatively smaller amounts of labeled code samples for fine-tuning or in-context learning. Unfortunately, many programming languages are low-resource, lacking labeled samples for most tasks and often even lacking unlabeled samples. Therefore, users of low-resource languages (e.g., legacy or new languages) miss out on the benefits of LLMs. Cross-lingual transfer uses data from a source language to improve model performance on a target language. It has been well-studied for natural languages, but has received little attention for programming languages. This paper reports extensive experiments on four tasks using a transformer-based LLM and 11 to 41 programming languages to explore the following questions. First, how well does cross-lingual transfer work for a given task across different language pairs. Second, given a task and target language, how should one choose a source language. Third, which characteristics of a language pair are predictive of transfer performance, and how does that depend on the given task. Our empirical study with 1,808 experiments reveals practical and scientific insights, such as Kotlin and JavaScript being the most transferable source languages and different tasks relying on substantially different features. Overall, we find that learning transfers well across several programming languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
402,924
2105.00434
Dynamic Routing for Traffic Flow through Multi-agent Systems
Routing strategies for traffics and vehicles have been historically studied. However, in the absence of considering drivers' preferences, current route planning algorithms are developed under ideal situations where all drivers are expected to behave rationally and properly. Especially, for jumbled urban road networks, drivers' actual routing strategies deteriorated to a series of empirical and selfish decisions that result in congestion. Self-evidently, if minimum mobility can be kept, traffic congestion is avoidable by traffic load dispersing. In this paper, we establish a novel dynamic routing method catering drivers' preferences and retaining maximum traffic mobility simultaneously through multi-agent systems (MAS). Modeling human-drivers' behavior through agents' dynamics, MAS can analyze the global behavior of the entire traffic flow. Therefore, regarding agents as particles in smoothed particles hydrodynamics (SPH), we can enforce the traffic flow to behave like a real flow. Thereby, with the characteristic of distributing itself uniformly in road networks, our dynamic routing method realizes traffic load balancing without violating the individual time-saving motivation. Moreover, as a discrete control mechanism, our method is robust to chaos meaning driver's disobedience can be tolerated. As controlled by SPH based density, the only intelligent transportation system (ITS) we require is the location-based service (LBS). A mathematical proof is accomplished to scrutinize the stability of the proposed control law. Also, multiple testing cases are built to verify the effectiveness of the proposed dynamic routing algorithm.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
233,224
2205.04799
Designing a Recurrent Neural Network to Learn a Motion Planner for High-Dimensional Inputs
The use of machine learning in the self-driving industry has boosted a number of recent advancements. In particular, the usage of large deep learning models in the perception and prediction stack have proved quite successful, but there still lacks significant literature on the use of machine learning in the planning stack. The current state of the art in the planning stack often relies on fast constrained optimization or rule-based approaches. Both of these techniques fail to address a significant number of fundamental problems that would allow the vehicle to operate more similarly to that of human drivers. In this paper, we attempt to design a basic deep learning system to approach this problem. Furthermore, the main underlying goal of this paper is to demonstrate the potential uses of machine learning in the planning stack for autonomous vehicles (AV) and provide a baseline work for ongoing and future research.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
295,756
1912.03858
Bundle Adjustment Revisited
3D reconstruction has been developing all these two decades, from moderate to medium size and to large scale. It's well known that bundle adjustment plays an important role in 3D reconstruction, mainly in Structure from Motion(SfM) and Simultaneously Localization and Mapping(SLAM). While bundle adjustment optimizes camera parameters and 3D points as a non-negligible final step, it suffers from memory and efficiency requirements in very large scale reconstruction. In this paper, we study the development of bundle adjustment elaborately in both conventional and distributed approaches. The detailed derivation and pseudo code are also given in this paper.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
156,710
2204.02976
Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis
When deep neural network (DNN) was first introduced to the medical image analysis community, researchers were impressed by its performance. However, it is evident now that a large number of manually labeled data is often a must to train a properly functioning DNN. This demand for supervision data and labels is a major bottleneck in current medical image analysis, since collecting a large number of annotations from experienced experts can be time-consuming and expensive. In this paper, we demonstrate that the eye movement of radiologists reading medical images can be a new form of supervision to train the DNN-based computer-aided diagnosis (CAD) system. Particularly, we record the tracks of the radiologists' gaze when they are reading images. The gaze information is processed and then used to supervise the DNN's attention via an Attention Consistency module. To the best of our knowledge, the above pipeline is among the earliest efforts to leverage expert eye movement for deep-learning-based CAD. We have conducted extensive experiments on knee X-ray images for osteoarthritis assessment. The results show that our method can achieve considerable improvement in diagnosis performance, with the help of gaze supervision.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,148
1004.1564
Polymatroids with Network Coding
The problem of network coding for multicasting a single source to multiple sinks has first been studied by Ahlswede, Cai, Li and Yeung in 2000, in which they have established the celebrated max-flow mini-cut theorem on non-physical information flow over a network of independent channels. On the other hand, in 1980, Han has studied the case with correlated multiple sources and a single sink from the viewpoint of polymatroidal functions in which a necessary and sufficient condition has been demonstrated for reliable transmission over the network. This paper presents an attempt to unify both cases, which leads to establish a necessary and sufficient condition for reliable transmission over a noisy network for multicasting all the correlated multiple sources to all the multiple sinks. Furthermore, we address also the problem of transmitting "independent" sources over a multiple-access-type of network as well as over a broadcast-type of network, which reveals that the (co-) polymatroidal structures are intrinsically involved in these types of network coding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,120
1706.09777
Service adoption spreading in online social networks
The collective behaviour of people adopting an innovation, product or online service is commonly interpreted as a spreading phenomenon throughout the fabric of society. This process is arguably driven by social influence, social learning and by external effects like media. Observations of such processes date back to the seminal studies by Rogers and Bass, and their mathematical modelling has taken two directions: One paradigm, called simple contagion, identifies adoption spreading with an epidemic process. The other one, named complex contagion, is concerned with behavioural thresholds and successfully explains the emergence of large cascades of adoption resulting in a rapid spreading often seen in empirical data. The observation of real world adoption processes has become easier lately due to the availability of large digital social network and behavioural datasets. This has allowed simultaneous study of network structures and dynamics of online service adoption, shedding light on the mechanisms and external effects that influence the temporal evolution of behavioural or innovation adoption. These advancements have induced the development of more realistic models of social spreading phenomena, which in turn have provided remarkably good predictions of various empirical adoption processes. In this chapter we review recent data-driven studies addressing real-world service adoption processes. Our studies provide the first detailed empirical evidence of a heterogeneous threshold distribution in adoption. We also describe the modelling of such phenomena with formal methods and data-driven simulations. Our objective is to understand the effects of identified social mechanisms on service adoption spreading, and to provide potential new directions and open questions for future research.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
76,198
2106.05958
High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
Stochastic first-order methods are standard for training large-scale machine learning models. Random behavior may cause a particular run of an algorithm to result in a highly suboptimal objective value, whereas theoretical guarantees are usually proved for the expectation of the objective value. Thus, it is essential to theoretically guarantee that algorithms provide small objective residual with high probability. Existing methods for non-smooth stochastic convex optimization have complexity bounds with the dependence on the confidence level that is either negative-power or logarithmic but under an additional assumption of sub-Gaussian (light-tailed) noise distribution that may not hold in practice. In our paper, we resolve this issue and derive the first high-probability convergence results with logarithmic dependence on the confidence level for non-smooth convex stochastic optimization problems with non-sub-Gaussian (heavy-tailed) noise. To derive our results, we propose novel stepsize rules for two stochastic methods with gradient clipping. Moreover, our analysis works for generalized smooth objectives with H\"older-continuous gradients, and for both methods, we provide an extension for strongly convex problems. Finally, our results imply that the first (accelerated) method we consider also has optimal iteration and oracle complexity in all the regimes, and the second one is optimal in the non-smooth setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
240,291
2302.12749
SurvivalGAN: Generating Time-to-Event Data for Survival Analysis
Synthetic data is becoming an increasingly promising technology, and successful applications can improve privacy, fairness, and data democratization. While there are many methods for generating synthetic tabular data, the task remains non-trivial and unexplored for specific scenarios. One such scenario is survival data. Here, the key difficulty is censoring: for some instances, we are not aware of the time of event, or if one even occurred. Imbalances in censoring and time horizons cause generative models to experience three new failure modes specific to survival analysis: (1) generating too few at-risk members; (2) generating too many at-risk members; and (3) censoring too early. We formalize these failure modes and provide three new generative metrics to quantify them. Following this, we propose SurvivalGAN, a generative model that handles survival data firstly by addressing the imbalance in the censoring and event horizons, and secondly by using a dedicated mechanism for approximating time-to-event/censoring. We evaluate this method via extensive experiments on medical datasets. SurvivalGAN outperforms multiple baselines at generating survival data, and in particular addresses the failure modes as measured by the new metrics, in addition to improving downstream performance of survival models trained on the synthetic data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
347,681
2405.10893
COGNET-MD, an evaluation framework and dataset for Large Language Model benchmarks in the medical domain
Large Language Models (LLMs) constitute a breakthrough state-of-the-art Artificial Intelligence (AI) technology which is rapidly evolving and promises to aid in medical diagnosis either by assisting doctors or by simulating a doctor's workflow in more advanced and complex implementations. In this technical paper, we outline Cognitive Network Evaluation Toolkit for Medical Domains (COGNET-MD), which constitutes a novel benchmark for LLM evaluation in the medical domain. Specifically, we propose a scoring-framework with increased difficulty to assess the ability of LLMs in interpreting medical text. The proposed framework is accompanied with a database of Multiple Choice Quizzes (MCQs). To ensure alignment with current medical trends and enhance safety, usefulness, and applicability, these MCQs have been constructed in collaboration with several associated medical experts in various medical domains and are characterized by varying degrees of difficulty. The current (first) version of the database includes the medical domains of Psychiatry, Dentistry, Pulmonology, Dermatology and Endocrinology, but it will be continuously extended and expanded to include additional medical domains.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
454,924
2211.02005
Robust Dependence Measure using RKHS based Uncertainty Moments and Optimal Transport
Reliable measurement of dependence between variables is essential in many applications of statistics and machine learning. Current approaches for dependence estimation, especially density-based approaches, lack in precision, robustness and/or interpretability (in terms of the type of dependence being estimated). We propose a two-step approach for dependence quantification between random variables: 1) We first decompose the probability density functions (PDF) of the variables involved in terms of multiple local moments of uncertainty that systematically and precisely identify the different regions of the PDF (with special emphasis on the tail-regions). 2) We then compute an optimal transport map to measure the geometric similarity between the corresponding sets of decomposed local uncertainty moments of the variables. Dependence is then determined by the degree of one-to-one correspondence between the respective uncertainty moments of the variables in the optimal transport map. We utilize a recently introduced Gaussian reproducing kernel Hilbert space (RKHS) based framework for multi-moment uncertainty decomposition of the variables. Being based on the Gaussian RKHS, our approach is robust towards outliers and monotone transformations of data, while the multiple moments of uncertainty provide high resolution and interpretability of the type of dependence being quantified. We support these claims through some preliminary results using simulated data.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
328,444
2501.01516
Improving Robustness Estimates in Natural Language Explainable AI though Synonymity Weighted Similarity Measures
Explainable AI (XAI) has seen a surge in recent interest with the proliferation of powerful but intractable black-box models. Moreover, XAI has come under fire for techniques that may not offer reliable explanations. As many of the methods in XAI are themselves models, adversarial examples have been prominent in the literature surrounding the effectiveness of XAI, with the objective of these examples being to alter the explanation while maintaining the output of the original model. For explanations in natural language, it is natural to use measures found in the domain of information retrieval for use with ranked lists to guide the adversarial XAI process. We show that the standard implementations of these measures are poorly suited for the comparison of explanations in adversarial XAI and amend them by using information that is discarded, the synonymity of perturbed words. This synonymity weighting produces more accurate estimates of the actual weakness of XAI methods to adversarial examples.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
522,106
2011.07410
Robust and Efficient Multilevel-ILU Preconditioning of Hybrid Newton-GMRES for Incompressible Navier-Stokes Equations
We introduce a robust and efficient preconditioner for a hybrid Newton-GMRES method for solving the nonlinear systems arising from incompressible Navier-Stokes equations. When the Reynolds number is relatively high, these systems often involve millions of degrees of freedom (DOFs), and the nonlinear systems are difficult to converge, partially due to the strong asymmetry of the system and the saddle-point structure. In this work, we propose to alleviate these issues by leveraging a multilevel ILU preconditioner called HILUCSI, which is particularly effective for saddle-point problems and can enable robust and rapid convergence of the inner iterations in Newton-GMRES. We further use Picard iterations with the Oseen systems to hot-start Newton-GMRES to achieve global convergence, also preconditioned using HILUCSI. To further improve efficiency and robustness, we use the Oseen operators as physics-based sparsifiers when building preconditioners for Newton iterations and introduce adaptive refactorization and iterative refinement in HILUCSI. We refer to the resulting preconditioned hybrid Newton-GMRES as HILUNG. We demonstrate the effectiveness of HILUNG by solving the standard 2D driven-cavity problem with Re 5000 and a 3D flow-over-cylinder problem with low viscosity. We compare HILUNG with some state-of-the-art customized preconditioners for INS, including two variants of augmented Lagrangian preconditioners and two physics-based preconditioners, as well as some general-purpose approximate-factorization techniques. Our comparison shows that HILUNG is much more robust for solving high-Re problems and it is also more efficient in both memory and runtime for moderate-Re problems.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
206,545