id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.04811 | Generative Text-Guided 3D Vision-Language Pretraining for Unified Medical Image Segmentation | Vision-Language Pretraining (VLP) has demonstrated remarkable capabilities in learning visual representations from textual descriptions of images without annotations. Yet, effective VLP demands large-scale image-text pairs, a resource that is scarce in the medical domain. Moreover, conventional VLP is limited to 2D images, while medical images encompass diverse modalities, often in 3D, making the learning process more challenging. To address these challenges, we present Generative Text-Guided 3D Vision-Language Pretraining for Unified Medical Image Segmentation (GTGM), a framework that extends VLP to 3D medical images without relying on paired textual descriptions. Specifically, GTGM utilizes large language models (LLMs) to generate medical-style text from 3D medical images. This synthetic text is then used to supervise 3D visual representation learning. Furthermore, a negative-free contrastive learning objective is introduced to cultivate consistent visual representations between augmented 3D medical image patches, which effectively mitigates the biases associated with strict positive-negative sample pairings. We evaluate GTGM on three imaging modalities - Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and electron microscopy (EM) - over 13 datasets. GTGM's superior performance across various medical image segmentation tasks underscores its effectiveness and versatility by enabling VLP extension into 3D medical imagery while bypassing the need for paired text. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 371,906 |
1911.04650 | Throughput Prediction of Asynchronous SGD in TensorFlow | Modern machine learning frameworks can train neural networks using multiple nodes in parallel, each computing parameter updates with stochastic gradient descent (SGD) and sharing them asynchronously through a central parameter server. Due to communication overhead and bottlenecks, the total throughput of SGD updates in a cluster scales sublinearly, saturating as the number of nodes increases. In this paper, we present a solution to predicting training throughput from profiling traces collected from a single-node configuration. Our approach is able to model the interaction of multiple nodes and the scheduling of concurrent transmissions between the parameter server and each node. By accounting for the dependencies between received parts and pending computations, we predict overlaps between computation and communication and generate synthetic execution traces for configurations with multiple nodes. We validate our approach on TensorFlow training jobs for popular image classification neural networks, on AWS and on our in-house cluster, using nodes equipped with GPUs or only with CPUs. We also investigate the effects of data transmission policies used in TensorFlow and the accuracy of our approach when combined with optimizations of the transmission schedule. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 153,043 |
2401.01176 | Fundamental Limitation of Semantic Communications: Neural Estimation for Rate-Distortion | This paper studies the fundamental limit of semantic communications over the discrete memoryless channel. We consider the scenario of sending a semantic source consisting of an observation state and its corresponding semantic state, both of which are recovered at the receiver. To derive the performance limitation, we adopt the semantic rate-distortion function (SRDF) to study the relationship among the minimum compression rate, observation distortion, semantic distortion, and channel capacity. For the case with unknown semantic source distribution, where only a set of the source samples is available, we propose a neural-network-based method that leverages generative networks to learn the semantic source distribution. Furthermore, for a special case where the semantic state is a deterministic function of the observation, we design a cascade neural network to estimate the SRDF. For the case with perfectly known semantic source distribution, we propose a general Blahut-Arimoto algorithm to effectively compute the SRDF. Finally, experimental results validate our proposed algorithms for scenarios with an ideal Gaussian semantic source and some practical datasets. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 419,260 |
2412.20679 | Differentiable Convex Optimization Layers in Neural Architectures: Foundations and Perspectives | The integration of optimization problems within neural network architectures represents a fundamental shift from traditional approaches to handling constraints in deep learning. While it has long been known that neural networks can incorporate soft constraints with techniques such as regularization, strict adherence to hard constraints is generally more difficult. A recent advance in this field, however, has addressed this problem by enabling the direct embedding of optimization layers as differentiable components within deep networks. This paper surveys the evolution and current state of this approach, from early implementations limited to quadratic programming, to more recent frameworks supporting general convex optimization problems. We provide a comprehensive review of the background, theoretical foundations, and emerging applications of this technology. Our analysis includes detailed mathematical proofs and an examination of various use cases that demonstrate the potential of this hybrid approach. This work synthesizes developments at the intersection of optimization theory and deep learning, offering insights into both current capabilities and future research directions in this rapidly evolving field. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 521,315 |
2405.05204 | CARE-SD: Classifier-based analysis for recognizing and eliminating stigmatizing and doubt marker labels in electronic health records: model development and validation | Objective: To detect and classify features of stigmatizing and biased language in intensive care electronic health records (EHRs) using natural language processing techniques. Materials and Methods: We first created a lexicon and regular expression lists from literature-driven stem words for linguistic features of stigmatizing patient labels, doubt markers, and scare quotes within EHRs. The lexicon was further extended using Word2Vec and GPT 3.5, and refined through human evaluation. These lexicons were used to search for matches across 18 million sentences from the de-identified Medical Information Mart for Intensive Care-III (MIMIC-III) dataset. For each linguistic bias feature, 1000 sentence matches were sampled, labeled by expert clinical and public health annotators, and used to train supervised learning classifiers. Results: Lexicon development from expanded literature stem-word lists resulted in a doubt marker lexicon containing 58 expressions, and a stigmatizing labels lexicon containing 127 expressions. Classifiers for doubt markers and stigmatizing labels had the highest performance, with macro F1-scores of .84 and .79, positive-label recall and precision values ranging from .71 to .86, and accuracies aligning closely with human annotator agreement (.87). Discussion: This study demonstrated the feasibility of supervised classifiers in automatically identifying stigmatizing labels and doubt markers in medical text, and identified trends in stigmatizing language use in an EHR setting. Additional labeled data may help improve lower scare quote model performance. Conclusions: Classifiers developed in this study showed high model performance and can be applied to identify patterns and target interventions to reduce stigmatizing labels and doubt markers in healthcare systems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 452,834 |
2301.11361 | Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey | Although the field of distributed optimization is well-developed, relevant literature focused on the application of distributed optimization to multi-robot problems is limited. This survey constitutes the second part of a two-part series on distributed optimization applied to multi-robot problems. In this paper, we survey three main classes of distributed optimization algorithms -- distributed first-order methods, distributed sequential convex programming methods, and alternating direction method of multipliers (ADMM) methods -- focusing on fully-distributed methods that do not require coordination or computation by a central computer. We describe the fundamental structure of each category and note important variations around this structure, designed to address its associated drawbacks. Further, we provide practical implications of noteworthy assumptions made by distributed optimization algorithms, noting the classes of robotics problems suitable for these algorithms. Moreover, we identify important open research challenges in distributed optimization, specifically for robotics problems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 342,112 |
2409.03067 | A Comparative Study of Offline Models and Online LLMs in Fake News Detection | Fake news detection remains a critical challenge in today's rapidly evolving digital landscape, where misinformation can spread faster than ever before. Traditional fake news detection models often rely on static datasets and auxiliary information, such as metadata or social media interactions, which limits their adaptability to real-time scenarios. Recent advancements in Large Language Models (LLMs) have demonstrated significant potential in addressing these challenges due to their extensive pre-trained knowledge and ability to analyze textual content without relying on auxiliary data. However, many of these LLM-based approaches are still rooted in static datasets, with limited exploration into their real-time processing capabilities. This paper presents a systematic evaluation of both traditional offline models and state-of-the-art LLMs for real-time fake news detection. We demonstrate the limitations of existing offline models, including their inability to adapt to dynamic misinformation patterns. Furthermore, we show that newer LLM models with online capabilities, such as GPT-4, Claude, and Gemini, are better suited for detecting emerging fake news in real-time contexts. Our findings emphasize the importance of transitioning from offline to online LLM models for real-time fake news detection. Additionally, the public accessibility of LLMs enhances their scalability and democratizes the tools needed to combat misinformation. By leveraging real-time data, our work marks a significant step toward more adaptive, effective, and scalable fake news detection systems. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 485,910 |
0910.0921 | Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison | We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 4,642 |
1808.02707 | A Method for Estimating the Probability of Extremely Rare Accidents in Complex Systems | Estimating the probability of failures or accidents with aerospace systems is often necessary when new concepts or designs are introduced, as is being done for Autonomous Aircraft. If the design is safe, as it is supposed to be, accident cases are hard to find. Such analysis needs some variance reduction technique, and several algorithms exist for that; however, specific model features may cause difficulties in practice, such as the case of system models where independent agents have to autonomously accomplish missions within finite time, likely with the presence of human agents. For handling these scenarios, this paper presents a novel estimation approach, based on the combination of the well-established variance reduction technique of Interacting Particles System (IPS) with the long-standing optimization algorithm known as DIviding RECTangles (DIRECT). When combined, these two techniques yield statistically significant results for extremely low probabilities. In addition, this novel approach allows the identification of intermediate events and simplifies the evaluation of the sensitivity of the estimated probabilities to certain system parameters. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 104,817 |
2111.03519 | S-multi-SNE: Semi-Supervised Classification and Visualisation of Multi-View Data | An increasing number of multi-view data are being published by studies in several fields. This type of data corresponds to multiple data-views, each representing a different aspect of the same set of samples. We have recently proposed multi-SNE, an extension of t-SNE that produces a single visualisation of multi-view data. The multi-SNE approach provides low-dimensional embeddings of the samples, produced by being updated iteratively through the different data-views. Here, we further extend multi-SNE to a semi-supervised approach that classifies unlabelled samples by regarding the labelling information as an extra data-view. We look deeper into the performance, limitations and strengths of multi-SNE and its extension, S-multi-SNE, by applying the two methods on various multi-view datasets with different challenges. We show that by including the labelling information, the projection of the samples improves drastically and is accompanied by strong classification performance. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,199 |
2201.02141 | Deep Learning Assisted End-to-End Synthesis of mm-Wave Passive Networks with 3D EM Structures: A Study on A Transformer-Based Matching Network | This paper presents a deep learning assisted synthesis approach for direct end-to-end generation of RF/mm-wave passive matching networks with 3D EM structures. Different from prior approaches that synthesize EM structures from target circuit component values and target topologies, our proposed approach achieves direct synthesis of the passive network given the network topology, from desired performance values as input. We showcase the proposed synthesis Neural Network (NN) model on an on-chip 1:1 transformer-based impedance matching network. By leveraging parameter sharing, the synthesis NN model successfully extracts relevant features from the input impedance and load capacitors, and predicts the transformer 3D EM geometry in a 45nm SOI process that will match the standard 50$\Omega$ load to the target input impedance while absorbing the two loading capacitors. As a proof-of-concept, several example transformer geometries were synthesized, and verified in Ansys HFSS to provide the desired input impedance. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 274,462 |
2410.20558 | Neural rendering enables dynamic tomography | Interrupted X-ray computed tomography (X-CT) has been the common way to observe the deformation of materials during an experiment. While this approach is effective for quasi-static experiments, it has never been possible to reconstruct a full 3D tomography during a dynamic experiment that cannot be interrupted. In this work, we propose that neural rendering tools can be used to drive the paradigm shift to enable 3D reconstruction during dynamic events. First, we derive theoretical results to support the selection of projection angles. Via a combination of synthetic and experimental data, we demonstrate that neural radiance fields can reconstruct data modalities of interest more efficiently than conventional reconstruction methods. Finally, we develop a spatio-temporal model with a spline-based deformation field and demonstrate that such a model can reconstruct the spatio-temporal deformation of lattice samples in real-world experiments. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 502,865 |
2201.12122 | Can Wikipedia Help Offline Reinforcement Learning? | Fine-tuning reinforcement learning (RL) models has been challenging because of a lack of large-scale off-the-shelf datasets as well as high variance in transferability among different environments. Recent work has looked at tackling offline RL from the perspective of sequence modeling, with improved results as a result of the introduction of the Transformer architecture. However, when the model is trained from scratch, it suffers from slow convergence speeds. In this paper, we look to take advantage of this formulation of reinforcement learning as sequence modeling and investigate the transferability of pre-trained sequence models from other domains (vision, language) when finetuned on offline RL tasks (control, games). To this end, we also propose techniques to improve transfer between these domains. Results show consistent performance gains in terms of both convergence speed and reward on a variety of environments, accelerating training by 3-6x and achieving state-of-the-art performance in a variety of tasks using Wikipedia-pretrained and GPT2 language models. We hope that this work not only sheds light on the potential of leveraging generic sequence modeling techniques and pre-trained models for RL, but also inspires future work on sharing knowledge between generative modeling tasks of completely different domains. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 277,533 |
1904.09901 | City-scale Road Extraction from Satellite Imagery | Automated road network extraction from remote sensing imagery remains a significant challenge despite its importance in a broad array of applications. To this end, we leverage recent open source advances and the high quality SpaceNet dataset to explore road network extraction at scale, an approach we call City-scale Road Extraction from Satellite Imagery (CRESI). Specifically, we create an algorithm to extract road networks directly from imagery over city-scale regions, which can subsequently be used for routing purposes. We quantify the performance of our algorithm with the APLS and TOPO graph-theoretic metrics over a diverse 608 square kilometer test area covering four cities. We find an aggregate score of APLS = 0.73, and a TOPO score of 0.58 (a significant improvement over existing methods). Inference speed is 160 square kilometers per hour on modest hardware. Finally, we demonstrate that one can use the extracted road network for any number of applications, such as optimized routing. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 128,513 |
0904.4343 | On the Achievability of Interference Alignment in the K-User Constant MIMO Interference Channel | Interference alignment in the K-user MIMO interference channel with constant channel coefficients is considered. A novel constructive method for finding the interference alignment solution is proposed for the case where the number of transmit antennas equals the number of receive antennas (NT = NR = N), the number of transmitter-receiver pairs equals K = N + 1, and all interference alignment multiplexing gains are one. The core of the method consists of solving an eigenvalue problem that incorporates the channel matrices of all interfering links. This procedure provides insight into the feasibility of signal vector space alignment schemes in finite dimensional MIMO interference channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,599 |
1911.05758 | What do you mean, BERT? Assessing BERT as a Distributional Semantics Model | Contextualized word embeddings, i.e. vector representations for words in context, are naturally seen as an extension of previous noncontextual distributional semantic models. In this work, we focus on BERT, a deep neural network that produces contextualized embeddings and has set the state-of-the-art in several semantic tasks, and study the semantic coherence of its embedding space. While showing a tendency towards coherence, BERT does not fully live up to the natural expectations for a semantic vector space. In particular, we find that the position of the sentence in which a word occurs, while having no meaning correlates, leaves a noticeable trace on the word embeddings and disturbs similarity relationships. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 153,358 |
1307.2785 | Rising tides or rising stars?: Dynamics of shared attention on Twitter during media events | "Media events" such as political debates generate conditions of shared attention as many users simultaneously tune in with the dual screens of broadcast and social media to view and participate. Are collective patterns of user behavior under conditions of shared attention distinct from other "bursts" of activity like breaking news events? Using data from a population of approximately 200,000 politically-active Twitter users, we compare features of their behavior during eight major events during the 2012 U.S. presidential election to examine (1) the impact "media events" have on patterns of social media use compared to "typical" time and (2) whether changes during media events are attributable to changes in behavior across the entire population or are an artifact of changes in elite users' behavior. Our findings suggest that while this population became more active during media events, this additional activity reflects concentrated attention to a handful of users, hashtags, and tweets. Our work is the first study distinguishing patterns of large-scale social behavior under conditions of uncertainty and shared attention, suggesting new ways of mining information from social media to support collective sensemaking following major events. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 25,741 |
2309.13620 | PRIS: Practical robust invertible network for image steganography | Image steganography is a technique of hiding secret information inside another image, so that the secret is not visible to human eyes and can be recovered when needed. Most existing image steganography methods have low hiding robustness when the container images are affected by distortions such as Gaussian noise and lossy compression. This paper proposes PRIS to improve the robustness of image steganography; it is based on invertible neural networks and places two enhancement modules before and after the extraction process, with a 3-step training strategy. Moreover, rounding error is considered, which is always ignored by existing methods but is actually unavoidable in practice. A gradient approximation function (GAF) is also proposed to overcome the non-differentiable issue of rounding distortion. Experimental results show that our PRIS outperforms the state-of-the-art robust image steganography method in both robustness and practicability. Code is available at https://github.com/yanghangAI/PRIS, and a practical demonstration of our model at http://yanghang.site/hide/. | false | false | false | false | true | false | false | false | false | false | false | true | true | false | false | false | false | false | 394,289 |
2105.01910 | LEADOR: A Method for End-to-End Participatory Design of Autonomous Social Robots | Participatory Design (PD) in Human-Robot Interaction (HRI) typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. We present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for domain expert co-design, automation and evaluation of social robots. LEADOR starts with typical PD to co-design the interaction specifications and state and action space of the robot. It then replaces traditional offline programming or WoZ with an in-situ, online teaching phase where the domain expert can live-program or teach the robot how to behave while being embedded in the interaction context. We believe that this live teaching can be best achieved by adding a learning component to a WoZ setup, to capture experts' implicit knowledge, as they intuitively respond to the dynamics of the situation. The robot progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in-situ development also means LEADOR supports a mutual shaping approach to social robotics. We draw on two previously published, foundational works from which this (generalisable) methodology has been derived in order to demonstrate the feasibility and worth of this approach, provide concrete examples of its application, and identify limitations and opportunities when applying this framework in new environments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 233,670 |
2203.16169 | Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling | This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings -- words from one language that are introduced into another without orthographic adaptation -- and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 288,683 |
2310.11049 | Nonet at SemEval-2023 Task 6: Methodologies for Legal Evaluation | This paper describes our submission to the SemEval-2023 for Task 6 on LegalEval: Understanding Legal Texts. Our submission concentrated on three subtasks: Legal Named Entity Recognition (L-NER) for Task-B, Legal Judgment Prediction (LJP) for Task-C1, and Court Judgment Prediction with Explanation (CJPE) for Task-C2. We conducted various experiments on these subtasks and presented the results in detail, including data statistics and methodology. It is worth noting that legal tasks, such as those tackled in this research, have been gaining importance due to the increasing need to automate legal analysis and support. Our team obtained competitive rankings of 15$^{th}$, 11$^{th}$, and 1$^{st}$ in Task-B, Task-C1, and Task-C2, respectively, as reported on the leaderboard. | false | false | false | false | true | true | true | false | true | false | false | false | false | false | false | false | false | false | 400,498 |
1311.6594 | Auto-adaptative Laplacian Pyramids for High-dimensional Data Analysis | Non-linear dimensionality reduction techniques such as manifold learning algorithms have become a common way for processing and analyzing high-dimensional patterns that often have attached a target that corresponds to the value of an unknown function. Their application to new points consists in two steps: first, embedding the new data point into the low dimensional space and then, estimating the function value on the test point from its neighbors in the embedded space. However, finding the low dimension representation of a test point, while easy for simple but often not powerful enough procedures such as PCA, can be much more complicated for methods that rely on some kind of eigenanalysis, such as Spectral Clustering (SC) or Diffusion Maps (DM). Similarly, when a target function is to be evaluated, averaging methods like nearest neighbors may give unstable results if the function is noisy. Thus, the smoothing of the target function with respect to the intrinsic, low-dimensional representation that describes the geometric structure of the examined data is a challenging task. In this paper we propose Auto-adaptive Laplacian Pyramids (ALP), an extension of the standard Laplacian Pyramids model that incorporates a modified LOOCV procedure that avoids the large cost of the standard one and offers the following advantages: (i) it selects automatically the optimal function resolution (stopping time) adapted to the data and its noise, (ii) it is easy to apply as it does not require parameterization, (iii) it does not overfit the training set and (iv) it adds no extra cost compared to other classical interpolation methods. We illustrate numerically ALP's behavior on a synthetic problem and apply it to the computation of the DM projection of new patterns and to the extension to them of target function values on a radiation forecasting problem over very high dimensional patterns. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 28,669 |
2205.13485 | Benchmarking of Deep Learning models on 2D Laminar Flow behind Cylinder | The rapidly advancing field of Fluid Mechanics has recently employed Deep Learning to solve various problems within that field. In that same spirit, we try to perform Direct Numerical Simulation (DNS), which is one of the tasks in Computational Fluid Dynamics, using three fundamental architectures from the field of Deep Learning that have each been used to solve various high-dimensional problems. We train these three models in an autoencoder manner; for this, the dataset is treated as sequential frames given to the model as input. We observe that the recently introduced architecture called the Transformer significantly outperforms its counterparts on the selected dataset. Furthermore, we conclude that using Transformers for DNS in the field of CFD is an interesting research area worth exploring. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 298,951 |
1610.06461 | Efficient Estimation of Compressible State-Space Models with Application
to Calcium Signal Deconvolution | In this paper, we consider linear state-space models with compressible innovations and convergent transition matrices in order to model spatiotemporally sparse transient events. We perform parameter and state estimation using a dynamic compressed sensing framework and develop an efficient solution consisting of two nested Expectation-Maximization (EM) algorithms. Under suitable sparsity assumptions on the innovations, we prove recovery guarantees and derive confidence bounds for the state estimates. We provide simulation studies as well as application to spike deconvolution from calcium imaging data which verify our theoretical results and show significant improvement over existing algorithms. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 62,651 |
1603.00154 | Broadcast Repair for Wireless Distributed Storage Systems | In wireless distributed storage systems, storage nodes are connected by wireless channels, which are broadcast in nature. This paper exploits this unique feature to design an efficient repair mechanism, called broadcast repair, for wireless distributed storage systems with multiple-node failures. Since wireless channels are typically bandwidth limited, we advocate a new measure on repair performance called repair-transmission bandwidth, which measures the average number of packets transmitted by helper nodes per failed node. The fundamental tradeoff between storage amount and repair-transmission bandwidth is obtained. It is shown that broadcast repair outperforms cooperative repair, which is the basic repair method for wired distributed storage systems with multiple-node failures, in terms of storage efficiency and repair-transmission bandwidth, thus yielding a better tradeoff curve. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 52,748 |
2405.08032 | Exploring the Potential of Conversational AI Support for Agent-Based
Social Simulation Model Design | ChatGPT, the AI-powered chatbot with a massive user base of hundreds of millions, has become a global phenomenon. However, the use of Conversational AI Systems (CAISs) like ChatGPT for research in the field of Social Simulation is still limited. Specifically, there is no evidence of its usage in Agent-Based Social Simulation (ABSS) model design. While scepticism towards anything new is inherent to human nature, we firmly believe it is imperative to initiate the use of this innovative technology to support ABSS model design. This paper presents a proof-of-concept that demonstrates how CAISs can facilitate the development of innovative conceptual ABSS models in a concise timeframe and with minimal required upfront case-based knowledge. By employing advanced prompt engineering techniques and adhering to the Engineering ABSS framework, we have constructed a comprehensive prompt script that enables the design of ABSS models with or by the CAIS. The effectiveness of the script is demonstrated through an illustrative case study concerning the use of adaptive architecture in museums. Despite occasional inaccuracies and divergences in conversation, the CAIS proved to be a valuable companion for ABSS modellers. | true | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 453,962 |
2306.02948 | Learning under random distributional shifts | Many existing approaches for generating predictions in settings with distribution shift model distribution shifts as adversarial or low-rank in suitable representations. In various real-world settings, however, we might expect shifts to arise through the superposition of many small and random changes in the population and environment. Thus, we consider a class of random distribution shift models that capture arbitrary changes in the underlying covariate space, and dense, random shocks to the relationship between the covariates and the outcomes. In this setting, we characterize the benefits and drawbacks of several alternative prediction strategies: the standard approach that directly predicts the long-term outcome of interest, the proxy approach that directly predicts a shorter-term proxy outcome, and a hybrid approach that utilizes both the long-term policy outcome and (shorter-term) proxy outcome(s). We show that the hybrid approach is robust to the strength of the distribution shift and the proxy relationship. We apply this method to datasets in two high-impact domains: asylum-seeker assignment and early childhood education. In both settings, we find that the proposed approach results in substantially lower mean-squared error than current approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,126 |
1712.00126 | An interpretable latent variable model for attribute applicability in
the Amazon catalogue | Learning attribute applicability of products in the Amazon catalog (e.g., predicting that a shoe should have a value for size, but not for battery-type) at scale is a challenge. The need for an interpretable model is contingent on (1) the lack of ground truth training data, (2) the need to utilise prior information about the underlying latent space and (3) the ability to understand the quality of predictions on new, unseen data. To this end, we develop the MaxMachine, a probabilistic latent variable model that learns distributed binary representations, associated to sets of features that are likely to co-occur in the data. Layers of MaxMachines can be stacked such that higher layers encode more abstract information. Any set of variables can be clamped to encode prior information. We develop fast sampling based posterior inference. Preliminary results show that the model improves over the baseline in 17 out of 19 product groups and provides qualitatively reasonable predictions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 85,834 |
2104.13026 | The Hessian Screening Rule | Predictor screening rules, which discard predictors before fitting a model, have had considerable impact on the speed with which sparse regression problems, such as the lasso, can be solved. In this paper we present a new screening rule for solving the lasso path: the Hessian Screening Rule. The rule uses second-order information from the model to provide both effective screening, particularly in the case of high correlation, as well as accurate warm starts. The proposed rule outperforms all alternatives we study on simulated data sets with both low and high correlation for $\ell_1$-regularized least-squares (the lasso) and logistic regression. It also performs best in general on the real data sets that we examine. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 232,380 |
2410.08459 | Beamforming Design for Intelligent Reflecting Surface Aided Near-Field
THz Communications | Intelligent reflecting surface (IRS) operating in the terahertz (THz) band has recently gained considerable interest due to its high spectrum bandwidth. Due to the exploitation of large scale of IRS, there is a high probability that the transceivers will be situated within the near-field region of the IRS. Thus, the near-field beam split effect poses a major challenge for the design of wideband IRS beamforming, which causes the radiation beam to deviate from its intended location, leading to significant gain losses and limiting the efficient use of available bandwidths. While delay-based IRS has emerged as a potential solution, current beamforming schemes generally assume unbounded range time delays (TDs). In this letter, we first investigate the near-field beam split issue at the IRS. Then, we extend the piece-wise far-field model to the IRS, based on which, a double-layer delta-delay (DLDD) IRS beamforming scheme is proposed. Specifically, we employ an element-grouping strategy and the TD imposed on each sub-surface of IRS is achieved by a series of TD modules. This method significantly reduces the required range of TDs. Numerical results show that the proposed DLDD IRS beamforming scheme can effectively mitigate the near-field beam split and achieve near-optimal performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 497,125 |
2208.11168 | Doc2Graph: a Task Agnostic Document Understanding Framework based on
Graph Neural Networks | Geometric Deep Learning has recently attracted significant interest in a wide range of machine learning fields, including document analysis. The application of Graph Neural Networks (GNNs) has become crucial in various document-related tasks since they can unravel important structural patterns, fundamental in key information extraction processes. Previous works in the literature propose task-driven models and do not take into account the full power of graphs. We propose Doc2Graph, a task-agnostic document understanding framework based on a GNN model, to solve different tasks given different types of documents. We evaluated our approach on two challenging datasets for key information extraction in form understanding, invoice layout analysis and table detection. Our code is freely accessible on https://github.com/andreagemelli/doc2graph. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 314,326 |
2406.13128 | A New Approach for Evaluating and Improving the Performance of
Segmentation Algorithms on Hard-to-Detect Blood Vessels | Many studies regarding the vasculature of biological tissues involve the segmentation of the blood vessels in a sample followed by the creation of a graph structure to model the vasculature. The graph is then used to extract relevant vascular properties. Small segmentation errors can lead to largely distinct connectivity patterns and a high degree of variability of the extracted properties. Nevertheless, global metrics such as Dice, precision, and recall are commonly applied for measuring the performance of blood vessel segmentation algorithms. These metrics might conceal important information about the accuracy at specific regions of a sample. To tackle this issue, we propose a local vessel salience (LVS) index to quantify the expected difficulty in segmenting specific blood vessel segments. The LVS index is calculated for each vessel pixel by comparing the local intensity of the vessel with the image background around the pixel. The index is then used for defining a new accuracy metric called low-salience recall (LSRecall), which quantifies the performance of segmentation algorithms on blood vessel segments having low salience. The perspective provided by the LVS index is used to define a data augmentation procedure that can be used to improve the segmentation performance of convolutional neural networks. We show that segmentation algorithms having high Dice and recall values can display very low LSRecall values, which reveals systematic errors of these algorithms for vessels having low salience. The proposed data augmentation procedure is able to improve the LSRecall of some samples by as much as 25%. The developed methodology opens up new possibilities for comparing the performance of segmentation algorithms regarding hard-to-detect blood vessels as well as their capabilities for vascular topology preservation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 465,705 |
1209.2476 | Local Dimension of Complex Networks | Dimensionality is one of the most important properties of complex physical systems. However, this concept has only recently been considered in the context of complex networks. In this paper we further develop the previously introduced definitions of dimension in complex networks by presenting a new method to characterize the dimensionality of individual nodes. The methodology consists in obtaining patterns of dimensionality at different scales for each node, which can be used to detect regions with distinct dimensional structures as well as borders. We also apply this technique to power grid networks, showing, quantitatively, that the continental European power grid is substantially more planar than the network covering the western states of the US, which presents a topological dimension higher than its intrinsic embedding space dimension. Local dimension also successfully revealed how distinct regions of a network topology spread along the degrees of freedom when it is embedded in a metric space. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 18,510 |
2204.04584 | An improved method for constructing linear codes with small hulls | In this paper, we give a method for constructing linear codes with small hulls by generalizing the method in \cite{LCD-T-matric}. As a result, we obtain many optimal Euclidean LCD codes and Hermitian LCD codes, which improve the previously known lower bound on the largest minimum distance. We also obtain many optimal codes with one-dimensional hull. Furthermore, we give three tables of formally self-dual LCD codes. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 290,706 |
1712.07835 | Exploring Models and Data for Remote Sensing Image Caption Generation | Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe the remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotated instructions are presented to better describe the remote sensing images considering the special characteristics of remote sensing images. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image caption. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing caption. Extensive experiments on the proposed data set demonstrate that the content of the remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 87,110 |
2411.04680 | Differentially Private Continual Learning using Pre-Trained Models | This work explores the intersection of continual learning (CL) and differential privacy (DP). Crucially, continual learning models must retain knowledge across tasks, but this conflicts with the differential privacy requirement of restricting individual samples to be memorised in the model. We propose using pre-trained models to address the trade-offs between privacy and performance in a continual learning setting. More specifically, we present necessary assumptions to enable privacy-preservation and propose combining pre-trained models with parameter-free classifiers and parameter-efficient adapters that are learned under differential privacy. Our experiments demonstrate their effectiveness and provide insights into balancing the competing demands of continual learning and privacy. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 506,367 |
1311.3355 | HINO: a BFO-aligned ontology representing human molecular interactions
and pathways | Many database resources, such as Reactome, collect manually annotated reactions, interactions, and pathways from peer-reviewed publications. The interactors (e.g., a protein), interactions, and pathways in these data resources are often represented as instances using BioPAX, a standard pathway data exchange format. However, these interactions are better represented as classes (or universals) since they always occur given appropriate conditions. This study aims to represent various human interaction pathways and networks as classes via a formal ontology aligned with the Basic Formal Ontology (BFO). Towards this goal, the Human Interaction Network Ontology (HINO) was generated by extending the BFO-aligned Interaction Network Ontology (INO). All human pathways and associated processes and interactors listed in Reactome and represented in BioPAX were first converted to ontology classes by aligning them under INO. Related terms and associated relations and hierarchies from external ontologies (e.g., CHEBI and GO) were also retrieved and imported into HINO. HINO ontology terms were resolved in the linked ontology data server Ontobee. The RDF triples stored in the RDF triple store are queryable through a SPARQL program. Such an ontology system supports advanced pathway data integration and applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 28,400 |
1606.00556 | Numerical Simulation of Multi-phase Flow in Porous Media on Parallel
Computers | A parallel reservoir simulator has been developed, which is designed for large-scale black oil simulations. It handles three phases, including water, oil and gas, and three components, including water, oil and gas. This simulator can calculate traditional reservoir models and naturally fractured models. Various well operations are supported, such as water flooding, gas flooding and polymer flooding. The operation constraints can be fixed bottom-hole pressure, a fixed fluid rate, and combinations of them. The simulator is based on our in-house platform, which provides grids, cell-centred data, linear solvers, preconditioners and well modeling. The simulator and the platform use MPI for communications among computation nodes. Our simulator is capable of simulating giant reservoir models with hundreds of millions of grid cells. Numerical simulations show that our simulator matches with commercial simulators and it has excellent scalability. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 56,680 |
2105.11039 | Digital-Twin-Based Improvements to Diagnosis, Prognosis, Strategy
Assessment, and Discrepancy Checking in a Nearly Autonomous Management and
Control System | The Nearly Autonomous Management and Control System (NAMAC) is a comprehensive control system that assists plant operations by furnishing control recommendations to operators in a broad class of situations. This study refines a NAMAC system for making reasonable recommendations during complex loss-of-flow scenarios with a validated Experimental Breeder Reactor II simulator, digital twins improved by machine-learning algorithms, a multi-attribute decision-making scheme, and a discrepancy checker for identifying unexpected recommendation effects. We assessed the performance of each NAMAC component, while we demonstrated and evaluated the capability of NAMAC in a class of loss-of-flow scenarios. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 236,576 |
2310.20227 | Achieving Scalable Capacity in Wireless Mesh Networks | Wireless mesh networks play a critical role in enabling key networking scenarios in beyond-5G (B5G) and 6G networks, including integrated access and backhaul (IAB), multi-hop sidelinks, and V2X. However, it still poses a challenge to deliver scalable per-node throughput via mesh networking, which significantly limits the potential of large-scale deployment of wireless mesh networks. Existing research has achieved $O(1)$ per-node throughput in a dense network, but how to achieve scalability remains an unresolved issue for an extended wireless network where the network size increases with a constant node density. This issue prevents a wireless mesh network from large-scale deployment. To this end, this paper aims to develop a theoretical approach to achieving scalable per-node throughput in wireless mesh networks. First, the key factors that limit the per-node throughput of wireless mesh networks are analyzed, through which two major ones are identified, i.e., link sharing and interference. Next, a multi-tier hierarchical architecture is proposed to overcome the link-sharing issue. The inter-tier interference under this architecture is then mitigated by utilizing orthogonal frequency allocation between adjacent tiers, while the intra-tier interference is reduced by considering two specific transmission schemes, one is MIMO spatial multiplexing with time-division, the other is MIMO beamforming. Theoretical analysis shows that the multi-tier mesh networking architecture can achieve a per-node throughput of $\Theta(1)$ in both schemes, as long as certain conditions on network parameters including bandwidth, antenna numbers, and node numbers of each tier are satisfied. A case study on a realistic deployment of 10,000 nodes is then carried out, which demonstrates that a scalable throughput of $\Theta(1)$ is achievable with a reasonable assumption on bandwidth and antenna numbers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 404,311 |
1608.00658 | Rate Reduction for State-labelled Markov Chains with Upper Time-bounded
CSL Requirements | This paper presents algorithms for identifying and reducing a dedicated set of controllable transition rates of a state-labelled continuous-time Markov chain model. The purpose of the reduction is to make states satisfy a given requirement, specified as a CSL upper time-bounded Until formula. We distinguish two different cases, depending on the type of probability bound. A natural partitioning of the state space allows us to develop possible solutions, leading to simple algorithms for both cases. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 59,314 |
2402.01687 | "Which LLM should I use?": Evaluating LLMs for tasks performed by
Undergraduate Computer Science Students | This study evaluates the effectiveness of various large language models (LLMs) in performing tasks common among undergraduate computer science students. Although a number of research studies in the computing education community have explored the possibility of using LLMs for a variety of tasks, there is a lack of comprehensive research comparing different LLMs and evaluating which LLMs are most effective for different tasks. Our research systematically assesses some of the publicly available LLMs such as Google Bard, ChatGPT(3.5), GitHub Copilot Chat, and Microsoft Copilot across diverse tasks commonly encountered by undergraduate computer science students in India. These tasks include code explanation and documentation, solving class assignments, technical interview preparation, learning new concepts and frameworks, and email writing. Evaluation for these tasks was carried out by pre-final year and final year undergraduate computer science students and provides insights into the models' strengths and limitations. This study aims to guide students as well as instructors in selecting suitable LLMs for any specific task and offers valuable insights on how LLMs can be used constructively by students and instructors. | true | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 426,141 |
1705.07884 | Facial Affect Estimation in the Wild Using Deep Residual and
Convolutional Networks | Automated affective computing in the wild is a challenging task in the field of computer vision. This paper presents three neural network-based methods proposed for the task of facial affect estimation submitted to the First Affect-in-the-Wild challenge. These methods are based on Inception-ResNet modules redesigned specifically for the task of facial affect estimation. These methods are: Shallow Inception-ResNet, Deep Inception-ResNet, and Inception-ResNet with LSTMs. These networks extract facial features in different scales and simultaneously estimate both the valence and arousal in each frame. Root Mean Square Error (RMSE) rates of 0.4 and 0.3 are achieved for the valence and arousal respectively with corresponding Concordance Correlation Coefficient (CCC) rates of 0.04 and 0.29 using Deep Inception-ResNet method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 73,916 |
2111.05198 | Harmless interpolation in regression and classification with structured
features | Overparametrized neural networks tend to perfectly fit noisy training data yet generalize well on test data. Inspired by this empirical observation, recent work has sought to understand this phenomenon of benign overfitting or harmless interpolation in the much simpler linear model. Previous theoretical work critically assumes that either the data features are statistically independent or the input data is high-dimensional; this precludes general nonparametric settings with structured feature maps. In this paper, we present a general and flexible framework for upper bounding regression and classification risk in a reproducing kernel Hilbert space. A key contribution is that our framework describes precise sufficient conditions on the data Gram matrix under which harmless interpolation occurs. Our results recover prior independent-features results (with a much simpler analysis), but they furthermore show that harmless interpolation can occur in more general settings such as features that are a bounded orthonormal system. Furthermore, our results show an asymptotic separation between classification and regression performance in a manner that was previously only shown for Gaussian features. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,721 |
2109.09913 | Physics-based Human Motion Estimation and Synthesis from Videos | Human motion synthesis is an important problem with applications in graphics, gaming and simulation environments for robotics. Existing methods require accurate motion capture data for training, which is costly to obtain. Instead, we propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos, which are much more widely available. At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations by enforcing physics constraints and reasons about contacts in a differentiable way. This optimization yields corrected 3D poses and motions, as well as their corresponding contact forces. Results show that our physically-corrected motions significantly outperform prior work on pose estimation. We can then use these to train a generative model to synthesize future motion. We demonstrate both qualitatively and quantitatively improved motion estimation, synthesis quality and physical plausibility achieved by our method on the Human3.6m dataset~\cite{h36m_pami} as compared to prior kinematic and physics-based methods. By enabling learning of motion synthesis from video, our method paves the way for large-scale, realistic and diverse motion synthesis. Project page: \url{https://nv-tlabs.github.io/publication/iccv_2021_physics/} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 256,445 |
2209.05641 | Input Delay Compensation for Neuron Growth by PDE Backstepping | Neurological studies show that injured neurons can regain their functionality with therapeutics such as Chondroitinase ABC (ChABC). These therapeutics promote axon elongation by manipulating the injured neuron and its intercellular space to modify tubulin protein concentration. This fundamental protein is the source of axon elongation, and its spatial distribution is the state of the axon growth dynamics. Such dynamics often contain time delays because of biological processes. This work introduces an input delay compensation with state-feedback control law for axon elongation by regulating tubulin concentration. Axon growth dynamics with input delay is modeled as coupled parabolic diffusion-reaction-advection Partial Differential Equations (PDE) with a boundary governed by Ordinary Differential Equations (ODE), associated with a transport PDE. A novel feedback law is proposed by using backstepping method for input-delay compensation. The gain kernels are provided after transforming the interconnected PDE-ODE-PDE system to a target system. The stability analysis is presented by applying Lyapunov analysis to the target system in the spatial H1-norm, thereby the local exponential stability of the original error system is proved by using norm equivalence. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 317,152 |
2410.17664 | Deep Generative Models for 3D Medical Image Synthesis | Deep generative modeling has emerged as a powerful tool for synthesizing realistic medical images, driving advances in medical image analysis, disease diagnosis, and treatment planning. This chapter explores various deep generative models for 3D medical image synthesis, with a focus on Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Models (DDMs). We discuss the fundamental principles, recent advances, as well as strengths and weaknesses of these models and examine their applications in clinically relevant problems, including unconditional and conditional generation tasks like image-to-image translation and image reconstruction. We additionally review commonly used evaluation metrics for assessing image fidelity, diversity, utility, and privacy and provide an overview of current challenges in the field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 501,571 |
2112.07426 | Direct Training via Backpropagation for Ultra-low Latency Spiking Neural Networks with Multi-threshold | Spiking neural networks (SNNs) can utilize spatio-temporal information and are inherently energy-efficient, making them a good alternative to deep neural networks (DNNs). Their event-driven information processing allows SNNs to reduce the expensive computation of DNNs and save substantial energy consumption. However, high training and inference latency limits the development of deeper SNNs. SNNs usually need tens or even hundreds of time steps during the training and inference process, which increases not only latency but also energy consumption. To overcome this problem, we propose a novel training method based on backpropagation (BP) for ultra-low-latency (1-2 time steps) SNNs with multiple thresholds. To increase the information capacity of each spike, we introduce the multi-threshold Leaky Integrate-and-Fire (LIF) model. In our proposed training method, we introduce three approximated derivatives for spike activity to address the non-differentiability issue that makes direct BP-based training of SNNs difficult. The experimental results show that our proposed method achieves average accuracies of 99.56%, 93.08%, and 87.90% on MNIST, FashionMNIST, and CIFAR10, respectively, with only 2 time steps. On the CIFAR10 dataset, our proposed method achieves a 1.12% accuracy improvement over previously reported directly trained SNNs with fewer time steps. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 271,479
2201.03891 | A Saliency based Feature Fusion Model for EEG Emotion Estimation | Among the different modalities to assess emotion, electroencephalogram (EEG), representing the electrical brain activity, has achieved promising results over the last decade. Emotion estimation from EEG could help in the diagnosis or rehabilitation of certain diseases. In this paper, we propose a dual model considering two different representations of EEG feature maps: 1) a sequential representation of EEG band power, 2) an image-based representation of the feature vectors. We also propose an innovative method to combine the information based on a saliency analysis of the image-based model to promote joint learning of both model parts. The model has been evaluated on four publicly available datasets: SEED-IV, SEED, DEAP and MPED. The achieved results outperform results from state-of-the-art approaches for three of the proposed datasets with a lower standard deviation that reflects higher stability. For the sake of reproducibility, the codes and models proposed in this paper are available at https://github.com/VDelv/Emotion-EEG. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 274,962
2108.08957 | Unified Representation of Geometric Primitives for Graph-SLAM
Optimization Using Decomposed Quadrics | In Simultaneous Localization And Mapping (SLAM) problems, high-level landmarks have the potential to build compact and informative maps compared to traditional point-based landmarks. In this work, we focus on the parameterization of frequently used geometric primitives including points, lines, planes, ellipsoids, cylinders, and cones. We first present a unified representation based on quadrics, leading to a consistent and concise formulation. Then we further study a decomposed model of quadrics that discloses the symmetric and degenerated properties of a primitive. Based on the decomposition, we develop geometrically meaningful quadrics factors in the settings of a graph-SLAM problem. Then in simulation experiments, it is shown that the decomposed formulation has better efficiency and robustness to observation noises than baseline parameterizations. Finally, in real-world experiments, the proposed back-end framework is demonstrated to be capable of building compact and regularized maps. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 251,442 |
2404.19117 | Coexistence of eMBB+ and mMTC+ in Uplink Cell-Free Massive MIMO Networks | This paper tackles the problem of designing proper uplink multiple access (MA) schemes for coexistence between enhanced mobile broadband+ (eMBB+) users and massive machine-type communications+ (mMTC+) devices in a terminal-centric cell-free massive MIMO system. Specifically, the use of a time-frequency spreading technique for the mMTC+ devices has been proposed. Coupled with the assumption of imperfect channel knowledge, closed-form bounds of the achievable (ergodic) rate for the two types of data services are derived. Using suitable power control mechanisms, we show it is possible to efficiently multiplex eMBB+ and mMTC+ traffic in the same time-frequency resource grid. Numerical experiments reveal interesting trade-offs in the selection of the spreading gain and the number of serving access points within the system. Results also demonstrate that the performance of the mMTC+ devices is slightly affected by the presence of the eMBB+ users. Overall, our approach can endow good quality of service to both 6G cornerstones at once. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 450,511 |
2305.00605 | Classification and Online Clustering of Zero-Day Malware | A large amount of new malware is constantly being generated, which must not only be distinguished from benign samples, but also classified into malware families. For this purpose, investigating how existing malware families are developed and examining emerging families need to be explored. This paper focuses on the online processing of incoming malicious samples to assign them to existing families or, in the case of samples from new families, to cluster them. We experimented with seven prevalent malware families from the EMBER dataset, four in the training set and three additional new families in the test set. Based on the classification score of the multilayer perceptron, we determined which samples would be classified and which would be clustered into new malware families. We classified 97.21% of streaming data with a balanced accuracy of 95.33%. Then, we clustered the remaining data using a self-organizing map, achieving a purity from 47.61% for four clusters to 77.68% for ten clusters. These results indicate that our approach has the potential to be applied to the classification and clustering of zero-day malware into malware families. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 361,395 |
2012.09212 | Damage Modeling for the Tree-Like Network with Fractional-Order Calculus | In this paper, we propose that a tree-like network with damage can be modeled as the product of a fractional-order nominal plant and a fractional-order multiplicative disturbance, which is well structured and completely characterized by the damage amount at each damaged component. This way of modeling offers insights into the damaged network's behavior and helps us design robust controllers under uncertain damage and identify the damage. Although the main result in this paper is specialized to one model, we believe that this way of constructing a well-structured disturbance model can be applied to a class of damaged networks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 211,985
1808.03504 | Model Approximation Using Cascade of Tree Decompositions | In this paper, we present a general, multistage framework for graphical model approximation using a cascade of models such as trees. In particular, we look at the problem of covariance matrix approximation for Gaussian distributions as linear transformations of tree models. This is a new way to decompose the covariance matrix. Here, we propose an algorithm which incorporates the Cholesky factorization method to compute the decomposition matrix and thus can approximate a simple graphical model using a cascade of the Cholesky factorizations of the tree approximation transformations. The Cholesky decomposition enables us to achieve a tree-structured factor graph at each cascade stage of the algorithm, which facilitates the use of the message passing algorithm since the approximated graph has fewer loops compared to the original graph. The overall graph is a cascade of factor graphs with each factor graph being a tree. This is a different perspective on the approximation model, and algorithms such as Gaussian belief propagation can be used on this overall graph. Here, we present a theoretical result that guarantees the convergence of the proposed model approximation using the cascade of tree decompositions. In the simulations, we look at synthetic and real data and measure the performance of the proposed framework by comparing the KL divergences. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 104,946
1908.02127 | Aligning Linguistic Words and Visual Semantic Units for Image Captioning | Image captioning attempts to generate a sentence composed of several linguistic words, which are used to describe objects, attributes, and interactions in an image, denoted as visual semantic units in this paper. Based on this view, we propose to explicitly model the object interactions in semantics and geometry based on Graph Convolutional Networks (GCNs), and fully exploit the alignment between linguistic words and visual semantic units for image captioning. Particularly, we construct a semantic graph and a geometry graph, where each node corresponds to a visual semantic unit, i.e., an object, an attribute, or a semantic (geometrical) interaction between two objects. Accordingly, the semantic (geometrical) context-aware embeddings for each unit are obtained through the corresponding GCN learning processes. At each time step, a context gated attention module takes as inputs the embeddings of the visual semantic units and hierarchically aligns the current word with these units by first deciding which type of visual semantic unit (object, attribute, or interaction) the current word is about, and then finding the most correlated visual semantic units under this type. Extensive experiments are conducted on the challenging MS-COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | true | 140,931
2301.13454 | Compliance Costs of AI Technology Commercialization: A Field Deployment
Perspective | While Artificial Intelligence (AI) technologies are progressing fast, compliance costs have become a huge financial burden for AI startups, which are already constrained on research & development budgets. This situation creates a compliance trap, as many AI startups are not financially prepared to cope with a broad spectrum of regulatory requirements. Particularly, the complex and varying regulatory processes across the globe subtly give advantages to well-established and resourceful technology firms over resource-constrained AI startups [1]. The continuation of this trend may phase out the majority of AI startups and lead to giant technology firms' monopolies of AI technologies. To demonstrate the reality of the compliance trap, from a field deployment perspective, we delve into the details of compliance costs of AI commercial operations. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 342,925 |
2208.06702 | UAV-CROWD: Violent and non-violent crowd activity simulator from the perspective of UAV | Unmanned Aerial Vehicles (UAVs) have gained significant traction in recent years, particularly in the context of surveillance. However, video datasets that capture violent and non-violent human activity from an aerial point of view are scarce. To address this issue, we propose a novel baseline simulator which is capable of generating sequences of photo-realistic synthetic images of crowds engaging in various activities that can be categorized as violent or non-violent. The crowd groups are annotated with bounding boxes that are automatically computed using semantic segmentation. Our simulator is capable of generating large, randomized urban environments and is able to maintain an average of 25 frames per second on a mid-range computer with 150 concurrent crowd agents interacting with each other. We also show that when synthetic data from the proposed simulator is augmented with real-world data, binary video classification accuracy is improved by 5% on average across two different models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 312,802
2110.13511 | AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification | Deep neural networks are powerful predictors for a variety of tasks. However, they do not capture uncertainty directly. Using neural network ensembles to quantify uncertainty is competitive with approaches based on Bayesian neural networks while benefiting from better computational scalability. However, building ensembles of neural networks is a challenging task because, in addition to choosing the right neural architecture or hyperparameters for each member of the ensemble, there is an added cost of training each model. We propose AutoDEUQ, an automated approach for generating an ensemble of deep neural networks. Our approach leverages joint neural architecture and hyperparameter search to generate ensembles. We use the law of total variance to decompose the predictive variance of deep ensembles into aleatoric (data) and epistemic (model) uncertainties. We show that AutoDEUQ outperforms probabilistic backpropagation, Monte Carlo dropout, deep ensemble, distribution-free ensembles, and hyper ensemble methods on a number of regression benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,211 |
2007.06344 | End-to-End Multi-Object Tracking with Global Response Map | Most existing Multi-Object Tracking (MOT) approaches follow the Tracking-by-Detection paradigm and the data association framework, where objects are first detected and then associated. Although deep-learning based methods can noticeably improve object detection performance and also provide good appearance features for cross-frame association, the framework is not completely end-to-end, and therefore the computational cost is high while the performance is limited. To address the problem, we present a completely end-to-end approach that takes an image sequence/video as input and directly outputs the located and tracked objects of learned types. Specifically, with our introduced multi-object representation strategy, a global response map can be accurately generated over frames, from which the trajectory of each tracked object can be easily picked up, just like how a detector inputs an image and outputs the bounding boxes of each detected object. The proposed model is fast and accurate. Experimental results based on the MOT16 and MOT17 benchmarks show that our proposed online tracker achieved state-of-the-art performance on several tracking metrics. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 186,988
1708.04366 | Deep Edge-Aware Saliency Detection | There has been profound progress in visual saliency thanks to the deep learning architectures, however, there still exist three major challenges that hinder the detection performance for scenes with complex compositions, multiple salient objects, and salient objects of diverse scales. In particular, output maps of the existing methods remain low in spatial resolution causing blurred edges due to the stride and pooling operations, networks often neglect descriptive statistical and handcrafted priors that have potential to complement saliency detection results, and deep features at different layers stay mainly desolate waiting to be effectively fused to handle multi-scale salient objects. In this paper, we tackle these issues by a new fully convolutional neural network that jointly learns salient edges and saliency labels in an end-to-end fashion. Our framework first employs convolutional layers that reformulate the detection task as a dense labeling problem, then integrates handcrafted saliency features in a hierarchical manner into lower and higher levels of the deep network to leverage available information for multi-scale response, and finally refines the saliency map through dilated convolutions by imposing context. In this way, the salient edge priors are efficiently incorporated and the output resolution is significantly improved while keeping the memory requirements low, leading to cleaner and sharper object boundaries. Extensive experimental analyses on ten benchmarks demonstrate that our framework achieves consistently superior performance and attains robustness for complex scenes in comparison to the very recent state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 78,930 |
2110.01928 | Time Encoding Quantization of Bandlimited and Finite-Rate-of-Innovation
Signals | This paper studies the impact of quantization in integrate-and-fire time encoding machine (IF-TEM) sampler used for bandlimited (BL) and finite-rate-of-innovation (FRI) signals. An upper bound is derived for the mean squared error (MSE) of IF-TEM sampler and is compared against that of classical analog-to-digital converters (ADCs) with uniform sampling and quantization. The interplay between a signal's energy, bandwidth, and peak amplitude is used to identify how the MSE of IF-TEM sampler with quantization is influenced by these parameters. More precisely, the quantization step size of the IF-TEM sampler can be reduced when the maximum frequency of a bandlimited signal or the number of pulses of an FRI signal is increased. Leveraging this insight, specific parameter settings are identified for which the quantized IF-TEM sampler achieves an MSE bound that is roughly 8 dB lower than that of a classical ADC with the same number of bits. Experimental results validate the theoretical conclusions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 258,946 |
2305.05268 | Rotation Synchronization via Deep Matrix Factorization | In this paper we address the rotation synchronization problem, where the objective is to recover absolute rotations starting from pairwise ones, where the unknowns and the measures are represented as nodes and edges of a graph, respectively. This problem is an essential task for structure from motion and simultaneous localization and mapping. We focus on the formulation of synchronization via neural networks, which has only recently begun to be explored in the literature. Inspired by deep matrix completion, we express rotation synchronization in terms of matrix factorization with a deep neural network. Our formulation exhibits implicit regularization properties and, more importantly, is unsupervised, whereas previous deep approaches are supervised. Our experiments show that we achieve comparable accuracy to the closest competitors in most scenes, while working under weaker assumptions. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 363,076 |
2202.09113 | How to Manage Tiny Machine Learning at Scale: An Industrial Perspective | Tiny machine learning (TinyML) has gained widespread popularity where machine learning (ML) is democratized on ubiquitous microcontrollers, processing sensor data everywhere in real-time. To manage TinyML in the industry, where mass deployment happens, we consider the hardware and software constraints, ranging from available onboard sensors and memory size to ML-model architectures and runtime platforms. However, Internet of Things (IoT) devices are typically tailored to specific tasks and are subject to heterogeneity and limited resources. Moreover, TinyML models have been developed with different structures and are often distributed without a clear understanding of their working principles, leading to a fragmented ecosystem. Considering these challenges, we propose a framework using Semantic Web technologies to enable the joint management of TinyML models and IoT devices at scale, from modeling information to discovering possible combinations and benchmarking, and eventually facilitate TinyML component exchange and reuse. We present an ontology (semantic schema) for neural network models aligned with the World Wide Web Consortium (W3C) Thing Description, which semantically describes IoT devices. Furthermore, a Knowledge Graph of 23 publicly available ML models and six IoT devices were used to demonstrate our concept in three case studies, and we shared the code and examples to enhance reproducibility: https://github.com/Haoyu-R/How-to-Manage-TinyML-at-Scale | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | true | true | 281,097 |
2501.01425 | Free-Form Motion Control: A Synthetic Video Generation Dataset with
Controllable Camera and Object Motions | Controlling the movements of dynamic objects and the camera within generated videos is a meaningful yet challenging task. Due to the lack of datasets with comprehensive motion annotations, existing algorithms can not simultaneously control the motions of both camera and objects, resulting in limited controllability over generated contents. To address this issue and facilitate the research in this field, we introduce a Synthetic Dataset for Free-Form Motion Control (SynFMC). The proposed SynFMC dataset includes diverse objects and environments and covers various motion patterns according to specific rules, simulating common and complex real-world scenarios. The complete 6D pose information facilitates models learning to disentangle the motion effects from objects and the camera in a video. To validate the effectiveness and generalization of SynFMC, we further propose a method, Free-Form Motion Control (FMC). FMC enables independent or simultaneous control of object and camera movements, producing high-fidelity videos. Moreover, it is compatible with various personalized text-to-image (T2I) models for different content styles. Extensive experiments demonstrate that the proposed FMC outperforms previous methods across multiple scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 522,059 |
1906.03098 | Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach | Human behavior expression and experience are inherently multi-modal, and characterized by vast individual and contextual heterogeneity. To achieve meaningful human-computer and human-robot interactions, multi-modal models of the users' states (e.g., engagement) are therefore needed. Most of the existing works that try to build classifiers for the users' states assume that the data to train the models are fully labeled. Nevertheless, data labeling is costly and tedious, and also prone to subjective interpretations by the human coders. This is even more pronounced when the data are multi-modal (e.g., some users are more expressive with their facial expressions, some with their voice). Thus, building models that can accurately estimate the users' states during an interaction is challenging. To tackle this, we propose a novel multi-modal active learning (AL) approach that uses the notion of deep reinforcement learning (RL) to find an optimal policy for active selection of the users' data, needed to train the target (modality-specific) models. We investigate different strategies for multi-modal data fusion, and show that the proposed model-level fusion coupled with RL outperforms the feature-level and modality-specific models, the naive AL strategies such as random sampling, and the standard heuristics such as uncertainty sampling. We show the benefits of this approach on the task of engagement estimation from real-world child-robot interactions during an autism therapy. Importantly, we show that the proposed multi-modal AL approach can be used to efficiently personalize the engagement classifiers to the target user using a small amount of actively selected user data. | true | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 134,278
1612.04165 | Information and Power Transfer by Energy Harvesting Transmitters over a Fading Multiple Access Channel with Minimum Rate Constraints | We consider the problem of Simultaneous Wireless Information and Power Transfer (SWIPT) over a fading multiple access channel with additive Gaussian noise. The transmitters as well as the receiver harvest energy from ambient sources. We assume that the transmitters have two classes of data to send, viz. delay sensitive and delay tolerant data. Each transmitter sends the delay sensitive data at a certain minimum rate irrespective of the channel conditions (fading states). In addition, if the channel conditions are good, the delay tolerant data is sent. Along with data, the transmitters also transfer power to aid the receiver in meeting its energy requirements. In this setting, we characterize the minimum-rate capacity region, which provides the fundamental limit of transferring information and power simultaneously with minimum rate guarantees. Owing to the limitations of current technology, these limits might not be achievable in practice. Among the practical receiver structures proposed for SWIPT in the literature, two popular architectures are the time switching and power splitting receivers. For each of these architectures, we derive the minimum-rate capacity regions. We show that power splitting receivers, although more complex, provide a larger capacity region. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,488
2111.11785 | Realistic simulation of users for IT systems in cyber ranges | Generating user activity is a key capability for both evaluating security monitoring tools as well as improving the credibility of attacker analysis platforms (e.g., honeynets). In this paper, to generate this activity, we instrument each machine by means of an external agent. This agent combines both deterministic and deep learning based methods to adapt to different environments (e.g., multiple OSes, software versions, etc.), while maintaining high performance. We also propose conditional text generation models to facilitate the creation of conversations and documents to accelerate the definition of coherent, system-wide life scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 267,774
2312.07451 | Daily Assistive View Control Learning of Low-Cost Low-Rigidity Robot via
Large-Scale Vision-Language Model | In this study, we develop a simple daily assistive robot that controls its own vision according to linguistic instructions. The robot performs several daily tasks such as recording a user's face, hands, or screen, and remotely capturing images of desired locations. To construct such a robot, we combine a pre-trained large-scale vision-language model with a low-cost low-rigidity robot arm. The correlation between the robot's physical and visual information is learned probabilistically using a neural network, and changes in the probability distribution based on changes in time and environment are considered by parametric bias, which is a learnable network input variable. We demonstrate the effectiveness of this learning method by open-vocabulary view control experiments with an actual robot arm, MyCobot. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 414,927 |
1904.09163 | Transfer Entropy: where Shannon meets Turing | Transfer entropy is capable of capturing nonlinear source-destination relations between multi-variate time series. It is a measure of association between source data that are transformed into destination data via a set of linear transformations between their probability mass functions. The resulting tensor formalism is used to show that in specific cases, e.g., in the case the system consists of three stochastic processes, bivariate analysis suffices to distinguish true relations from false relations. This allows us to determine the causal structure as far as encoded in the probability mass functions of noisy data. The tensor formalism was also used to derive the Data Processing Inequality for transfer entropy. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 128,305 |
2106.08170 | How Modular Should Neural Module Networks Be for Systematic Generalization? | Neural Module Networks (NMNs) aim at Visual Question Answering (VQA) via composition of modules that tackle a sub-task. NMNs are a promising strategy to achieve systematic generalization, i.e., overcoming biasing factors in the training distribution. However, the aspects of NMNs that facilitate systematic generalization are not fully understood. In this paper, we demonstrate that the degree of modularity of the NMN has a large influence on systematic generalization. In a series of experiments on three VQA datasets (VQA-MNIST, SQOOP, and CLEVR-CoGenT), our results reveal that tuning the degree of modularity, especially at the image encoder stage, achieves substantially higher systematic generalization. These findings lead to new NMN architectures that outperform previous ones in terms of systematic generalization. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 241,207
1405.0054 | LTLf and LDLf Monitoring: A Technical Report | Runtime monitoring is one of the central tasks to provide operational decision support to running business processes, and check on-the-fly whether they comply with constraints and rules. We study runtime monitoring of properties expressed in LTL on finite traces (LTLf) and in its extension LDLf. LDLf is a powerful logic that captures all of monadic second order logic on finite traces, which is obtained by combining regular expressions and LTLf, adopting the syntax of propositional dynamic logic (PDL). Interestingly, in spite of its greater expressivity, LDLf has exactly the same computational complexity as LTLf. We show that LDLf is able to capture, in the logic itself, not only the constraints to be monitored, but also the de-facto standard RV-LTL monitors. This makes it possible to declaratively capture monitoring metaconstraints, and check them by relying on usual logical services instead of ad-hoc algorithms. This, in turn, enables flexible monitoring of constraints depending on the monitoring state of other constraints, e.g., "compensation" constraints that are only checked when others are detected to be violated. In addition, we devise a direct translation of LDLf formulas into nondeterministic automata, avoiding the detour through Büchi or alternating automata, and we use it to implement a monitoring plug-in for the PROM suite. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 32,735
2305.18452 | Generating Driving Scenes with Diffusion | In this paper we describe a learned method of traffic scene generation designed to simulate the output of the perception system of a self-driving car. In our "Scene Diffusion" system, inspired by latent diffusion, we use a novel combination of diffusion and object detection to directly create realistic and physically plausible arrangements of discrete bounding boxes for agents. We show that our scene generation model is able to adapt to different regions in the US, producing scenarios that capture the intricacies of each region. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 369,068 |
2407.16541 | QPT V2: Masked Image Modeling Advances Visual Scoring | Quality assessment and aesthetics assessment aim to evaluate the perceived quality and aesthetics of visual content. Current learning-based methods suffer greatly from the scarcity of labeled data and usually perform sub-optimally in terms of generalization. Although masked image modeling (MIM) has achieved noteworthy advancements across various high-level tasks (e.g., classification, detection, etc.), its potential for perceptual scoring remains underexplored. In this work, we take on a novel perspective to investigate its capabilities in terms of quality- and aesthetics-awareness. To this end, we propose Quality- and aesthetics-aware pretraining (QPT V2), the first pretraining framework based on MIM that offers a unified solution to quality and aesthetics assessment. To perceive the high-level semantics and fine-grained details, pretraining data is curated. To comprehensively encompass quality- and aesthetics-related factors, degradation is introduced. To capture multi-scale quality and aesthetic information, model structure is modified. Extensive experimental results on 11 downstream benchmarks clearly show the superior performance of QPT V2 in comparison with current state-of-the-art approaches and other pretraining paradigms. Code and models will be released at \url{https://github.com/KeiChiTse/QPT-V2}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 475,634
2007.14550 | An Index-based Deterministic Asymptotically Optimal Algorithm for Constrained Multi-armed Bandit Problems | For the model of constrained multi-armed bandit, we show that by construction there exists an index-based deterministic asymptotically optimal algorithm. The optimality is achieved by the convergence of the probability of choosing an optimal feasible arm to one over infinite horizon. The algorithm is built upon Locatelli et al.'s "anytime parameter-free thresholding" algorithm under the assumption that the optimal value is known. We provide a finite-time bound to the probability of the asymptotic optimality given as 1-O(|A|Te^{-T}) where T is the horizon size and A is the set of the arms in the bandit. We then study a relaxed-version of the algorithm in a general form that estimates the optimal value and discuss the asymptotic optimality of the algorithm after a sufficiently large T with examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,432
2410.21554 | Information diffusion assumptions can distort our understanding of social network dynamics | To analyze the flow of information online, experts often rely on platform-provided data from social media companies, which typically attribute all resharing actions to an original poster. This obscures the true dynamics of how information spreads online, as users can be exposed to content in various ways. While most researchers analyze data as it is provided by the platform and overlook this issue, some attempt to infer the structure of these information cascades. However, the absence of ground truth about actual diffusion cascades makes verifying the efficacy of these efforts impossible. This study investigates the implications of the common practice of ignoring reconstruction altogether. Two case studies involving data from Twitter and Bluesky reveal that reconstructing cascades significantly alters the identification of influential users, therefore affecting downstream analyses in general. We also propose a novel reconstruction approach that allows us to evaluate the effects of different assumptions made during the cascade inference procedure. Analysis of the diffusion of over 40,000 true and false news stories on Twitter reveals that the assumptions made during the reconstruction procedure drastically distort both microscopic and macroscopic properties of cascade networks. This work highlights the challenges of studying information spreading processes on complex networks and has significant implications for the broader study of digital platforms. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 503,294
2410.24002 | Assessing the Efficacy of Classical and Deep Neuroimaging Biomarkers in Early Alzheimer's Disease Diagnosis | Alzheimer's disease (AD) is the leading cause of dementia, and its early detection is crucial for effective intervention, yet current diagnostic methods often fall short in sensitivity and specificity. This study aims to detect significant indicators of early AD by extracting and integrating various imaging biomarkers, including radiomics, hippocampal texture descriptors, cortical thickness measurements, and deep learning features. We analyze structural magnetic resonance imaging (MRI) scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohorts, utilizing comprehensive image analysis and machine learning techniques. Our results show that combining multiple biomarkers significantly improves detection accuracy. Radiomics and texture features emerged as the most effective predictors for early AD, achieving AUCs of 0.88 and 0.72 for AD and MCI detection, respectively. Although deep learning features proved to be less effective than traditional approaches, incorporating age with other biomarkers notably enhanced MCI detection performance. Additionally, our findings emphasize the continued importance of classical imaging biomarkers in the face of modern deep-learning approaches, providing a robust framework for early AD diagnosis. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 504,283
2209.11414 | Enabling Homogeneous GNNs to Handle Heterogeneous Graphs via Relation Embedding | Graph Neural Networks (GNNs) have been generalized to process the heterogeneous graphs by various approaches. Unfortunately, these approaches usually model the heterogeneity via various complicated modules. This paper aims to propose a simple yet effective framework to assign adequate ability to the homogeneous GNNs to handle the heterogeneous graphs. Specifically, we propose Relation Embedding based Graph Neural Network (RE-GNN), which employs only one parameter per relation to embed the importance of distinct types of relations and node-type-specific self-loop connections. To optimize these relation embeddings and the model parameters simultaneously, a gradient scaling factor is proposed to constrain the embeddings to converge to suitable values. Besides, we interpret the proposed RE-GNN from two perspectives, and theoretically demonstrate that our RE-GCN possesses more expressive power than GTN (which is a typical heterogeneous GNN, and it can generate meta-paths adaptively). Extensive experiments demonstrate that our RE-GNN can effectively and efficiently handle the heterogeneous graphs and can be applied to various homogeneous GNNs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 319,177
2203.04769 | Autoregressive based Drift Detection Method | In the classic machine learning framework, models are trained on historical data and used to predict future values. It is assumed that the data distribution does not change over time (stationarity). However, in real-world scenarios, the data generation process changes over time and the model has to adapt to the new incoming data. This phenomenon is known as concept drift and leads to a decrease in the predictive model's performance. In this study, we propose a new concept drift detection method based on autoregressive models called ADDM. This method can be integrated into any machine learning algorithm, from deep neural networks to a simple linear regression model. Our results show that this new concept drift detection method outperforms the state-of-the-art drift detection methods, both on synthetic data sets and real-world data sets. Our approach is theoretically guaranteed as well as empirically effective for the detection of various concept drifts. In addition to the drift detector, we propose a new method of concept drift adaptation based on the severity of the drift. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 284,599
2101.11472 | Spatial-Channel Transformer Network for Trajectory Prediction on the Traffic Scenes | Predicting motion of surrounding agents is critical to real-world applications of tactical path planning for autonomous driving. Due to the complex temporal dependencies and social interactions of agents, on-line trajectory prediction is a challenging task. With the development of attention mechanism in recent years, transformer model has been applied in natural language sequence processing first and then image processing. In this paper, we present a Spatial-Channel Transformer Network for trajectory prediction with attention functions. Instead of RNN models, we employ transformer model to capture the spatial-temporal features of agents. A channel-wise module is inserted to measure the social interaction between agents. We find that the Spatial-Channel Transformer Network achieves promising results on real-world trajectory prediction datasets on the traffic scenes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 217,291
2209.14161 | Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models | Recently, Supervised Contrastive Learning (SCL) has been shown to achieve excellent performance in most classification tasks. In SCL, a neural network is trained to optimize two objectives: pull an anchor and positive samples together in the embedding space, and push the anchor apart from the negatives. However, these two different objectives may conflict, requiring trade-offs between them during optimization. In this work, we formulate the SCL problem as a Multi-Objective Optimization problem for the fine-tuning phase of RoBERTa language model. Two methods are utilized to solve the optimization problem: (i) the linear scalarization (LS) method, which minimizes a weighted linear combination of per-task losses; and (ii) the Exact Pareto Optimal (EPO) method which finds the intersection of the Pareto front with a given preference vector. We evaluate our approach on several GLUE benchmark tasks, without using data augmentations, memory banks, or generating adversarial examples. The empirical results show that the proposed learning strategy significantly outperforms a strong competitive contrastive learning baseline. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 320,161
2501.10617 | Mutual Regression Distance | The maximum mean discrepancy and Wasserstein distance are popular distance measures between distributions and play important roles in many machine learning problems such as metric learning, generative modeling, domain adaption, and clustering. However, since they are functions of pair-wise distances between data points in two distributions, they do not exploit the potential manifold properties of data such as smoothness and hence are not effective in measuring the dissimilarity between the two distributions in the form of manifolds. In this paper, different from existing measures, we propose a novel distance called Mutual Regression Distance (MRD) induced by a constrained mutual regression problem, which can exploit the manifold property of data. We prove that MRD is a pseudometric that satisfies almost all the axioms of a metric. Since the optimization of the original MRD is costly, we provide a tight MRD and a simplified MRD, based on which a heuristic algorithm is established. We also provide kernel variants of MRDs that are more effective in handling nonlinear data. Our MRDs especially the simplified MRDs have much lower computational complexity than the Wasserstein distance. We provide theoretical guarantees, such as robustness, for MRDs. Finally, we apply MRDs to distribution clustering, generative models, and domain adaptation. The numerical results demonstrate the effectiveness and superiority of MRDs compared to the baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 525,589 |
2403.10778 | HCF-Net: Hierarchical Context Fusion Network for Infrared Small Object Detection | Infrared small object detection is an important computer vision task involving the recognition and localization of tiny objects in infrared images, which usually contain only a few pixels. However, it encounters difficulties due to the diminutive size of the objects and the generally complex backgrounds in infrared images. In this paper, we propose a deep learning method, HCF-Net, that significantly improves infrared small object detection performance through multiple practical modules. Specifically, it includes the parallelized patch-aware attention (PPA) module, dimension-aware selective integration (DASI) module, and multi-dilated channel refiner (MDCR) module. The PPA module uses a multi-branch feature extraction strategy to capture feature information at different scales and levels. The DASI module enables adaptive channel selection and fusion. The MDCR module captures spatial features of different receptive field ranges through multiple depth-separable convolutional layers. Extensive experimental results on the SIRST infrared single-frame image dataset show that the proposed HCF-Net performs well, surpassing other traditional and deep learning models. Code is available at https://github.com/zhengshuchen/HCFNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,350
2101.00350 | Multi-Image Steganography Using Deep Neural Networks | Steganography is the science of hiding a secret message within an ordinary public message. Over the years, steganography has been used to encode a lower resolution image into a higher resolution image by simple methods like LSB manipulation. We aim to utilize deep neural networks for the encoding and decoding of multiple secret images inside a single cover image of the same resolution. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 214,051 |
2209.11919 | Concordance based Survival Cobra with regression type weak learners | In this paper, we predict conditional survival functions through a combined regression strategy. We take weak learners as different random survival trees. We propose to maximize concordance in the right-censored set up to find the optimal parameters. We explore two approaches, a usual survival cobra and a novel weighted predictor based on the concordance index. Our proposed formulations use two different norms, say, Max-norm and Frobenius norm, to find a proximity set of predictions from query points in the test dataset. We illustrate our algorithms through three different real-life dataset implementations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 319,349 |
2208.04116 | UFNRec: Utilizing False Negative Samples for Sequential Recommendation | Sequential recommendation models are primarily optimized to distinguish positive samples from negative ones during training in which negative sampling serves as an essential component in learning the evolving user preferences through historical records. Except for randomly sampling negative samples from a uniformly distributed subset, many delicate methods have been proposed to mine negative samples with high quality. However, due to the inherent randomness of negative sampling, false negative samples are inevitably collected in model training. Current strategies mainly focus on removing such false negative samples, which leads to overlooking potential user interests, lack of recommendation diversity, less model robustness, and suffering from exposure bias. To this end, we propose a novel method that can Utilize False Negative samples for sequential Recommendation (UFNRec) to improve model performance. We first devise a simple strategy to extract false negative samples and then transfer these samples to positive samples in the following training process. Furthermore, we construct a teacher model to provide soft labels for false negative samples and design a consistency loss to regularize the predictions of these samples from the student model and the teacher model. To the best of our knowledge, this is the first work to utilize false negative samples instead of simply removing them for the sequential recommendation. Experiments on three benchmark public datasets are conducted using three widely applied SOTA models. The experiment results demonstrate that our proposed UFNRec can effectively draw information from false negative samples and further improve the performance of SOTA models. The code is available at https://github.com/UFNRec-code/UFNRec. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 311,999
2211.09545 | A Reinforcement Learning Approach for Process Parameter Optimization in Additive Manufacturing | Process optimization for metal additive manufacturing (AM) is crucial to ensure repeatability, control microstructure, and minimize defects. Despite efforts to address this via the traditional design of experiments and statistical process mapping, there is limited insight on an on-the-fly optimization framework that can be integrated into a metal AM system. Additionally, most of these methods, being data-intensive, cannot be supported by a metal AM alloy or system due to budget restrictions. To tackle this issue, the article introduces a Reinforcement Learning (RL) methodology transformed into an optimization problem in the realm of metal AM. An off-policy RL framework based on Q-learning is proposed to find optimal laser power ($P$) - scan velocity ($v$) combinations with the objective of maintaining steady-state melt pool depth. For this, an experimentally validated Eagar-Tsai formulation is used to emulate the Laser-Directed Energy Deposition environment, where the laser operates as the agent across the $P-v$ space such that it maximizes rewards for a melt pool depth closer to the optimum. The culmination of the training process yields a Q-table where the state ($P,v$) with the highest Q-value corresponds to the optimized process parameter. The resultant melt pool depths and the mapping of Q-values to the $P-v$ space show congruence with experimental observations. The framework, therefore, provides a model-free approach to learning without any prior. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 331,012
2301.08391 | Brain Model State Space Reconstruction Using an LSTM Neural Network | Objective Kalman filtering has previously been applied to track neural model states and parameters, particularly at the scale relevant to EEG. However, this approach lacks a reliable method to determine the initial filter conditions and assumes that the distribution of states remains Gaussian. This study presents an alternative, data-driven method to track the states and parameters of neural mass models (NMMs) from EEG recordings using deep learning techniques, specifically an LSTM neural network. Approach An LSTM filter was trained on simulated EEG data generated by a neural mass model using a wide range of parameters. With an appropriately customised loss function, the LSTM filter can learn the behaviour of NMMs. As a result, it can output the state vector and parameters of NMMs given observation data as the input. Main Results Test results using simulated data yielded correlations with R squared of around 0.99 and verified that the method is robust to noise and can be more accurate than a nonlinear Kalman filter when the initial conditions of the Kalman filter are not accurate. As an example of real-world application, the LSTM filter was also applied to real EEG data that included epileptic seizures, and revealed changes in connectivity strength parameters at the beginnings of seizures. Significance Tracking the state vector and parameters of mathematical brain models is of great importance in the area of brain modelling, monitoring, imaging and control. This approach has no need to specify the initial state vector and parameters, which is very difficult to do in practice because many of the variables being estimated cannot be measured directly in physiological experiments. This method may be applied using any neural mass model and, therefore, provides a general, novel, efficient approach to estimate brain model variables that are often difficult to measure. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 341,180
2003.03353 | Reappraising the distribution of the number of edge crossings of graphs on a sphere | Many real transportation and mobility networks have their vertices placed on the surface of the Earth. In such embeddings, the edges laid on that surface may cross. In his pioneering research, Moon analyzed the distribution of the number of crossings on complete graphs and complete bipartite graphs whose vertices are located uniformly at random on the surface of a sphere assuming that vertex placements are independent from each other. Here we revise his derivation of that variance in the light of recent theoretical developments on the variance of crossings and computer simulations. We show that Moon's formulae are inaccurate in predicting the true variance and provide exact formulae. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 167,198
2402.04050 | Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models | With the emergence of pretrained vision-language models (VLMs), considerable efforts have been devoted to fine-tuning them for downstream tasks. Despite the progress made in designing efficient fine-tuning methods, such methods require access to the model's parameters, which can be challenging as model owners often opt to provide their models as a black box to safeguard model ownership. This paper proposes a \textbf{C}ollabo\textbf{ra}tive \textbf{F}ine-\textbf{T}uning (\textbf{CraFT}) approach for fine-tuning black-box VLMs to downstream tasks, where one only has access to the input prompts and the output predictions of the model. CraFT comprises two modules, a prompt generation module for learning text prompts and a prediction refinement module for enhancing output predictions in residual style. Additionally, we introduce an auxiliary prediction-consistent loss to promote consistent optimization across these modules. These modules are optimized by a novel collaborative training algorithm. Extensive experiments on few-shot classification over 15 datasets demonstrate the superiority of CraFT. The results show that CraFT achieves a decent gain of about 12\% with 16-shot datasets and only 8,000 queries. Moreover, CraFT trains faster and uses only about 1/80 of the memory footprint for deployment, while sacrificing only 1.62\% compared to the white-box method. Our code is publicly available at https://github.com/mrflogs/CraFT . | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 427,310
2304.03279 | Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark | Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics--designing agents that are Pareto improvements in both safety and capabilities. | false | false | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | 356,737
1604.07103 | Voting with Feet: Who are Leaving Hillary Clinton and Donald Trump? | From a crowded field with 17 candidates, Hillary Clinton and Donald Trump have emerged as the two front-runners in the 2016 U.S. presidential campaign. The two candidates each boast more than 5 million followers on Twitter, and at the same time both have witnessed hundreds of thousands of people leave their camps. In this paper we attempt to characterize individuals who have left Hillary Clinton and Donald Trump between September 2015 and March 2016. Our study focuses on three dimensions of social demographics: social capital, gender, and age. Within each camp, we compare the characteristics of the current followers with former followers, i.e., individuals who have left since September 2015. We use the number of followers to measure social capital, and profile images to infer gender and age. For classifying gender, we train a convolutional neural network (CNN). For age, we use the Face++ API. Our study shows that for both candidates followers with more social capital are more likely to leave (or switch camps). For both candidates females make up a larger presence among unfollowers than among current followers. Somewhat surprisingly, the effect is particularly pronounced for Clinton. Lastly, middle-aged individuals are more likely to leave Trump, and the young are more likely to leave Hillary Clinton. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 55,046 |
1309.6027 | Energy-Efficient Optimization for Wireless Information and Power Transfer in Large-Scale MIMO Systems Employing Energy Beamforming | In this letter, we consider a large-scale multiple-input multiple-output (MIMO) system where the receiver should harvest energy from the transmitter by wireless power transfer to support its wireless information transmission. The energy beamforming in the large-scale MIMO system is utilized to address the challenging problem of long-distance wireless power transfer. Furthermore, considering the limitation of the power in such a system, this letter focuses on the maximization of the energy efficiency of information transmission (bit per Joule) while satisfying the quality-of-service (QoS) requirement, i.e. delay constraint, by jointly optimizing transfer duration and transmit power. By solving the optimization problem, we derive an energy-efficient resource allocation scheme. Numerical results validate the effectiveness of the proposed scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 27,215
2308.08525 | Likelihood-Based Text-to-Image Evaluation with Patch-Level Perceptual and Semantic Credit Assignment | Text-to-image synthesis has made encouraging progress and attracted lots of public attention recently. However, popular evaluation metrics in this area, like the Inception Score and Fréchet Inception Distance, incur several issues. First of all, they cannot explicitly assess the perceptual quality of generated images and poorly reflect the semantic alignment of each text-image pair. Also, they are inefficient and need to sample thousands of images to stabilise their evaluation results. In this paper, we propose to evaluate text-to-image generation performance by directly estimating the likelihood of the generated images using a pre-trained likelihood-based text-to-image generative model, i.e., a higher likelihood indicates better perceptual quality and better text-image alignment. To prevent the likelihood of being dominated by the non-crucial part of the generated image, we propose several new designs to develop a credit assignment strategy based on the semantic and perceptual significance of the image patches. In the experiments, we evaluate the proposed metric on multiple popular text-to-image generation models and datasets in accessing both the perceptual quality and the text-image alignment. Moreover, it can successfully assess the generation ability of these models with as few as a hundred samples, making it very efficient in practice. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 385,939
2208.07127 | Deception for Cyber Defence: Challenges and Opportunities | Deception is rapidly growing as an important tool for cyber defence, complementing existing perimeter security measures to rapidly detect breaches and data theft. One of the factors limiting the use of deception has been the cost of generating realistic artefacts by hand. Recent advances in Machine Learning have, however, created opportunities for scalable, automated generation of realistic deceptions. This vision paper describes the opportunities and challenges involved in developing models to mimic many common elements of the IT stack for deception effects. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 312,946 |
0911.3298 | Understanding the Principles of Recursive Neural networks: A Generative Approach to Tackle Model Complexity | Recursive Neural Networks are non-linear adaptive models that are able to learn deep structured information. However, these models have not yet been broadly accepted. This fact is mainly due to their inherent complexity: not only are they extremely complex information processing models, but they also have a computationally expensive learning phase. The most popular training method for these models is back-propagation through the structure. This algorithm has been revealed not to be the most appropriate for structured processing due to problems of convergence, while more sophisticated training methods enhance the speed of convergence at the expense of increasing significantly the computational cost. In this paper, we firstly perform an analysis of the underlying principles behind these models aimed at understanding their computational power. Secondly, we propose an approximate second order stochastic learning algorithm. The proposed algorithm dynamically adapts the learning rate throughout the training phase of the network without incurring excessively expensive computational effort. The algorithm operates in both on-line and batch modes. Furthermore, the resulting learning scheme is robust against the vanishing gradients problem. The advantages of the proposed algorithm are demonstrated with a real-world application example. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 4,959
2301.05347 | Towards Reconciling Usability and Usefulness of Explainable AI Methodologies | Interactive Artificial Intelligence (AI) agents are becoming increasingly prevalent in society. However, application of such systems without understanding them can be problematic. Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision. Explainable AI (XAI) seeks to bridge the knowledge gap, between developers and end-users, by offering insights into how an AI algorithm functions. Many modern algorithms focus on making the AI model "transparent", i.e. unveil the inherent functionality of the agent in a simpler format. However, these approaches do not cater to end-users of these systems, as users may not possess the requisite knowledge to understand these explanations in a reasonable amount of time. Therefore, to be able to develop suitable XAI methods, we need to understand the factors which influence subjective perception and objective usability. In this paper, we present a novel user-study which studies four differing XAI modalities commonly employed in prior work for explaining AI behavior, i.e. Decision Trees, Text, Programs. We study these XAI modalities in the context of explaining the actions of a self-driving car on a highway, as driving is an easily understandable real-world task and self-driving cars is a keen area of interest within the AI community. Our findings highlight internal consistency issues wherein participants perceived language explanations to be significantly more usable, however participants were better able to objectively understand the decision making process of the car through a decision tree explanation. Our work also provides further evidence of importance of integrating user-specific and situational criteria into the design of XAI systems. Our findings show that factors such as computer science experience, and watching the car succeed or fail can impact the perception and usefulness of the explanation. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 340,327 |
2410.04988 | Efficient Model-Based Reinforcement Learning Through Optimistic Thompson Sampling | Learning complex robot behavior through interactions with the environment necessitates principled exploration. Effective strategies should prioritize exploring regions of the state-action space that maximize rewards, with optimistic exploration emerging as a promising direction aligned with this idea and enabling sample-efficient reinforcement learning. However, existing methods overlook a crucial aspect: the need for optimism to be informed by a belief connecting the reward and state. To address this, we propose a practical, theoretically grounded approach to optimistic exploration based on Thompson sampling. Our model structure is the first that allows for reasoning about joint uncertainty over transitions and rewards. We apply our method on a set of MuJoCo and VMAS continuous control tasks. Our experiments demonstrate that optimistic exploration significantly accelerates learning in environments with sparse rewards, action penalties, and difficult-to-explore regions. Furthermore, we provide insights into when optimism is beneficial and emphasize the critical role of model uncertainty in guiding exploration. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 495,521 |
2311.15243 | ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection | Out-of-distribution (OOD) detection methods often exploit auxiliary outliers to train model identifying OOD samples, especially discovering challenging outliers from auxiliary outliers dataset to improve OOD detection. However, they may still face limitations in effectively distinguishing between the most challenging OOD samples that are much like in-distribution (ID) data, i.e., ID-like samples. To this end, we propose a novel OOD detection framework that discovers ID-like outliers using CLIP (Radford et al., 2021) from the vicinity space of the ID samples, thus helping to identify these most challenging OOD samples. Then a prompt learning framework is proposed that utilizes the identified ID-like outliers to further leverage the capabilities of CLIP for OOD detection. Benefiting from the powerful CLIP, we only need a small number of ID samples to learn the prompts of the model without exposing other auxiliary outlier datasets. By focusing on the most challenging ID-like OOD samples and elegantly exploiting the capabilities of CLIP, our method achieves superior few-shot learning performance on various real-world image datasets (e.g., in 4-shot OOD detection on the ImageNet-1k dataset, our method reduces the average FPR95 by 12.16% and improves the average AUROC by 2.76%, compared to state-of-the-art methods). Code is available at https://github.com/ycfate/ID-like. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 410,439 |
1704.01665 | Learning Combinatorial Optimization Algorithms over Graphs | The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 71,301 |
2002.10033 | Status drives how we cite: Evidence from thousands of authors | Researchers cite works for a variety of reasons, including some having nothing to do with acknowledging influence. The distribution of different citation types in the literature, and which papers attract which types, is poorly understood. We investigate high-influence and low-influence citations and the mechanisms producing them using 17,154 ground-truth citation types provided via survey by 9,380 authors systematically sampled across academic fields. Overall, 54% of citations denote little-to-no influence and these citations are concentrated among low status (lightly cited) papers. In contrast, high-influence citations are concentrated among high status (highly cited) papers through a number of steps that resemble a pipeline. Authors discover highly cited papers earlier in their projects, more often through social contacts, and read them more closely. Papers' status, above and beyond any quality differences, directly helps determine their pipeline: experimentally revealing or hiding citation counts during the survey shows that low counts cause lowered perceptions of quality. Accounting for citation types thus reveals a "double status effect": in addition to affecting how often a work is cited, status affects how meaningfully it is cited. Consequently, highly cited papers are even more influential than their raw citation counts suggest. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 165,261 |