Schema (one record per entry):
- id: string, 9 to 16 characters
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (one flag per category)
- __index_level_0__: int64, range 0 to 541k
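The boolean columns together form a multi-hot label vector per record. As a minimal sketch (plain Python, no particular dataset library assumed, and assuming each record is a dict keyed by the column names above), decoding one record's flags into category names looks like:

```python
# Category columns in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(record: dict) -> list[str]:
    """Return the names of the category flags set to True in a record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Hypothetical record shaped like the first entry below,
# abridged to its single True flag.
record = {"id": "2407.13975", "cs.CV": True}
print(decode_labels(record))  # ['cs.CV']
```

The same helper works unchanged on multi-label records (e.g. a row with both cs.LG and cs.CR set), since it simply filters the fixed column list.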
2407.13975
Personalized Privacy Protection Mask Against Unauthorized Facial Recognition
Face recognition (FR) can be abused for privacy intrusion. Governments, private companies, or even individual attackers can collect facial images by web scraping to build an FR system identifying human faces without their consent. This paper introduces Chameleon, which learns to generate a user-centric personalized privacy protection mask, coined as P3-Mask, to protect facial images against unauthorized FR with three salient features. First, we use a cross-image optimization to generate one P3-Mask for each user instead of tailoring facial perturbation for each facial image of a user. It enables efficient and instant protection even for users with limited computing resources. Second, we incorporate a perceptibility optimization to preserve the visual quality of the protected facial images. Third, we strengthen the robustness of P3-Mask against unknown FR models by integrating focal diversity-optimized ensemble learning into the mask generation process. Extensive experiments on two benchmark datasets show that Chameleon outperforms three state-of-the-art methods with instant protection and minimal degradation of image quality. Furthermore, Chameleon enables cost-effective FR authorization using the P3-Mask as a personalized de-obfuscation key, and it demonstrates high resilience against adaptive adversaries.
Categories: cs.CV | __index_level_0__: 474,588
2210.06268
On a Canonical Distributed Controller in the Behavioral Framework
Control in a classical transfer function or state-space setting typically views a controller as a signal processor: sensor outputs are mapped to actuator inputs. In behavioral system theory, control is simply viewed as interconnection; the interconnection of a plant with a controller. In this paper we consider the problem of control of interconnected systems in a behavioral setting. The behavioral setting is especially fit for modelling interconnected systems, because it allows for the interconnection of subsystems without imposing inputs and outputs. We introduce a so-called canonical distributed controller that implements a given interconnected behavior that is desired, provided that necessary and sufficient conditions hold true. The controller design can be performed in a decentralized manner, in the sense that a local controller only depends on the local system behavior. Regularity of interconnections is an important property in behavioral control that yields feedback interconnections. We provide conditions under which the interconnection of this distributed controller with the plant is regular. Furthermore, we show that the interconnections of subsystems of the canonical distributed controller are regular if and only if the interconnections of the plant and desired behavior are regular.
Categories: cs.SY | __index_level_0__: 323,196
1908.06556
Transfer in Deep Reinforcement Learning using Knowledge Graphs
Text adventure games, in which players must make sense of the world through text descriptions and declare actions through text descriptions, provide a stepping stone toward grounding action in language. Prior work has demonstrated that using a knowledge graph as a state representation and question-answering to pre-train a deep Q-network facilitates faster control policy transfer. In this paper, we explore the use of knowledge graphs as a representation for domain knowledge transfer for training text-adventure playing reinforcement learning agents. Our methods are tested across multiple computer generated and human authored games, varying in domain and complexity, and demonstrate that our transfer learning methods let us learn a higher-quality control policy faster.
Categories: cs.AI, cs.LG, cs.CL | __index_level_0__: 142,047
2102.04140
Quantifying and Mitigating Privacy Risks of Contrastive Learning
Data has been the key factor driving the development of machine learning (ML) during the past decade. However, high-quality data, in particular labeled data, is often hard and expensive to collect. To leverage large-scale unlabeled data, self-supervised learning, represented by contrastive learning, is introduced. The objective of contrastive learning is to map different views derived from a training sample (e.g., through data augmentation) closer in their representation space, while mapping different views derived from different samples farther apart. In this way, a contrastive model learns to generate informative representations for data samples, which are then used to perform downstream ML tasks. Recent research has shown that machine learning models are vulnerable to various privacy attacks. However, most of the current efforts concentrate on models trained with supervised learning. Meanwhile, data samples' informative representations learned with contrastive learning may cause severe privacy risks as well. In this paper, we perform the first privacy analysis of contrastive learning through the lens of membership inference and attribute inference. Our experimental results show that contrastive models trained on image datasets are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models. The former is due to the fact that contrastive models are less prone to overfitting, while the latter is caused by contrastive models' capability of representing data samples expressively. To remedy this situation, we propose the first privacy-preserving contrastive learning mechanism, Talos, relying on adversarial training. Empirical results show that Talos can successfully mitigate attribute inference risks for contrastive models while maintaining their membership privacy and model utility.
Categories: cs.LG, cs.CR | __index_level_0__: 219,003
2105.01853
Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep Unrolling
We consider a two-stage stochastic optimization problem, in which a long-term optimization variable is coupled with a set of short-term optimization variables in both objective and constraint functions. Although two-stage stochastic optimization plays a critical role in various engineering and scientific applications, efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints. To overcome the challenge caused by tightly coupled stochastic constraints, we first establish a two-stage primal-dual decomposition (PDD) method to decompose the two-stage problem into a long-term problem and a family of short-term subproblems. Then we propose a PDD-based stochastic successive convex approximation (PDD-SSCA) algorithmic framework to find KKT solutions for two-stage stochastic optimization problems. At each iteration, PDD-SSCA first runs a short-term sub-algorithm to find stationary points of the short-term subproblems associated with a mini-batch of the state samples. Then it constructs a convex surrogate for the long-term problem based on the deep unrolling of the short-term sub-algorithm and the back propagation method. Finally, the convex surrogate problem is solved to generate the next iterate. We establish the almost sure convergence of PDD-SSCA and customize the algorithmic framework to solve two important application problems. Simulations show that PDD-SSCA can achieve superior performance over existing solutions.
Categories: cs.LG, cs.IT | __index_level_0__: 233,650
1612.08274
Globally Optimal Object Tracking with Fully Convolutional Networks
Tracking is one of the most important but still difficult tasks in computer vision and pattern recognition. The main difficulties in the tracking field are appearance variation and occlusion. Most traditional tracking methods set the parameters or templates to track target objects in advance and should be modified accordingly. Thus, we propose a new and robust tracking method using a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to seek the globally optimal path through all frames of video. Our proposed method solves the object appearance variation problem with the use of an FCN and deals with occlusion by DP. We show that our method is effective in tracking various single objects through video frames.
Categories: cs.CV | __index_level_0__: 66,052
1805.09821
A Corpus for Multilingual Document Classification in Eight Languages
Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only a few languages (English, German, French and Spanish) and almost all published works focus on the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of the multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings respectively. Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope by these means to foster research in this important area.
Categories: cs.CL | __index_level_0__: 98,507
1709.06483
Magnus integrators on multicore CPUs and GPUs
In the present paper we consider numerical methods to solve the discrete Schr\"odinger equation with a time dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We will consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. Both of these settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) as well as the recently developed commutator-free Magnus integrators and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of $10$ in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
Categories: cs.CE, Other | __index_level_0__: 81,110
1705.07410
Finding $K$ Contingency List in Power Networks using a New Model of Dependency
Smart grid systems are composed of power and communication network components. The components in either network exhibit complex dependencies on components in their own as well as the other network to drive their functionality. Existing models fail to capture these complex dependencies. In this paper, we restrict to the dependencies in the power network and propose the Multi-scale Implicative Interdependency Relation (MIIR) model that addresses the existing limitations. A formal description of the model along with its working dynamics and a brief validation with respect to the 2011 Southwest blackout are provided. Utilizing the MIIR model, the $K$ Contingency List problem is proposed. For a given time instant, the problem solves for a set of $K$ entities in a power network which when failed at that time instant would cause the maximum number of entities to fail eventually. Owing to the problem being NP-complete, we devised a Mixed Integer Program (MIP) to obtain the optimal solution and a polynomial time sub-optimal heuristic. The efficacy of the heuristic with respect to the MIP is compared by using different bus system data. In general, the heuristic is shown to provide near optimal solution at a much faster time than the MIP.
Categories: cs.SY | __index_level_0__: 73,831
2408.00684
Assessing the Variety of a Concept Space Using an Unbiased Estimate of Rao's Quadratic Index
Past research relates design creativity to 'divergent thinking,' i.e., how well the concept space is explored during the early phase of design. Researchers have argued that generating several concepts would increase the chances of producing better design solutions. 'Variety' is one of the parameters by which one can quantify the breadth of a concept space explored by the designers. It is useful to assess variety at the conceptual design stage because, at this stage, designers have the freedom to explore different solution principles so as to satisfy a design problem with substantially novel concepts. This article elaborates on and critically examines the existing variety metrics from the engineering design literature, discussing their limitations. A new distance-based variety metric is proposed, along with a prescriptive framework to support the assessment process. This framework uses the SAPPhIRE model of causality as a knowledge representation scheme to measure the real-valued distance between two design concepts. The proposed framework is implemented in a software tool called 'VariAnT.' Furthermore, the tool's application is demonstrated through an illustrative example.
Categories: cs.CL | __index_level_0__: 477,932
2306.13188
Uniform Convergence with Square-Root Lipschitz Loss
We establish generic uniform convergence guarantees for Gaussian data in terms of the Rademacher complexity of the hypothesis class and the Lipschitz constant of the square root of the scalar loss function. We show how these guarantees substantially generalize previous results based on smoothness (Lipschitz constant of the derivative), and allow us to handle the broader class of square-root-Lipschitz losses, which includes also non-smooth loss functions appropriate for studying phase retrieval and ReLU regression, as well as rederive and better understand "optimistic rate" and interpolation learning guarantees.
Categories: cs.LG | __index_level_0__: 375,193
2407.15459
Text-to-Battery Recipe: A language modeling-based protocol for automatic battery recipe extraction and retrieval
Recent studies have increasingly applied natural language processing (NLP) to automatically extract experimental research data from the extensive battery materials literature. Despite the complex process involved in battery manufacturing -- from material synthesis to cell assembly -- there has been no comprehensive study systematically organizing this information. In response, we propose a language modeling-based protocol, Text-to-Battery Recipe (T2BR), for the automatic extraction of end-to-end battery recipes, validated using a case study on batteries containing LiFePO4 cathode material. We report machine learning-based paper filtering models, screening 2,174 relevant papers from the keyword-based search results, and unsupervised topic models to identify 2,876 paragraphs related to cathode synthesis and 2,958 paragraphs related to cell assembly. Then, focusing on the two topics, two deep learning-based named entity recognition models are developed to extract a total of 30 entities -- including precursors, active materials, and synthesis methods -- achieving F1 scores of 88.18% and 94.61%. The accurate extraction of entities enables the systematic generation of 165 end-to-end recipes of LiFePO4 batteries. Our protocol and results offer valuable insights into specific trends, such as associations between precursor materials and synthesis methods, or combinations between different precursor materials. We anticipate that our findings will serve as a foundational knowledge base for facilitating battery-recipe information retrieval. The proposed protocol will significantly accelerate the review of battery material literature and catalyze innovations in battery design and development.
Categories: cs.CL | __index_level_0__: 475,195
1811.10761
Speaker Diarization With Lexical Information
This work presents a novel approach to leverage lexical information for speaker diarization. We introduce a speaker diarization system that can directly integrate lexical as well as acoustic information into a speaker clustering process. Thus, we propose an adjacency matrix integration technique to integrate word level speaker turn probabilities with speaker embeddings in a comprehensive way. Our proposed method works without any reference transcript. Words and word boundary information are provided by an ASR system. We show that our proposed method improves a baseline speaker diarization system solely based on speaker embeddings, achieving a meaningful improvement on the CALLHOME American English Speech dataset.
Categories: cs.CL | __index_level_0__: 114,574
2012.05653
Internet of Buoys: An Internet of Things Implementation at Sea
Internet of Things (IoT) applications are emerging in many different areas, including maritime environments. One of the applications in this area is the monitoring of buoys at sea. To realize wireless tracking of buoys, an accurate prediction of the path loss in an open-sea environment is essential. So far, channel measurements at sea have mainly been conducted with antennas placed a couple of meters above the sea surface, which is higher than the buoys themselves. Therefore, we investigated the validity of the published channel models at sea by means of path loss measurements using a LoRa link with a transmitter antenna height of 0.35 m and a base station antenna height of 2.65 m and 5.2 m. Our results show that the round earth loss model is not accurate at these antenna heights. The ITU-R P.2001-3 model and a model by Bullington show a better agreement with our measurements. However, the difference between our two measurement campaigns shows that more investigation is needed on the dependence of the path loss on the sea state. Additionally, the availability of Sigfox, Narrowband Internet of Things (NB-IoT), and The Things Network at sea has been explored. We found that NB-IoT and Sigfox can be used for IoT applications in the tested area at low antenna heights.
Categories: cs.SY | __index_level_0__: 210,852
2205.09307
Support-set based Multi-modal Representation Enhancement for Video Captioning
Video captioning is a challenging task that necessitates a thorough comprehension of visual scenes. Existing methods follow a typical one-to-one mapping, which concentrates on a limited sample space while ignoring the intrinsic semantic associations between samples, resulting in rigid and uninformative expressions. To address this issue, we propose a novel and flexible framework, namely Support-set based Multi-modal Representation Enhancement (SMRE) model, to mine rich information in a semantic subspace shared between samples. Specifically, we propose a Support-set Construction (SC) module to construct a support-set to learn underlying connections between samples and obtain semantic-related visual elements. During this process, we design a Semantic Space Transformation (SST) module to constrain relative distance and administrate multi-modal interactions in a self-supervised way. Extensive experiments on MSVD and MSR-VTT datasets demonstrate that our SMRE achieves state-of-the-art performance.
Categories: cs.CV | __index_level_0__: 297,218
2004.11302
Improving the Decision-Making Process of Self-Adaptive Systems by Accounting for Tactic Volatility
When self-adaptive systems encounter changes within their surrounding environments, they enact tactics to perform necessary adaptations. For example, a self-adaptive cloud-based system may have a tactic that initiates additional computing resources when response time thresholds are surpassed, or there may be a tactic to activate a specific security measure when an intrusion is detected. In real-world environments, these tactics frequently experience tactic volatility, i.e., variable behavior during the execution of the tactic. Unfortunately, current self-adaptive approaches do not account for tactic volatility in their decision-making processes, and merely assume that tactics do not experience volatility. This limitation creates uncertainty in the decision-making process and may adversely impact the system's ability to effectively and efficiently adapt. Additionally, many processes do not properly account for volatility that may affect the system's Service Level Agreement (SLA). This can limit the system's ability to act proactively, especially when utilizing tactics that contain latency. To address the challenge of sufficiently accounting for tactic volatility, we propose a Tactic Volatility Aware (TVA) solution. Using Multiple Regression Analysis (MRA), TVA enables self-adaptive systems to accurately estimate the cost and time required to execute tactics. TVA also utilizes Autoregressive Integrated Moving Average (ARIMA) for time series forecasting, allowing the system to proactively maintain specifications.
Categories: cs.AI, cs.LG | __index_level_0__: 173,874
2205.08879
Control of Dynamic Financial Networks (The Extended Version)
The current global financial system forms a highly interconnected network where a default in one of its nodes can propagate to many other nodes, causing a catastrophic avalanche effect. In this paper we consider the problem of reducing the financial contagion by introducing some targeted interventions that can mitigate the cascaded failure effects. We consider a multi-step dynamic model of clearing payments and introduce an external control term that represents corrective cash injections made by a ruling authority. The proposed control model can be cast and efficiently solved as a linear program. We show via numerical examples that the proposed approach can significantly reduce the default propagation by applying small targeted cash injections.
Categories: cs.CE, cs.SY | __index_level_0__: 297,087
2406.11689
Lightweight Model Pre-training via Language Guided Knowledge Distillation
This paper studies the problem of pre-training for small models, which is essential for many mobile devices. Current state-of-the-art methods on this problem transfer the representational knowledge of a large network (as a Teacher) into a smaller model (as a Student) using self-supervised distillation, improving the performance of the small model on downstream tasks. However, existing approaches are insufficient in extracting the crucial knowledge that is useful for discerning categories in downstream tasks during the distillation process. In this paper, for the first time, we introduce language guidance to the distillation process and propose a new method named Language-Guided Distillation (LGD) system, which uses category names of the target downstream task to help refine the knowledge transferred between the teacher and student. To this end, we utilize a pre-trained text encoder to extract semantic embeddings from language and construct a textual semantic space called Textual Semantics Bank (TSB). Furthermore, we design a Language-Guided Knowledge Aggregation (LGKA) module to construct the visual semantic space, also named Visual Semantics Bank (VSB). The task-related knowledge is transferred by driving a student encoder to mimic the similarity score distribution inferred by a teacher over TSB and VSB. Compared with other small models obtained by either ImageNet pre-training or self-supervised distillation, experimental results show that the distilled lightweight model using the proposed LGD method presents state-of-the-art performance and is validated on various downstream tasks, including classification, detection, and segmentation. We have made the code available at https://github.com/mZhenz/LGD.
Categories: cs.CV | __index_level_0__: 465,007
2311.14223
Information Velocity of Cascaded Gaussian Channels with Feedback
We consider a line network of nodes, connected by additive white Gaussian noise channels, equipped with local feedback. We study the velocity at which information spreads over this network. For transmission of a data packet, we give an explicit positive lower bound on the velocity, for any packet size. Furthermore, we consider streaming, that is, transmission of data packets generated at a given average arrival rate. We show that a positive velocity exists as long as the arrival rate is below the individual Gaussian channel capacity, and provide an explicit lower bound. Our analysis involves applying pulse-amplitude modulation to the data (successively in the streaming case), and using linear mean-squared error estimation at the network nodes. Due to the analog linear nature of the scheme, the results extend to any additive noise. For general noise, we derive exponential error-probability bounds. Moreover, for (sub-)Gaussian noise we show a doubly-exponential behavior, which reduces to the celebrated Schalkwijk-Kailath scheme when considering a single node. Viewing the constellation as an "analog source", we also provide bounds on the exponential decay of the mean-squared error of source transmission over the network.
Categories: cs.IT | __index_level_0__: 410,037
2204.09952
Recovering Patient Journeys: A Corpus of Biomedical Entities and Relations on Twitter (BEAR)
Text mining and information extraction for the medical domain has focused on scientific text generated by researchers. However, their direct access to individual patient experiences or patient-doctor interactions can be limited. Information provided on social media, e.g., by patients and their relatives, complements the knowledge in scientific text. It reflects the patient's journey and their subjective perspective on the process of developing symptoms, being diagnosed and offered a treatment, being cured or learning to live with a medical condition. The value of this type of data is therefore twofold: Firstly, it offers direct access to people's perspectives. Secondly, it might cover information that is not available elsewhere, including self-treatment or self-diagnoses. Named entity recognition and relation extraction are methods to structure information that is available in unstructured text. However, existing medical social media corpora focused on a comparably small set of entities and relations and particular domains, rather than putting the patient into the center of analyses. With this paper we contribute a corpus with a rich set of annotation layers following the motivation to uncover and model patients' journeys and experiences in more detail. We label 14 entity classes (incl. environmental factors, diagnostics, biochemical processes, patients' quality-of-life descriptions, pathogens, medical conditions, and treatments) and 20 relation classes (e.g., prevents, influences, interactions, causes) most of which have not been considered before for social media data. The publicly available dataset consists of 2,100 tweets with approx. 6,000 entity and 3,000 relation annotations. In a corpus analysis we find that over 80 % of documents contain relevant entities. Over 50 % of tweets express relations which we consider essential for uncovering patients' narratives about their journeys.
Categories: cs.IR, cs.CL | __index_level_0__: 292,620
2311.02096
Variational Autoencoders for Noise Reduction in Industrial LLRF Systems
Industrial particle accelerators inherently operate in much dirtier environments than typical research accelerators. This leads to an increase in noise both in the RF system and in other electronic systems. Combined with the fact that industrial accelerators are mass produced, there is less attention given to optimizing the performance of an individual system. As a result, industrial systems tend to underperform considering their hardware capabilities. With the growing demand for accelerators for medical sterilization, food irradiation, cancer treatment, and imaging, improving the signal processing of these machines will increase the margin for the deployment of these systems. Our work focuses on using machine learning techniques to reduce the noise of RF signals used for pulse-to-pulse feedback in industrial accelerators. We will review our algorithms, simulation results, and results working with measured data. We will then discuss next steps for deployment and testing on an industrial system.
Categories: cs.LG | __index_level_0__: 405,293
2012.06369
Interacting discovery processes on complex networks
Innovation is the driving force of human progress. Recent urn models reproduce well the dynamics through which the discovery of a novelty may trigger further ones, in an expanding space of opportunities, but neglect the effects of social interactions. Here we focus on the mechanisms of collective exploration and we propose a model in which many urns, representing different explorers, are coupled through the links of a social network and exploit opportunities coming from their contacts. We study different network structures showing, both analytically and numerically, that the pace of discovery of an explorer depends on its centrality in the social network. Our model sheds light on the role that social structures play in discovery processes.
Categories: cs.SI | __index_level_0__: 211,093
2203.14520
Optimistic Online Convex Optimization in Dynamic Environments
In this paper, we study the optimistic online convex optimization problem in dynamic environments. Existing works have shown that Ader enjoys an $O\left(\sqrt{\left(1+P_T\right)T}\right)$ dynamic regret upper bound, where $T$ is the number of rounds, and $P_T$ is the path length of the reference strategy sequence. However, Ader is not environment-adaptive. Based on the fact that optimism provides a framework for implementing environment-adaptive, we replace Greedy Projection (GP) and Normalized Exponentiated Subgradient (NES) in Ader with Optimistic-GP and Optimistic-NES respectively, and name the corresponding algorithm ONES-OGP. We also extend the doubling trick to the adaptive trick, and introduce three characteristic terms naturally arise from optimism, namely $M_T$, $\widetilde{M}_T$ and $V_T+1_{L^2\rho\left(\rho+2 P_T\right)\leqslant\varrho^2 V_T}D_T$, to replace the dependence of the dynamic regret upper bound on $T$. We elaborate ONES-OGP with adaptive trick and its subgradient variation version, all of which are environment-adaptive.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
288,044
2310.11636
A Uniform Language to Explain Decision Trees
The formal XAI community has studied a plethora of interpretability queries aiming to understand the classifications made by decision trees. However, a more uniform understanding of what questions we can hope to answer about these models, traditionally deemed to be easily interpretable, has remained elusive. In an initial attempt to understand uniform languages for interpretability, Arenas et al. (2021) proposed FOIL, a logic for explaining black-box ML models, and showed that it can express a variety of interpretability queries. However, we show that FOIL is limited in two important senses: (i) it is not expressive enough to capture some crucial queries, and (ii) its model agnostic nature results in a high computational complexity for decision trees. In this paper, we carefully craft two fragments of first-order logic that allow for efficiently interpreting decision trees: Q-DT-FOIL and its optimization variant OPT-DT-FOIL. We show that our proposed logics can express not only a variety of interpretability queries considered by previous literature, but also elegantly allows users to specify different objectives the sought explanations should optimize for. Using finite model-theoretic techniques, we show that the different ingredients of Q-DT-FOIL are necessary for its expressiveness, and yet that queries in Q-DT-FOIL can be evaluated with a polynomial number of queries to a SAT solver, as well as their optimization versions in OPT-DT-FOIL. Besides our theoretical results, we provide a SAT-based implementation of the evaluation for OPT-DT-FOIL that is performant on industry-size decision trees.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
400,712
2305.12520
SLaDe: A Portable Small Language Model Decompiler for Optimized Assembly
Decompilation is a well-studied area with numerous high-quality tools available. These are frequently used for security tasks and to port legacy code. However, they regularly generate difficult-to-read programs and require a large amount of engineering effort to support new programming languages and ISAs. Recent interest in neural approaches has produced portable tools that generate readable code. However, to-date such techniques are usually restricted to synthetic programs without optimization, and no models have evaluated their portability. Furthermore, while the code generated may be more readable, it is usually incorrect. This paper presents SLaDe, a Small Language model Decompiler based on a sequence-to-sequence transformer trained over real-world code. We develop a novel tokenizer and exploit no-dropout training to produce high-quality code. We utilize type-inference to generate programs that are more readable and accurate than standard analytic and recent neural approaches. Unlike standard approaches, SLaDe can infer out-of-context types and unlike neural approaches, it generates correct code. We evaluate SLaDe on over 4,000 functions from ExeBench on two ISAs and at two optimizations levels. SLaDe is up to 6 times more accurate than Ghidra, a state-of-the-art, industrial-strength decompiler and up to 4 times more accurate than the large language model ChatGPT and generates significantly more readable code than both.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
366,040
1911.02191
Experience Sharing Between Cooperative Reinforcement Learning Agents
The idea of experience sharing between cooperative agents naturally emerges from our understanding of how humans learn. Our evolution as a species is tightly linked to the ability to exchange learned knowledge with one another. It follows that experience sharing (ES) between autonomous and independent agents could become the key to accelerate learning in cooperative multiagent settings. We investigate if randomly selecting experiences to share can increase the performance of deep reinforcement learning agents, and propose three new methods for selecting experiences to accelerate the learning process. Firstly, we introduce Focused ES, which prioritizes unexplored regions of the state space. Secondly, we present Prioritized ES, in which temporal-difference error is used as a measure of priority. Finally, we devise Focused Prioritized ES, which combines both previous approaches. The methods are empirically validated in a control problem. While sharing randomly selected experiences between two Deep Q-Network agents shows no improvement over a single agent baseline, we show that the proposed ES methods can successfully outperform the baseline. In particular, the Focused ES accelerates learning by a factor of 2, reducing by 51% the number of episodes required to complete the task.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
152,305
1911.09713
Using Socially Expressive Mixed Reality Arms for Enhancing Low-Expressivity Robots
Expressivity--the use of multiple modalities to convey internal state and intent of a robot--is critical for interaction. Yet, due to cost, safety, and other constraints, many robots lack high degrees of physical expressivity. This paper explores using mixed reality to enhance a robot with limited expressivity by adding virtual arms that extend the robot's expressiveness. The arms, capable of a range of non-physically-constrained gestures, were evaluated in a between-subject study ($n=34$) where participants engaged in a mixed reality mathematics task with a socially assistive robot. The study results indicate that the virtual arms added a higher degree of perceived emotion, helpfulness, and physical presence to the robot. Users who reported a higher perceived physical presence also found the robot to have a higher degree of social presence, ease of use, usefulness, and had a positive attitude toward using the robot with mixed reality. The results also demonstrate the users' ability to distinguish the virtual gestures' valence and intent.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
154,587
1905.06512
Incorporating Sememes into Chinese Definition Modeling
Chinese definition modeling is a challenging task that generates a dictionary definition in Chinese for a given Chinese word. To accomplish this task, we construct the Chinese Definition Modeling Corpus (CDM), which contains triples of word, sememes and the corresponding definition. We present two novel models to improve Chinese definition modeling: the Adaptive-Attention model (AAM) and the Self- and Adaptive-Attention Model (SAAM). AAM successfully incorporates sememes for generating the definition with an adaptive attention mechanism. It has the capability to decide which sememes to focus on and when to pay attention to sememes. SAAM further replaces recurrent connections in AAM with self-attention and relies entirely on the attention mechanism, reducing the path length between word, sememes and definition. Experiments on CDM demonstrate that by incorporating sememes, our best proposed model can outperform the state-of-the-art method by +6.0 BLEU.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
131,014
2009.12280
Locally orderless tensor networks for classifying two- and three-dimensional medical images
Tensor networks are factorisations of high rank tensors into networks of lower rank tensors and have primarily been used to analyse quantum many-body problems. Tensor networks have seen a recent surge of interest in relation to supervised learning tasks with a focus on image classification. In this work, we improve upon the matrix product state (MPS) tensor networks that can operate on one-dimensional vectors to be useful for working with 2D and 3D medical images. We treat small image regions as orderless, squeeze their spatial information into feature dimensions and then perform MPS operations on these locally orderless regions. These local representations are then aggregated in a hierarchical manner to retain global structure. The proposed locally orderless tensor network (LoTeNet) is compared with relevant methods on three datasets. The architecture of LoTeNet is fixed in all experiments and we show it requires lesser computational resources to attain performance on par or superior to the compared methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
197,381
2401.14758
Off-Policy Primal-Dual Safe Reinforcement Learning
Primal-dual safe RL methods commonly perform iterations between the primal update of the policy and the dual update of the Lagrange Multiplier. Such a training paradigm is highly susceptible to the error in cumulative cost estimation since this estimation serves as the key bond connecting the primal and dual update processes. We show that this problem causes significant underestimation of cost when using off-policy methods, leading to the failure to satisfy the safety constraint. To address this issue, we propose conservative policy optimization, which learns a policy in a constraint-satisfying area by considering the uncertainty in cost estimation. This improves constraint satisfaction but also potentially hinders reward maximization. We then introduce local policy convexification to help eliminate such suboptimality by gradually reducing the estimation uncertainty. We provide theoretical interpretations of the joint coupling effect of these two ingredients and further verify them by extensive experiments. Results on benchmark tasks show that our method not only achieves an asymptotic performance comparable to state-of-the-art on-policy methods while using much fewer samples, but also significantly reduces constraint violation during training. Our code is available at https://github.com/ZifanWu/CAL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
424,213
2203.02902
Domain Adaptation with Factorizable Joint Shift
Existing domain adaptation (DA) usually assumes the domain shift comes from either the covariates or the labels. However, in real-world applications, samples selected from different domains could have biases in both the covariates and the labels. In this paper, we propose a new assumption, Factorizable Joint Shift (FJS), to handle the co-existence of sampling bias in covariates and labels. Although allowing for the shift from both sides, FJS assumes the independence of the bias between the two factors. We provide theoretical and empirical understandings about when FJS degenerates to prior assumptions and when it is necessary. We further propose Joint Importance Aligning (JIA), a discriminative learning objective to obtain joint importance estimators for both supervised and unsupervised domain adaptation. Our method can be seamlessly incorporated with existing domain adaptation algorithms for better importance estimation and weighting on the training data. Experiments on a synthetic dataset demonstrate the advantage of our method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
283,904
2101.00264
Formation Flight Control of Multi-UAV System Using Neighbor-based Trajectory Generation Topology
In this paper, a distributed formation flight control topology for Leader-Follower formation structure is presented. Such topology depends in the first place on online generation of the trajectories that should be followed by the agents in the formation. The trajectory of each agent is planned during execution depending on its neighbors and considering that the desired reference trajectory is only given to the leader. Simulation using MATLAB/SIMULINK is done on a formation of quadrotor UAVs to illustrate the proposed method. The topology shows very good results in achieving the formation and following the reference trajectory.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
214,025
2210.05080
How to construct the symmetric cycle of length 5 using Haj\'os construction with an adapted Rank Genetic Algorithm
In 2020 Bang-Jensen et. al. generalized the Haj\'os join of two graphs to the class of digraphs and generalized several results for vertex colorings in digraphs. Although, as a consequence of these results, a digraph can be obtained by Haj\'os constructions (directed Haj\'os join and identifying non-adjacent vertices), determining the Haj\'os constructions to obtain the digraph is a complex problem. In particular, Bang-Jensen et al. posed the problem of determining the Haj\'os operations to construct the symmetric 5-cycle from the complete symmetric digraph of order 3 using only Haj\'os constructions. We successfully adapted a rank-based genetic algorithm to solve this problem by the introduction of innovative recombination and mutation operators from graph theory. The Haj\'os Join became the recombination operator and the identification of independent vertices became the mutation operator. In this way, we were able to obtain a sequence of only 16 Haj\'os operations to construct the symmetric cycle of order 5.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
322,692
2003.08111
Transformer Networks for Trajectory Forecasting
Most recent successes on forecasting the people motion are based on LSTM models and all most recent progress has been achieved by modelling the social interaction among people and the people interaction with the scene. We question the use of the LSTM models and propose the novel use of Transformer Networks for trajectory forecasting. This is a fundamental switch from the sequential step-by-step processing of LSTMs to the only-attention-based memory mechanisms of Transformers. In particular, we consider both the original Transformer Network (TF) and the larger Bidirectional Transformer (BERT), state-of-the-art on all natural language processing tasks. Our proposed Transformers predict the trajectories of the individual people in the scene. These are "simple" models because each person is modelled separately without any complex human-human nor scene interaction terms. In particular, the TF model without bells and whistles yields the best score on the largest and most challenging trajectory forecasting benchmark of TrajNet. Additionally, its extension which predicts multiple plausible future trajectories performs on par with more engineered techniques on the 5 datasets of ETH + UCY. Finally, we show that Transformers may deal with missing observations, as it may be the case with real sensor data. Code is available at https://github.com/FGiuliari/Trajectory-Transformer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
168,639
1809.10219
Trading information complexity for error II: the case of a large error and external information complexity
Two problems are studied in this paper. (1) How much external or internal information cost is required to compute a Boolean-valued function with an error at most $1/2-\epsilon$ for a small $\epsilon$? It is shown that information cost of order $\epsilon^2$ is necessary and of order $\epsilon$ is sufficient. (2) How much external information cost can be saved to compute a function with a small error $\epsilon>0$ comparing to the case when no error is allowed? It is shown that information cost of order at least $\epsilon$ and at most $h(\sqrt{\epsilon})$ can be saved. Except the $O(h(\sqrt{\epsilon}))$ upper bound, the other three bounds are tight. For distribution $\mu$ that is equally distributed on $(0,0)$ and $(1,1)$, it is shown that $IC^{ext}_\mu(XOR, \epsilon)=1-2\epsilon$ where XOR is the two-bit xor function. This equality seems to be the first example of exact information complexity when an error is allowed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
108,848
2210.08884
HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
Domain adaptation framework of GANs has achieved great progress in recent years as a main successful approach of training contemporary GANs in the case of very limited training data. In this work, we significantly improve this framework by proposing an extremely compact parameter space for fine-tuning the generator. We introduce a novel domain-modulation technique that allows to optimize only a 6 thousand-dimensional vector instead of 30 million weights of StyleGAN2 to adapt to a target domain. We apply this parameterization to the state-of-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space. Additionally, we propose a new regularization loss that considerably enhances the diversity of the fine-tuned generator. Inspired by the reduction in the size of the optimizing parameter space we consider the problem of multi-domain adaptation of GANs, i.e. setting when the same model can adapt to several domains depending on the input query. We propose the HyperDomainNet that is a hypernetwork that predicts our parameterization given the target domain. We empirically confirm that it can successfully learn a number of domains at once and may even generalize to unseen domains. Source code can be found at https://github.com/MACderRu/HyperDomainNet
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
324,340
2212.04385
BEVBert: Multimodal Map Pre-training for Language-guided Navigation
Large-scale pre-training has shown promising results on the vision-and-language navigation (VLN) task. However, most existing pre-training methods employ discrete panoramas to learn visual-textual associations. This requires the model to implicitly correlate incomplete, duplicate observations within the panoramas, which may impair an agent's spatial understanding. Thus, we propose a new map-based pre-training paradigm that is spatial-aware for use in VLN. Concretely, we build a local metric map to explicitly aggregate incomplete observations and remove duplicates, while modeling navigation dependency in a global topological map. This hybrid design can balance the demand of VLN for both short-term reasoning and long-term planning. Then, based on the hybrid map, we devise a pre-training framework to learn a multimodal map representation, which enhances spatial-aware cross-modal reasoning thereby facilitating the language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based pre-training route for VLN, and the proposed method achieves state-of-the-art on four VLN benchmarks.
false
false
false
false
true
false
false
true
true
false
false
true
false
false
false
false
false
false
335,428
1805.04827
An attention-based Bi-GRU-CapsNet model for hypernymy detection between compound entities
Named entities are usually composable and extensible. Typical examples are names of symptoms and diseases in medical areas. To distinguish these entities from general entities, we name them \textit{compound entities}. In this paper, we present an attention-based Bi-GRU-CapsNet model to detect hypernymy relationship between compound entities. Our model consists of several important components. To avoid the out-of-vocabulary problem, English words or Chinese characters in compound entities are fed into the bidirectional gated recurrent units. An attention mechanism is designed to focus on the differences between the two compound entities. Since there are some different cases in hypernymy relationship between compound entities, capsule network is finally employed to decide whether the hypernymy relationship exists or not. Experimental results demonstrate
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
97,318
2305.02937
End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoders
It is challenging to extract semantic meanings directly from audio signals in spoken language understanding (SLU), due to the lack of textual information. Popular end-to-end (E2E) SLU models utilize sequence-to-sequence automatic speech recognition (ASR) models to extract textual embeddings as input to infer semantics, which, however, require computationally expensive auto-regressive decoding. In this work, we leverage self-supervised acoustic encoders fine-tuned with Connectionist Temporal Classification (CTC) to extract textual embeddings and use joint CTC and SLU losses for utterance-level SLU tasks. Experiments show that our model achieves 4% absolute improvement over the state-of-the-art (SOTA) dialogue act classification model on the DSTC2 dataset and 1.3% absolute improvement over the SOTA SLU model on the SLURP dataset.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
362,219
2102.06657
End-to-end Audio-visual Speech Recognition with Conformers
In this work, we present a hybrid CTC/Attention model based on a ResNet-18 and Convolution-augmented transformer (Conformer), that can be trained in an end-to-end manner. In particular, the audio and visual encoders learn to extract features directly from raw pixels and audio waveforms, respectively, which are then fed to conformers and then fusion takes place via a Multi-Layer Perceptron (MLP). The model learns to recognise characters using a combination of CTC and an attention mechanism. We show that end-to-end training, instead of using pre-computed visual features which is common in the literature, the use of a conformer, instead of a recurrent network, and the use of a transformer-based language model, significantly improve the performance of our model. We present results on the largest publicly available datasets for sentence-level speech recognition, Lip Reading Sentences 2 (LRS2) and Lip Reading Sentences 3 (LRS3), respectively. The results show that our proposed models raise the state-of-the-art performance by a large margin in audio-only, visual-only, and audio-visual experiments.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
219,830
2006.04973
Pixel-Wise Motion Deblurring of Thermal Videos
Uncooled microbolometers can enable robots to see in the absence of visible illumination by imaging the "heat" radiated from the scene. Despite this ability to see in the dark, these sensors suffer from significant motion blur. This has limited their application on robotic systems. As described in this paper, this motion blur arises due to the thermal inertia of each pixel. This has meant that traditional motion deblurring techniques, which rely on identifying an appropriate spatial blur kernel to perform spatial deconvolution, are unable to reliably perform motion deblurring on thermal camera images. To address this problem, this paper formulates reversing the effect of thermal inertia at a single pixel as a Least Absolute Shrinkage and Selection Operator (LASSO) problem which we can solve rapidly using a quadratic programming solver. By leveraging sparsity and a high frame rate, this pixel-wise LASSO formulation is able to recover motion deblurred frames of thermal videos without using any spatial information. To compare its quality against state-of-the-art visible camera based deblurring methods, this paper evaluated the performance of a family of pre-trained object detectors on a set of images restored by different deblurring algorithms. All evaluated object detectors performed systematically better on images restored by the proposed algorithm rather than any other tested, state-of-the-art methods.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
180,878
2001.03960
Attention Flow: End-to-End Joint Attention Estimation
This paper addresses the problem of understanding joint attention in third-person social scene videos. Joint attention is the shared gaze behaviour of two or more individuals on an object or an area of interest and has a wide range of applications such as human-computer interaction, educational assessment, treatment of patients with attention disorders, and many more. Our method, Attention Flow, learns joint attention in an end-to-end fashion by using saliency-augmented attention maps and two novel convolutional attention mechanisms that determine to select relevant features and improve joint attention localization. We compare the effect of saliency maps and attention mechanisms and report quantitative and qualitative results on the detection and localization of joint attention in the VideoCoAtt dataset, which contains complex social scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
160,113
2406.13692
Synchronous Faithfulness Monitoring for Trustworthy Retrieval-Augmented Generation
Retrieval-augmented language models (RALMs) have shown strong performance and wide applicability in knowledge-intensive tasks. However, there are significant trustworthiness concerns as RALMs are prone to generating unfaithful outputs, including baseless information or contradictions with the retrieved context. This paper proposes SynCheck, a lightweight monitor that leverages fine-grained decoding dynamics including sequence likelihood, uncertainty quantification, context influence, and semantic alignment to synchronously detect unfaithful sentences. By integrating efficiently measurable and complementary signals, SynCheck enables accurate and immediate feedback and intervention, achieving 0.85 AUROC in detecting faithfulness errors across six long-form retrieval-augmented generation tasks, improving prior best method by 4%. Leveraging SynCheck, we further introduce FOD, a faithfulness-oriented decoding algorithm guided by beam search for long-form retrieval-augmented generation. Empirical results demonstrate that FOD outperforms traditional strategies such as abstention, reranking, or contrastive decoding significantly in terms of faithfulness, achieving over 10% improvement across six datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
465,956
1602.04521
Quasi Linear Codes: Application to Point-to-Point and Multi-Terminal Source Coding
A new ensemble of structured codes is introduced. These codes are called Quasi Linear Codes (QLC). The QLC's are constructed by taking subsets of linear codes. They have a looser structure compared to linear codes and are not closed under addition. We argue that these codes provide gains in terms of achievable Rate-Distortions (RD) in different multi-terminal source coding problems. We derive the necessary covering bounds for analyzing the performance of QLC's. We then consider the Multiple-Descriptions (MD) problem, and prove through an example that the application of QLC's gives an improved achievable RD region for this problem. Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all of the previous known achievable regions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
52,156
2203.02901
A Robust Framework of Chromosome Straightening with ViT-Patch GAN
Chromosomes carry the genetic information of humans. They exhibit non-rigid and non-articulated nature with varying degrees of curvature. Chromosome straightening is an important step for subsequent karyotype construction, pathological diagnosis and cytogenetic map development. However, robust chromosome straightening remains challenging, due to the unavailability of training images, distorted chromosome details and shapes after straightening, as well as poor generalization capability. In this paper, we propose a novel architecture, ViT-Patch GAN, consisting of a self-learned motion transformation generator and a Vision Transformer-based patch (ViT-Patch) discriminator. The generator learns the motion representation of chromosomes for straightening. With the help of the ViT-Patch discriminator, the straightened chromosomes retain more shape and banding pattern details. The experimental results show that the proposed method achieves better performance on Fr\'echet Inception Distance (FID), Learned Perceptual Image Patch Similarity (LPIPS) and downstream chromosome classification accuracy, and shows excellent generalization capability on a large dataset.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
283,903
2203.12922
Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies
This paper gives the first polynomial-time algorithm for tabular Markov Decision Processes (MDP) that enjoys a regret bound \emph{independent on the planning horizon}. Specifically, we consider tabular MDP with $S$ states, $A$ actions, a planning horizon $H$, total reward bounded by $1$, and the agent plays for $K$ episodes. We design an algorithm that achieves an $O\left(\mathrm{poly}(S,A,\log K)\sqrt{K}\right)$ regret in contrast to existing bounds which either has an additional $\mathrm{polylog}(H)$ dependency~\citep{zhang2020reinforcement} or has an exponential dependency on $S$~\citep{li2021settling}. Our result relies on a sequence of new structural lemmas establishing the approximation power, stability, and concentration property of stationary policies, which can have applications in other problems related to Markov chains.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
287,448
2003.08310
On the Distribution of Minima in Intrinsic-Metric Rotation Averaging
Rotation Averaging is a non-convex optimization problem that determines orientations of a collection of cameras from their images of a 3D scene. The problem has been studied using a variety of distances and robustifiers. The intrinsic (or geodesic) distance on SO(3) is geometrically meaningful; but while some extrinsic distance-based solvers admit (conditional) guarantees of correctness, no comparable results have been found under the intrinsic metric. In this paper, we study the spatial distribution of local minima. First, we do a novel empirical study to demonstrate sharp transitions in qualitative behavior: as problems become noisier, they transition from a single (easy-to-find) dominant minimum to a cost surface filled with minima. In the second part of this paper we derive a theoretical bound for when this transition occurs. This is an extension of the results of [24], which used local convexity as a proxy to study the difficulty of problem. By recognizing the underlying quotient manifold geometry of the problem we achieve an n-fold improvement over prior work. Incidentally, our analysis also extends the prior $l_2$ work to general $l_p$ costs. Our results suggest using algebraic connectivity as an indicator of problem difficulty.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
168,694
2305.14383
A Rational Model of Dimension-reduced Human Categorization
Humans can categorize with only a few samples despite the numerous features. To mimic this ability, we propose a novel dimension-reduced category representation using a mixture of probabilistic principal component analyzers (mPPCA). Tests on the ${\tt CIFAR-10H}$ dataset demonstrate that mPPCA with only a single principal component for each category effectively predicts human categorization of natural images. We further impose a hierarchical prior on mPPCA to account for new category generalization. mPPCA captures human behavior in our experiments on images with simple size-color combinations. We also provide sufficient and necessary conditions when reducing dimensions in categorization is rational.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
367,009
2006.02460
Shallow Neural Hawkes: Non-parametric kernel estimation for Hawkes processes
The multi-dimensional Hawkes process (MHP) is a class of self- and mutually exciting point processes that find a wide range of applications -- from prediction of earthquakes to modelling of order books in high-frequency trading. This paper makes two major contributions: first, we find an unbiased estimator for the log-likelihood of the Hawkes process to enable efficient use of the stochastic gradient descent method for maximum likelihood estimation. Second, we propose a specific single-hidden-layer neural network for the non-parametric estimation of the underlying kernels of the MHP. We evaluate the proposed model on both synthetic and real datasets, and find that the method has comparable or better performance than existing estimation methods. The use of a shallow neural network ensures that we do not compromise on the interpretability of the Hawkes model, while at the same time retaining the flexibility to estimate any non-standard Hawkes excitation kernel.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
180,041
2408.08231
DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System
Benefiting from their strong reasoning capabilities, large language models (LLMs) have demonstrated remarkable performance in recommender systems. Various efforts have been made to distill knowledge from LLMs to enhance collaborative models, employing techniques like contrastive learning for representation alignment. In this work, we prove, based on information theory, that directly aligning the representations of LLMs and collaborative models is sub-optimal for enhancing downstream recommendation task performance. Consequently, the challenge of effectively aligning semantic representations between collaborative models and LLMs remains unresolved. Inspired by this viewpoint, we propose a novel plug-and-play alignment framework for LLMs and collaborative models. Specifically, we first disentangle the latent representations of both LLMs and collaborative models into specific and shared components via projection layers and representation regularization. Subsequently, we perform both global and local structure alignment on the shared representations to facilitate knowledge transfer. Additionally, we theoretically prove that the specific and shared representations contain more pertinent and less irrelevant information, which can enhance the effectiveness of downstream recommendation tasks. Extensive experimental results on benchmark datasets demonstrate that our method is superior to existing state-of-the-art algorithms.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
480,915
2502.08151
Local Differential Privacy is Not Enough: A Sample Reconstruction Attack against Federated Learning with Local Differential Privacy
Reconstruction attacks against federated learning (FL) aim to reconstruct users' samples through users' uploaded gradients. Local differential privacy (LDP) is regarded as an effective defense against various attacks, including sample reconstruction in FL, where gradients are clipped and perturbed. Existing attacks are ineffective in FL with LDP since clipped and perturbed gradients obliterate most sample information for reconstruction. Besides, existing attacks embed additional sample information into gradients to improve the attack effect and cause gradient expansion, leading to a more severe gradient clipping in FL with LDP. In this paper, we propose a sample reconstruction attack against LDP-based FL with any target model to reconstruct victims' sensitive samples, illustrating that FL with LDP is not flawless. Considering gradient expansion in reconstruction attacks and noise in LDP, the core of the proposed attack is gradient compression and reconstructed sample denoising. For gradient compression, an inference structure based on sample characteristics is presented to reduce redundant gradients against LDP. For reconstructed sample denoising, we artificially introduce zero gradients to observe noise distribution and scale confidence interval to filter the noise. Theoretical proof guarantees the effectiveness of the proposed attack. Evaluations show that the proposed attack is the only attack that reconstructs victims' training samples in LDP-based FL and has little impact on the target model's accuracy. We conclude that LDP-based FL needs further improvements to defend against sample reconstruction attacks effectively.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
532,915
2408.11523
LARR: Large Language Model Aided Real-time Scene Recommendation with Semantic Understanding
Click-Through Rate (CTR) prediction is crucial for recommender systems (RS), which aim to provide personalized recommendation services for users in areas such as food delivery and e-commerce. However, traditional RS rely on collaborative signals, which lack semantic understanding of real-time scenes. We also note that a major challenge in utilizing Large Language Models (LLMs) for practical recommendation purposes is their efficiency in dealing with long text input. To address these problems, we propose Large Language Model Aided Real-time Scene Recommendation (LARR), which adopts LLMs for semantic understanding, utilizing real-time scene information in RS without requiring the LLM to process the entire real-time scene text directly, thereby enhancing the efficiency of LLM-based CTR modeling. Specifically, recommendation domain-specific knowledge is injected into the LLM, and then the RS employs an aggregation encoder to build real-time scene information from the LLM's separate outputs. First, an LLM is continually pretrained on a corpus built from recommendation data with the aid of special tokens. Subsequently, the LLM is fine-tuned via contrastive learning on three kinds of sample construction strategies. Through this step, the LLM is transformed into a text embedding model. Finally, the LLM's separate outputs for different scene features are aggregated by an encoder, aligning to collaborative signals in the RS and enhancing the performance of the recommendation model.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
482,328
2501.14271
TLXML: Task-Level Explanation of Meta-Learning via Influence Functions
The scheme of adaptation via meta-learning is seen as an ingredient for solving the problem of data shortage or distribution shift in real-world applications, but it also brings the new risk of inappropriate updates of the model in the user environment, which increases the demand for explainability. Among the various types of XAI methods, establishing a method of explanation based on past experience in meta-learning requires special consideration due to its bi-level structure of training, which has been left unexplored. In this work, we propose influence functions for explaining meta-learning that measure the sensitivities of training tasks to adaptation and inference. We also argue that the approximation of the Hessian using the Gauss-Newton matrix resolves computational barriers peculiar to meta-learning. We demonstrate the adequacy of the method through experiments on task distinction and task distribution distinction using image classification tasks with MAML and Prototypical Network.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
527,050
1805.05047
Triclustering of Gene Expression Microarray data using Evolutionary Approach
In tri-clustering, a sub-matrix is created which exhibits highly similar behavior with respect to genes, conditions and time-points. In this technique, genes with the same expression values are discovered across some fragment of time points, under certain conditions. In this paper, triclustering using an evolutionary algorithm is implemented using a new fitness function consisting of 3D Mean Square Residue (MSR) and Least Square approximation (LSL). The primary objective is to find triclusters with minimum overlapping, low MSR, low LSL and covering almost every element of the expression matrix, thus minimizing the overall fitness value. To improve the results of the algorithm, the new fitness function is introduced to find good-quality triclusters. It is observed from experiments that triclustering using EA yielded good-quality triclusters. The experiment was implemented on the yeast Saccharomyces dataset. Index Terms - Tri-clustering, Genetic Algorithm, Mean squared residue, Volume, Weights, Least square approximation.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
97,368
1801.07073
BiographyNet: Extracting Relations Between People and Events
This paper describes BiographyNet, a digital humanities project (2012-2016) that brings together researchers from history, computational linguistics and computer science. The project uses data from the Biography Portal of the Netherlands (BPN), which contains approximately 125,000 biographies from a variety of Dutch biographical dictionaries from the eighteenth century until now, describing around 76,000 individuals. BiographyNet's aim is to strengthen the value of the portal and comparable biographical datasets for historical research, by improving the search options and the presentation of its outcome, with a historically justified NLP pipeline that works through a user evaluated demonstrator. The project's main target group are professional historians. The project therefore worked with two key concepts: "provenance" - understood as a term allowing for both historical source criticism and for references to data-management and programming interventions in digitized sources; and "perspective" - interpreted as inherent uncertainty concerning the interpretation of historical results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
88,720
2312.10857
Minimal Macro-Based Rewritings of Formal Languages: Theory and Applications in Ontology Engineering (and beyond)
In this paper, we introduce the problem of rewriting finite formal languages using syntactic macros such that the rewriting is minimal in size. We present polynomial-time algorithms to solve variants of this problem and show their correctness. To demonstrate the practical relevance of the proposed problems and the feasibility and effectiveness of our algorithms in practice, we apply these to biomedical ontologies authored in OWL. We find that such rewritings can significantly reduce the size of ontologies by capturing repeated expressions with macros. In addition to offering valuable assistance in enhancing ontology quality and comprehension, the presented approach introduces a systematic way of analysing and evaluating features of rewriting systems (including syntactic macros, templates, or other forms of rewriting rules) in terms of their influence on computational problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
416,339
1905.05822
Performance Analysis of Non-DC-Biased OFDM
The performance analysis of a novel optical modulation scheme is presented in this paper. The basic concept is to transmit signs of modulated optical orthogonal frequency division multiplexing (O-OFDM) symbols and absolute values of the symbols separately by two information carrying units: 1) indices of two light emitting diode (LED) transmitters that represent positive and negative signs separately; and 2) optical intensity symbols that carry the absolute values of signals. The new approach, referred to as non-DC-biased OFDM (NDC-OFDM), uses the optical spatial modulation (OSM) technique to eliminate the effect of the clipping distortion in DC-biased optical OFDM (DCO-OFDM). In addition, it can achieve similar advantages as the conventional unipolar modulation scheme, asymmetrically clipped optical OFDM (ACO-OFDM), without using additional subcarriers. In this paper, the analytical BER performance is compared with the Monte Carlo result in order to prove the reliability of the new method. Moreover, the practical BER performance of NDC-OFDM with DCO-OFDM and ACO-OFDM is compared for different constellation sizes to verify the improvement of NDC-OFDM on the spectral and power efficiencies.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
130,819
2403.12297
Leveraging Large Language Models to Extract Information on Substance Use Disorder Severity from Clinical Notes: A Zero-shot Learning Approach
Substance use disorder (SUD) poses a major concern due to its detrimental effects on health and society. SUD identification and treatment depend on a variety of factors such as severity, co-determinants (e.g., withdrawal symptoms), and social determinants of health. Existing diagnostic coding systems used by American insurance providers, like the International Classification of Diseases (ICD-10), lack granularity for certain diagnoses, but clinicians will add this granularity (as that found within the Diagnostic and Statistical Manual of Mental Disorders classification or DSM-5) as supplemental unstructured text in clinical notes. Traditional natural language processing (NLP) methods face limitations in accurately parsing such diverse clinical language. Large Language Models (LLMs) offer promise in overcoming these challenges by adapting to diverse language patterns. This study investigates the application of LLMs for extracting severity-related information for various SUD diagnoses from clinical notes. We propose a workflow employing zero-shot learning of LLMs with carefully crafted prompts and post-processing techniques. Through experimentation with Flan-T5, an open-source LLM, we demonstrate its superior recall compared to the rule-based approach. Focusing on 11 categories of SUD diagnoses, we show the effectiveness of LLMs in extracting severity information, contributing to improved risk assessment and treatment planning for SUD patients.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
439,105
2410.17960
Zeitenwenden: Detecting changes in the German political discourse
From a monarchy to a democracy, to a dictatorship and back to a democracy -- the German political landscape has been constantly changing ever since the first German national state was formed in 1871. After World War II, the Federal Republic of Germany was formed in 1949. Since then every plenary session of the German Bundestag was logged and even has been digitized over the course of the last few years. We analyze these texts using a time series variant of the topic model LDA to investigate which events had a lasting effect on the political discourse and how the political topics changed over time. This allows us to detect changes in word frequency (and thus key discussion points) in political discourse.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
501,685
1811.08837
Recognizing Disguised Faces in the Wild
Research in face recognition has seen tremendous growth over the past couple of decades. Beginning from algorithms capable of performing recognition in constrained environments, the current face recognition systems achieve very high accuracies on large-scale unconstrained face datasets. While upcoming algorithms continue to achieve improved performance, a majority of the face recognition systems are susceptible to failure under disguise variations, one of the most challenging covariates of face recognition. Most of the existing disguise datasets contain images with limited variations, often captured in controlled settings. This does not simulate a real world scenario, where both intentional and unintentional unconstrained disguises are encountered by a face recognition system. In this paper, a novel Disguised Faces in the Wild (DFW) dataset is proposed which contains over 11000 images of 1000 identities with different types of disguise accessories. The dataset is collected from the Internet, resulting in unconstrained face images similar to real world settings. This is the first-of-a-kind dataset with the availability of impersonator and genuine obfuscated face images for each subject. The proposed dataset has been analyzed in terms of three levels of difficulty: (i) easy, (ii) medium, and (iii) hard in order to showcase the challenging nature of the problem. It is our view that the research community can greatly benefit from the DFW dataset in terms of developing algorithms robust to such adversaries. The proposed dataset was released as part of the First International Workshop and Competition on Disguised Faces in the Wild at CVPR, 2018. This paper presents the DFW dataset in detail, including the evaluation protocols, baseline results, performance analysis of the submissions received as part of the competition, and three levels of difficulties of the DFW challenge dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
114,135
1902.08318
Parsing Gigabytes of JSON per Second
JavaScript Object Notation or JSON is a ubiquitous data exchange format on the Web. Ingesting JSON documents can become a performance bottleneck due to the sheer volume of data. We are thus motivated to make JSON parsing as fast as possible. Despite the maturity of the problem of JSON parsing, we show that substantial speedups are possible. We present the first standard-compliant JSON parser to process gigabytes of data per second on a single core, using commodity processors. We can use a quarter or fewer instructions than a state-of-the-art reference parser like RapidJSON. Unlike other validating parsers, our software (simdjson) makes extensive use of Single Instruction, Multiple Data (SIMD) instructions. To ensure reproducibility, simdjson is freely available as open-source software under a liberal license.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
122,167
1605.01033
Universal Multiparty Data Exchange and Secret Key Agreement
Multiple parties observing correlated data seek to recover each other's data and attain omniscience. To that end, they communicate interactively over a noiseless broadcast channel - each bit transmitted over this channel is received by all the parties. We give a universal interactive communication protocol, termed the recursive data exchange protocol (RDE), which attains omniscience for any sequence of data observed by the parties and provide an individual sequence guarantee of performance. As a by-product, for observations of length $n$, we show the universal rate optimality of RDE up to an $\mathcal{O}\left(n^{-\frac 12}\sqrt{\log n}\right)$ term in a generative setting where the data sequence is independent and identically distributed (in time). Furthermore, drawing on the duality between omniscience and secret key agreement due to Csiszar and Narayan, we obtain a universal protocol for generating a multiparty secret key of rate at most $\mathcal{O}\left(n^{-\frac 12}\sqrt{\log n}\right)$ less than the maximum rate possible. A key feature of RDE is its recursive structure whereby when a subset $A$ of parties recover each-other's data, the rates appear as if the parties have been executing the protocol in an alternative model where the parties in $A$ are collocated.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,422
2004.13973
Deep Transfer Learning For Plant Center Localization
Plant phenotyping focuses on the measurement of plant characteristics throughout the growing season, typically with the goal of evaluating genotypes for plant breeding. Estimating plant location is important for identifying genotypes which have low emergence, which is also related to the environment and management practices such as fertilizer applications. The goal of this paper is to investigate methods that estimate plant locations for a field-based crop using RGB aerial images captured using Unmanned Aerial Vehicles (UAVs). Deep learning approaches provide promising capability for locating plants observed in RGB images, but they require large quantities of labeled data (ground truth) for training. Using a deep learning architecture fine-tuned on a single field or a single type of crop on fields in other geographic areas or with other crops may not yield good results. The problem of generating ground truth for each new field is labor-intensive and tedious. In this paper, we propose a method for estimating plant centers by transferring an existing model to a new scenario using limited ground truth data. We describe the use of transfer learning using a model fine-tuned for a single field or a single type of plant on a varied set of similar crops and fields. We show that transfer learning provides promising results for detecting plant locations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
174,737
2405.04371
Community Detection for Heterogeneous Multiple Social Networks
The community plays a crucial role in understanding user behavior and network characteristics in social networks. Some users can use multiple social networks at once for a variety of objectives. These users are called overlapping users who bridge different social networks. Detecting communities across multiple social networks is vital for interaction mining, information diffusion, and behavior migration analysis among networks. This paper presents a community detection method based on nonnegative matrix tri-factorization for multiple heterogeneous social networks, which formulates a common consensus matrix to represent the global fused community. Specifically, the proposed method involves creating adjacency matrices based on network structure and content similarity, followed by alignment matrices which distinguish overlapping users in different social networks. With the generated alignment matrices, the method could enhance the fusion degree of the global community by detecting overlapping user communities across networks. The effectiveness of the proposed method is evaluated with new metrics on Twitter, Instagram, and Tumblr datasets. The results of the experiments demonstrate its superior performance in terms of community quality and community fusion.
false
false
false
true
true
false
false
false
false
false
false
false
false
true
false
false
false
false
452,547
2308.15119
AI-Based Facial Emotion Recognition Solutions for Education: A Study of Teacher-User and Other Categories
Existing information on AI-based facial emotion recognition (FER) is not easily comprehensible by those outside the field of computer science, requiring cross-disciplinary effort to determine a categorisation framework that promotes the understanding of this technology, and its impact on users. Most proponents classify FER in terms of methodology, implementation and analysis; relatively few by its application in education; and none by its users. This paper is concerned primarily with (potential) teacher-users of FER tools for education. It proposes a three-part classification of these teachers, by orientation, condition and preference, based on a classical taxonomy of affective educational objectives, and related theories. It also compiles and organises the types of FER solutions found in or inferred from the literature into "technology" and "applications" categories, as a prerequisite for structuring the proposed "teacher-user" category. This work has implications for proponents', critics', and users' understanding of the relationship between teachers and FER.
true
false
false
false
true
false
false
false
false
false
false
true
false
true
false
false
false
false
388,574
2304.10095
Transmit Power Minimization for STAR-RIS Empowered Symbiotic Radio Communications
In this paper, we propose a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) empowered transmission scheme for symbiotic radio (SR) systems to provide more flexibility for network deployment and enhance system performance. The STAR-RIS is utilized not only to beam the primary signals from the base station (BS) towards multiple primary users on the same side of the STAR-RIS, but also to achieve the secondary transmission to the secondary users on the other side. We consider both the broadcasting signal model and the unicasting signal model at the BS. For each model, we aim to minimize the transmit power of the BS by designing the active beamforming and simultaneous reflection and transmission coefficients under the practical phase correlation constraint. To address the challenge of solving the formulated problem, we propose a block coordinate descent based algorithm with the semidefinite relaxation, penalty dual decomposition and successive convex approximation methods, which decomposes the original problem into one sub-problem about active beamforming and the other sub-problem about simultaneous reflection and transmission coefficients, and iteratively solves them until convergence is achieved. Numerical results indicate that the proposed scheme can reduce the transmit power by up to 150.6% compared to the backscattering device enabled scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
359,290
1410.1805
Performance Analysis of Wireless Powered Communication with Finite/Infinite Energy Storage
In this paper, we consider an energy harvesting (EH) node which harvests energy from a radio frequency (RF) signal broadcasted by an access point (AP) in the downlink (DL). The node stores the harvested energy in an energy buffer and uses the stored energy to transmit data to the AP in the uplink (UL). We consider a simple transmission policy, which accounts for the fact that in practice the EH node may not have knowledge of the EH profile nor of the UL channel state information. In particular, in each time slot, the EH node transmits with either a constant desired power or a lower power if not enough energy is available in its energy buffer. For this simple policy, we use the theory of discrete-time continuous-state Markov chains to analyze the limiting distribution of the stored energy for finite- and infinite-size energy buffers. Moreover, we take into account imperfections of the energy buffer and the circuit power consumption of the EH node. For a Rayleigh fading DL channel, we provide the limiting distribution of the energy buffer content in closed form. In addition, we analyze the average error rate and the outage probability of a Rayleigh faded UL channel and show that the diversity order is not affected by the finite capacity of the energy buffer. Our results reveal that the optimal desired transmit power by the EH node is always less than the average harvested power and increases with the capacity of the energy buffer.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,568
2209.14406
Biological connectomes as a representation for the architecture of artificial neural networks
Grand efforts in neuroscience are working toward mapping the connectomes of many new species, including the near completion of that of Drosophila melanogaster. It is important to ask whether these models could benefit artificial intelligence. In this work we ask two fundamental questions: (1) where and when biological connectomes can provide use in machine learning, and (2) which design principles are necessary for extracting a good representation of the connectome. Toward this end, we translate the motor circuit of the C. Elegans nematode into artificial neural networks at varying levels of biophysical realism and evaluate the outcome of training these networks on motor and non-motor behavioral tasks. We demonstrate that biophysical realism need not be upheld to attain the advantages of using biological circuits. We also establish that, even if the exact wiring diagram is not retained, the architectural statistics provide a valuable prior. Finally, we show that while the C. Elegans locomotion circuit provides a powerful inductive bias on locomotion problems, its structure may hinder performance on tasks unrelated to locomotion such as visual classification problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
320,230
2409.05072
A General Framework for Clustering and Distribution Matching with Bandit Feedback
We develop a general framework for clustering and distribution matching problems with bandit feedback. We consider a $K$-armed bandit model where some subset of $K$ arms is partitioned into $M$ groups. Within each group, the random variable associated to each arm follows the same distribution on a finite alphabet. At each time step, the decision maker pulls an arm and observes its outcome from the random variable associated to that arm. Subsequent arm pulls depend on the history of arm pulls and their outcomes. The decision maker has no knowledge of the distributions of the arms or the underlying partitions. The task is to devise an online algorithm to learn the underlying partition of arms with the least number of arm pulls on average and with an error probability not exceeding a pre-determined value~$\delta$. Several existing problems fall under our general framework, including finding $M$ pairs of arms, odd arm identification, and $N$-ary clustering of $K$ arms. We derive a non-asymptotic lower bound on the average number of arm pulls for any online algorithm with an error probability not exceeding $\delta$. Furthermore, we develop a computationally-efficient online algorithm based on the Track-and-Stop method and Frank--Wolfe algorithm, and show that the average number of arm pulls of our algorithm asymptotically matches that of the lower bound. Our refined analysis also uncovers a novel bound on the speed at which the average number of arm pulls of our algorithm converges to the fundamental limit as $\delta$ vanishes.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
486,627
1804.00567
Asymptotic normality and analysis of variance of log-likelihood ratios in spiked random matrix models
The present manuscript studies signal detection by likelihood ratio tests in a number of spiked random matrix models, including but not limited to Gaussian mixtures and spiked Wishart covariance matrices. We work directly with multi-spiked cases in these models and with flexible priors on the signal component that allow dependence across spikes. We derive asymptotic normality for the log-likelihood ratios when the signal-to-noise ratios are below certain thresholds. In addition, we show that the variances of the log-likelihood ratios can be asymptotically decomposed as the sums of those of a collection of statistics which we call bipartite signed cycles.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
94,070
1508.00213
Distributed Output Regulation for a Class of Nonlinear Multi-Agent Systems with Unknown-Input Leaders
In this paper, a distributed output regulation problem is formulated for a class of uncertain nonlinear multi-agent systems subject to local disturbances. The formulation is given to study a leader-following problem when the leader contains unknown inputs and its dynamics is different from those of the followers. Based on the conventional output regulation assumptions and graph theory, distributed feedback controllers are constructed to make the agents globally or semi-globally follow the uncertain leader even when the bound of the leader's inputs is unknown to the followers.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
45,647
1904.07087
Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth
Deep learning-based, single-view depth estimation methods have recently shown highly promising results. However, such methods ignore one of the most important features for determining depth in the human vision system, which is motion. We propose a learning-based, multi-view dense depth map and odometry estimation method that uses Recurrent Neural Networks (RNN) and trains utilizing multi-view image reprojection and forward-backward flow-consistency losses. Our model can be trained in a supervised or even unsupervised mode. It is designed for depth and visual odometry estimation from video where the input frames are temporally correlated. However, it also generalizes to single-view depth estimation. Our method produces superior results to the state-of-the-art approaches for single-view and multi-view learning-based depth estimation on the KITTI driving dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,704
1908.00862
Adversarial Camera Alignment Network for Unsupervised Cross-camera Person Re-identification
In person re-identification (Re-ID), supervised methods usually need a large amount of expensive label information, while unsupervised ones are still unable to deliver satisfactory identification performance. In this paper, we introduce a novel person Re-ID task called unsupervised cross-camera person Re-ID, which only needs the within-camera (intra-camera) label information but not cross-camera (inter-camera) labels which are more expensive to obtain. In real-world applications, the intra-camera label information can be easily captured by tracking algorithms or few manual annotations. In this situation, the main challenge becomes the distribution discrepancy across different camera views, caused by the various body pose, occlusion, image resolution, illumination conditions, and background noises in different cameras. To address this challenge, we propose a novel Adversarial Camera Alignment Network (ACAN) for unsupervised cross-camera person Re-ID. It consists of the camera-alignment task and the supervised within-camera learning task. To achieve the camera alignment, we develop a Multi-Camera Adversarial Learning (MCAL) to map images of different cameras into a shared subspace. Particularly, we investigate two different schemes, including the existing GRL (i.e., gradient reversal layer) scheme and the proposed scheme called "other camera equiprobability" (OCE), to conduct the multi-camera adversarial task. Based on this shared subspace, we then leverage the within-camera labels to train the network. Extensive experiments on five large-scale datasets demonstrate the superiority of ACAN over multiple state-of-the-art unsupervised methods that take advantage of labeled source domains and generated images by GAN-based models. In particular, we verify that the proposed multi-camera adversarial task contributes significantly to the improvement.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
140,615
2107.07224
StyleVideoGAN: A Temporal Generative Model using a Pretrained StyleGAN
Generative adversarial models (GANs) continue to produce advances in terms of the visual quality of still images, as well as the learning of temporal correlations. However, few works manage to combine these two interesting capabilities for the synthesis of video content: Most methods require an extensive training dataset to learn temporal correlations, while being rather limited in the resolution and visual quality of their output. We present a novel approach to the video synthesis problem that helps to greatly improve visual quality and drastically reduce the amount of training data and resources necessary for generating videos. Our formulation separates the spatial domain, in which individual frames are synthesized, from the temporal domain, in which motion is generated. For the spatial domain we use a pre-trained StyleGAN network, the latent space of which allows control over the appearance of the objects it was trained for. The expressive power of this model allows us to embed our training videos in the StyleGAN latent space. Our temporal architecture is then trained not on sequences of RGB frames, but on sequences of StyleGAN latent codes. The advantageous properties of the StyleGAN space simplify the discovery of temporal correlations. We demonstrate that it suffices to train our temporal architecture on only 10 minutes of footage of 1 subject for about 6 hours. After training, our model can not only generate new portrait videos for the training subject, but also for any random subject which can be embedded in the StyleGAN space.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
246,351
2308.03956
Fixed Inter-Neuron Covariability Induces Adversarial Robustness
The vulnerability to adversarial perturbations is a major flaw of Deep Neural Networks (DNNs) that raises questions about their reliability in real-world scenarios. On the other hand, human perception, which DNNs are supposed to emulate, is highly robust to such perturbations, indicating that there may be certain features of the human perception that make it robust but are not represented in the current class of DNNs. One such feature is that the activity of biological neurons is correlated and the structure of this correlation tends to be rather rigid over long spans of time, even if it hampers performance and learning. We hypothesize that integrating such constraints on the activations of a DNN would improve its adversarial robustness, and, to test this hypothesis, we have developed the Self-Consistent Activation (SCA) layer, which comprises neurons whose activations are consistent with each other, as they conform to a fixed, but learned, covariability pattern. When evaluated on image and sound recognition tasks, the models with a SCA layer achieved high accuracy, and exhibited significantly greater robustness than multi-layer perceptron models to state-of-the-art Auto-PGD adversarial attacks \textit{without being trained on adversarially perturbed data}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
384,226
1709.02884
Degrees of Freedom of the Broadcast Channel with Hybrid CSI at Transmitter and Receivers
In general, the different links of a broadcast channel may experience different fading dynamics and, potentially, unequal or hybrid channel state information (CSI) conditions. The faster the fading and the shorter the fading block length, the more often the link needs to be trained and estimated at the receiver, and the more likely that CSI is stale or unavailable at the transmitter. Disparity of link fading dynamics in the presence of CSI limitations can be modeled by a multi-user broadcast channel with both non-identical link fading block lengths as well as dissimilar link CSIR/CSIT conditions. This paper investigates a MISO broadcast channel where some receivers experience longer coherence intervals (static receivers) and have CSIR, while some other receivers experience shorter coherence intervals (dynamic receivers) and do not enjoy free CSIR. We consider a variety of CSIT conditions for the above mentioned model, including no CSIT, delayed CSIT, or hybrid CSIT. To investigate the degrees of freedom region, we employ interference alignment and beamforming along with a product superposition that allows simultaneous but non-contaminating transmission of pilots and data to different receivers. Outer bounds employ the extremal entropy inequality as well as a bounding of the performance of a discrete memoryless multiuser multilevel broadcast channel. For several cases, inner and outer bounds are established that either partially meet, or the gap diminishes with increasing coherence times.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
80,348
1402.2941
Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins; making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprint as a reliable and promising biometric. All source codes are publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
30,827
cs/0504089
Universal Similarity
We survey a new area of parameter-free similarity distance measures useful in data-mining, pattern recognition, learning and automatic semantics extraction. Given a family of distances on a set of objects, a distance is universal up to a certain precision for that family if it minorizes every distance in the family between every two objects in the set, up to the stated precision (we do not require the universal distance to be an element of the family). We consider similarity distances for two types of objects: literal objects that as such contain all of their meaning, like genomes or books, and names for objects. The latter may have literal embodiments like the first type, but may also be abstract like ``red'' or ``christianity.'' For the first type we consider a family of computable distance measures corresponding to parameters expressing similarity according to particular features between pairs of literal objects. For the second type we consider similarity distances generated by web users corresponding to particular semantic relations between (names for) the designated objects. For both families we give universal similarity distance measures, incorporating all particular distance measures in the family. In the first case the universal distance is based on compression and in the second case it is based on Google page counts related to search terms. In both cases experiments on a massive scale give evidence of the viability of the approaches.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
538,691
2006.09540
COLREG-Compliant Collision Avoidance for Unmanned Surface Vehicle using Deep Reinforcement Learning
Path Following and Collision Avoidance, be it for unmanned surface vessels or other autonomous vehicles, are two fundamental guidance problems in robotics. For many decades, they have been subject to academic study, leading to a vast number of proposed approaches. However, they have mostly been treated as separate problems, and have typically relied on non-linear first-principles models with parameters that can only be determined experimentally. The rise of Deep Reinforcement Learning (DRL) in recent years suggests an alternative approach: end-to-end learning of the optimal guidance policy from scratch by means of a trial-and-error based approach. In this article, we explore the potential of Proximal Policy Optimization (PPO), a DRL algorithm with demonstrated state-of-the-art performance on Continuous Control tasks, when applied to the dual-objective problem of controlling an underactuated Autonomous Surface Vehicle in a COLREGs compliant manner such that it follows an a priori known desired path while avoiding collisions with other vessels along the way. Based on high-fidelity elevation and AIS tracking data from the Trondheim Fjord, an inlet of the Norwegian sea, we evaluate the trained agent's performance in challenging, dynamic real-world scenarios where the ultimate success of the agent rests upon its ability to navigate non-uniform marine terrain while handling challenging, but realistic vessel encounters.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
182,577
2101.07172
HarDNet-MSEG: A Simple Encoder-Decoder Polyp Segmentation Neural Network that Achieves over 0.9 Mean Dice and 86 FPS
We propose a new convolution neural network called HarDNet-MSEG for polyp segmentation. It achieves SOTA in both accuracy and inference speed on five popular datasets. For Kvasir-SEG, HarDNet-MSEG delivers 0.904 mean Dice running at 86.7 FPS on a GeForce RTX 2080 Ti GPU. It consists of a backbone and a decoder. The backbone is a low memory traffic CNN called HarDNet68, which has been successfully applied to various CV tasks including image classification, object detection, multi-object tracking and semantic segmentation, etc. The decoder part is inspired by the Cascaded Partial Decoder, known for fast and accurate salient object detection. We have evaluated HarDNet-MSEG using those five popular datasets. The code and all experiment details are available at Github. https://github.com/james128333/HarDNet-MSEG
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
215,956
2302.06561
Gait design for limbless obstacle aided locomotion using geometric mechanics
Limbless robots have the potential to maneuver through cluttered environments that conventional robots cannot traverse. As illustrated in their biological counterparts such as snakes and nematodes, limbless locomotors can benefit from interactions with obstacles, yet such obstacle-aided locomotion (OAL) requires properly coordinated high-level self-deformation patterns (gait templates) as well as low-level body adaptation to environments. Most prior work on OAL utilized stereotyped traveling-wave gait templates and relied on local body deformations (e.g., passive body mechanics or decentralized controller parameter adaptation based on force feedback) for obstacle navigation, while gait template design for OAL remains less studied. In this paper, we explore novel gait templates for OAL based on tools derived from geometric mechanics (GM), which thus far has been limited to homogeneous environments. Here, we expand the scope of GM to obstacle-rich environments. Specifically, we establish a model that maps the presence of an obstacle to directional constraints in optimization. In doing so, we identify novel gait templates suitable for sparsely and densely distributed obstacle-rich environments respectively. Open-loop robophysical experiments verify the effectiveness of our identified OAL gaits in obstacle-rich environments. We posit that when such OAL gait templates are augmented with appropriate sensing and feedback controls, limbless locomotors will gain robust function in obstacle rich environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
345,449
1907.01882
An Experimental Evaluation of Large Scale GBDT Systems
Gradient boosting decision tree (GBDT) is a widely-used machine learning algorithm in both data analytic competitions and real-world industrial applications. Further, driven by the rapid increase in data volume, efforts have been made to train GBDT in a distributed setting to support large-scale workloads. However, we find it surprising that the existing systems manage the training dataset in different ways, but none of them have studied the impact of data management. To that end, this paper aims to study the pros and cons of different data management methods regarding the performance of distributed GBDT. We first introduce a quadrant categorization of data management policies based on data partitioning and data storage. Then we conduct an in-depth systematic analysis and summarize the advantageous scenarios of the quadrants. Based on the analysis, we further propose a novel distributed GBDT system named Vero, which adopts the unexplored composition of vertical partitioning and row-store and suits many large-scale cases. To validate our analysis empirically, we implement different quadrants in the same code base and compare them under extensive workloads, and finally compare Vero with other state-of-the-art systems over a wide range of datasets. Our theoretical and experimental results provide a guideline on choosing a proper data management policy for a given workload.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
137,466
2410.20451
BlinkVision: A Benchmark for Optical Flow, Scene Flow and Point Tracking Estimation using RGB Frames and Events
Recent advances in event-based vision suggest that these systems complement traditional cameras by providing continuous observation without frame rate limitations and a high dynamic range, making them well-suited for correspondence tasks such as optical flow and point tracking. However, there is still a lack of comprehensive benchmarks for correspondence tasks that include both event data and images. To address this gap, we propose BlinkVision, a large-scale and diverse benchmark with multiple modalities and dense correspondence annotations. BlinkVision offers several valuable features: 1) Rich modalities: It includes both event data and RGB images. 2) Extensive annotations: It provides dense per-pixel annotations covering optical flow, scene flow, and point tracking. 3) Large vocabulary: It contains 410 everyday categories, sharing common classes with popular 2D and 3D datasets like LVIS and ShapeNet. 4) Naturalistic: It delivers photorealistic data and covers various naturalistic factors, such as camera shake and deformation. BlinkVision enables extensive benchmarks on three types of correspondence tasks (optical flow, point tracking, and scene flow estimation) for both image-based and event-based methods, offering new observations, practices, and insights for future research. The benchmark website is https://www.blinkvision.net/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
502,820
2408.16969
Point Neuron Learning: A New Physics-Informed Neural Network Architecture
Machine learning and neural networks have advanced numerous research domains, but challenges such as large training data requirements and inconsistent model performance hinder their application in certain scientific problems. To overcome these challenges, researchers have investigated integrating physics principles into machine learning models, mainly through: (i) physics-guided loss functions, generally termed as physics-informed neural networks, and (ii) physics-guided architectural design. While both approaches have demonstrated success across multiple scientific disciplines, they have limitations including being trapped in a local minimum, poor interpretability, and restricted generalizability. This paper proposes a new physics-informed neural network (PINN) architecture that combines the strengths of both approaches by embedding the fundamental solution of the wave equation into the network architecture, enabling the learned model to strictly satisfy the wave equation. The proposed point neuron learning method can model an arbitrary sound field based on microphone observations without any dataset. Compared to other PINN methods, our approach directly processes complex numbers and offers better interpretability and generalizability. We evaluate the versatility of the proposed architecture by a sound field reconstruction problem in a reverberant environment. Results indicate that the point neuron method outperforms two competing methods and can efficiently handle noisy environments with sparse microphone observations.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
484,525
1105.4385
b-Bit Minwise Hashing for Large-Scale Linear SVM
In this paper, we propose to (seamlessly) integrate b-bit minwise hashing with linear SVM to substantially improve the training (and testing) efficiency using much smaller memory, with essentially no loss of accuracy. Theoretically, we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit minwise hashing matrix are all positive definite matrices (kernels). Interestingly, our proof for the positive definiteness of the b-bit minwise hashing kernel naturally suggests a simple strategy to integrate b-bit hashing with linear SVM. Our technique is particularly useful when the data cannot fit in memory, which is an increasingly critical issue in large-scale machine learning. Our preliminary experimental results on a publicly available webspam dataset (350K samples and 16 million dimensions) verified the effectiveness of our algorithm. For example, the training time was reduced to merely a few seconds. In addition, our technique can be easily extended to many other linear and nonlinear machine learning applications such as logistic regression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
10,460
2312.13311
Unlocking Deep Learning: A BP-Free Approach for Parallel Block-Wise Training of Neural Networks
Backpropagation (BP) has been a successful optimization technique for deep learning models. However, its limitations, such as backward- and update-locking, and its biological implausibility, hinder the concurrent updating of layers and do not mimic the local learning processes observed in the human brain. To address these issues, recent research has suggested using local error signals to asynchronously train network blocks. However, this approach often involves extensive trial-and-error iterations to determine the best configuration for local training. This includes decisions on how to decouple network blocks and which auxiliary networks to use for each block. In our work, we introduce a novel BP-free approach: a block-wise BP-free (BWBPF) neural network that leverages local error signals to optimize distinct sub-neural networks separately, where the global loss is only responsible for updating the output layer. The local error signals used in the BP-free model can be computed in parallel, enabling a potential speed-up in the weight update process through parallel implementation. Our experimental results consistently show that this approach can identify transferable decoupled architectures for VGG and ResNet variations, outperforming models trained with end-to-end backpropagation and other state-of-the-art block-wise learning techniques on datasets such as CIFAR-10 and Tiny-ImageNet. The code is released at https://github.com/Belis0811/BWBPF.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
417,272
1609.08201
Robust Time-Series Retrieval Using Probabilistic Adaptive Segmental Alignment
Traditional pairwise sequence alignment is based on matching individual samples from two sequences, under time monotonicity constraints. However, in many application settings matching subsequences (segments) instead of individual samples may bring in additional robustness to noise or local non-causal perturbations. This paper presents an approach to segmental sequence alignment that jointly segments and aligns two sequences, generalizing the traditional per-sample alignment. To accomplish this task, we introduce a distance metric between segments based on average pairwise distances and then present a modified pair-HMM (PHMM) that incorporates the proposed distance metric to solve the joint segmentation and alignment task. We also propose a relaxation to our model that improves the computational efficiency of the generic segmental PHMM. Our results demonstrate that this new measure of sequence similarity can lead to improved classification performance, while being resilient to noise, on a variety of sequence retrieval problems, from EEG to motion sequence classification.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
61,547
2302.04527
Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer
The number of traffic accidents has been continuously increasing in recent years worldwide. Many accidents are caused by distracted drivers, who take their attention away from driving. Motivated by the success of Convolutional Neural Networks (CNNs) in computer vision, many researchers developed CNN-based algorithms to recognize distracted driving from a dashcam and warn the driver against unsafe behaviors. However, current models have too many parameters, which is unfeasible for vehicle-mounted computing. This work proposes a novel knowledge-distillation-based framework to solve this problem. The proposed framework first constructs a high-performance teacher network by progressively strengthening the robustness to illumination changes from shallow to deep layers of a CNN. Then, the teacher network is used to guide the architecture searching process of a student network through knowledge distillation. After that, we use the teacher network again to transfer knowledge to the student network by knowledge distillation. Experimental results on the Statefarm Distracted Driver Detection Dataset and AUC Distracted Driver Dataset show that the proposed approach is highly effective for recognizing distracted driving behaviors from photos: (1) the teacher network's accuracy surpasses the previous best accuracy; (2) the student network achieves very high accuracy with only 0.42M parameters (around 55% of the previous most lightweight model). Furthermore, the student network architecture can be extended to a spatial-temporal 3D CNN for recognizing distracted driving from video clips. The 3D student network largely surpasses the previous best accuracy with only 2.03M parameters on the Drive&Act Dataset. The source code is available at https://github.com/Dichao-Liu/Lightweight_Distracted_Driver_Recognition_with_Distillation-Based_NAS_and_Knowledge_Transfer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
344,737
1509.09089
Moving Object Detection in Video Using Saliency Map and Subspace Learning
Moving object detection is a key to intelligent video analysis. On the one hand, what moves is not only interesting objects but also noise and cluttered background. On the other hand, moving objects without rich texture are prone not to be detected. So there are undesirable false alarms and missed alarms in many algorithms of moving object detection. To reduce the false alarms and missed alarms, in this paper, we propose to incorporate a saliency map into an incremental subspace analysis framework where the saliency map makes the estimated background less likely than the foreground (i.e., moving objects) to contain salient objects. The proposed objective function systematically takes into account the properties of sparsity, low-rank, connectivity, and saliency. An alternating minimization algorithm is proposed to seek the optimal solutions. Experimental results on the Perception Test Images Sequences demonstrate that the proposed method is effective in reducing false alarms and missed alarms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
47,446
1012.1643
Process Makna - A Semantic Wiki for Scientific Workflows
Virtual e-Science infrastructures supporting Web-based scientific workflows are an example of knowledge-intensive collaborative and weakly-structured processes where the interaction with the human scientists during process execution plays a central role. In this paper we propose the lightweight dynamic user-friendly interaction with humans during execution of scientific workflows via the low-barrier approach of Semantic Wikis as an intuitive interface for non-technical scientists. Our Process Makna Semantic Wiki system is a novel combination of a business process management system adapted for scientific workflows with a Corporate Semantic Web Wiki user interface supporting knowledge intensive human interaction tasks during scientific workflow execution.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
8,456
1512.08050
Large deviation, Basic Information Theory for Wireless Sensor Networks
In this article, we prove Shannon-MacMillan-Breiman Theorem for Wireless Sensor Networks modelled as coloured geometric random graphs. For large $n,$ we show that a Wireless Sensor Network consisting of $n$ sensors in $[0,1]^d$ connected by an average number of links of order $n\log n $ can be coded by about $[n(\log n )^2\pi^{d/2}/(d/2)!]\,\mathcal{H}$ bits, where $\mathcal{H}$ is an explicitly defined entropy. In the process, we derive a joint large deviation principle (LDP) for the \emph{empirical sensor measure} and \emph{the empirical link measure} of coloured random geometric graph models.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
50,480
2212.12533
Adaptive Risk-Aware Bidding with Budget Constraint in Display Advertising
Real-time bidding (RTB) has become a major paradigm of display advertising. Each ad impression generated from a user visit is auctioned in real time, where the demand-side platform (DSP) automatically provides a bid price, usually relying on ad impression value estimation and optimal bid price determination. However, current bid strategies overlook the large randomness of user behaviors (e.g., click) and the cost uncertainty caused by the auction competition. In this work, we explicitly factor in the uncertainty of estimated ad impression values and model the risk preference of a DSP under a specific state and market environment via a sequential decision process. Specifically, we propose a novel adaptive risk-aware bidding algorithm with budget constraint via reinforcement learning, which is the first to simultaneously consider estimation uncertainty and the dynamic risk tendency of a DSP. We theoretically unveil the intrinsic relation between the uncertainty and the risk tendency based on value at risk (VaR). Consequently, we propose two instantiations to model risk tendency, including an expert knowledge-based formulation embracing three essential properties and an adaptive learning method based on self-supervised reinforcement learning. We conduct extensive experiments on public datasets and show that the proposed framework outperforms state-of-the-art methods in practical settings.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
338,056
2410.16154
Unsupervised Replay Strategies for Continual Learning with Limited Data
Artificial neural networks (ANNs) show limited performance with scarce or imbalanced training data and face challenges with continuous learning, such as forgetting previously learned data after training on new tasks. In contrast, the human brain can learn continuously and from just a few examples. This research explores the impact of 'sleep', an unsupervised phase incorporating stochastic activation with local Hebbian learning rules, on ANNs trained incrementally with limited and imbalanced datasets, specifically MNIST and Fashion MNIST. We discovered that introducing a sleep phase significantly enhanced accuracy in models trained with limited data. When a few tasks were trained sequentially, sleep replay not only rescued previously learned information that had been catastrophically forgotten following new task training but often enhanced performance in prior tasks, especially those trained with limited data. This study highlights the multifaceted role of sleep replay in augmenting learning efficiency and facilitating continual learning in ANNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,901
2112.11669
Dynamic Combination of Heterogeneous Models for Hierarchical Time Series
We introduce a framework to dynamically combine heterogeneous models called \texttt{DYCHEM}, which forecasts a set of time series that are related through an aggregation hierarchy. Different types of forecasting models can be employed as individual ``experts'' so that each model is tailored to the nature of the corresponding time series. \texttt{DYCHEM} learns hierarchical structures during the training stage to help generalize better across all the time series being modeled and also mitigates coherency issues that arise due to constraints imposed by the hierarchy. To improve the reliability of forecasts, we construct quantile estimations based on the point forecasts obtained from combined heterogeneous models. The resulting quantile forecasts are coherent and independent of the choice of forecasting models. We conduct a comprehensive evaluation of both point and quantile forecasts for hierarchical time series (HTS), including public data and user records from a large financial software company. In general, our method is robust, adaptive to datasets with different properties, and highly configurable and efficient for large-scale forecasting pipelines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
272,767
1911.07344
ELoPE: Fine-Grained Visual Classification with Efficient Localization, Pooling and Embedding
The task of fine-grained visual classification (FGVC) deals with classification problems that display a small inter-class variance, such as distinguishing between different bird species or car models. State-of-the-art approaches typically tackle this problem by integrating an elaborate attention mechanism or (part-)localization method into a standard convolutional neural network (CNN). In this work as well, the aim is to enhance the performance of a backbone CNN such as ResNet by including three efficient and lightweight components specifically designed for FGVC. This is achieved by using global k-max pooling, a discriminative embedding layer trained by optimizing class means, and an efficient bounding box estimator that only needs class labels for training. The resulting model achieves new state-of-the-art recognition accuracies on the Stanford Cars and FGVC-Aircraft datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
153,813
2405.00314
Model Quantization and Hardware Acceleration for Vision Transformers: A Comprehensive Survey
Vision Transformers (ViTs) have recently garnered considerable attention, emerging as a promising alternative to convolutional neural networks (CNNs) in several vision-related applications. However, their large model sizes and high computational and memory demands hinder deployment, especially on resource-constrained devices. This underscores the necessity of algorithm-hardware co-design specific to ViTs, aiming to optimize their performance by tailoring both the algorithmic structure and the underlying hardware accelerator to each other's strengths. Model quantization, by converting high-precision numbers to lower precision, reduces the computational demands and memory needs of ViTs, allowing the creation of hardware specifically optimized for these quantized algorithms and boosting efficiency. This article provides a comprehensive survey of ViT quantization and its hardware acceleration. We first delve into the unique architectural attributes of ViTs and their runtime characteristics. Subsequently, we examine the fundamental principles of model quantization, followed by a comparative analysis of the state-of-the-art quantization techniques for ViTs. Additionally, we explore the hardware acceleration of quantized ViTs, highlighting the importance of hardware-friendly algorithm design. In conclusion, this article discusses ongoing challenges and future research paths. We consistently maintain the related open-source materials at https://github.com/DD-DuDa/awesome-vit-quantization-acceleration.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
450,874
2303.08114
Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
Training data attribution (TDA) methods offer to trace a model's prediction on any given example back to specific influential training examples. Existing approaches do so by assigning a scalar influence score to each training example, under a simplifying assumption that influence is additive. But in reality, we observe that training examples interact in highly non-additive ways due to factors such as inter-example redundancy, training order, and curriculum learning effects. To study such interactions, we propose Simfluence, a new paradigm for TDA where the goal is not to produce a single influence score per example, but instead a training run simulator: the user asks, ``If my model had trained on example $z_1$, then $z_2$, ..., then $z_n$, how would it behave on $z_{test}$?''; the simulator should then output a simulated training run, which is a time series predicting the loss on $z_{test}$ at every step of the simulated run. This enables users to answer counterfactual questions about what their model would have learned under different training curricula, and to directly see where in training that learning would occur. We present a simulator, Simfluence-Linear, that captures non-additive interactions and is often able to predict the spiky trajectory of individual example losses with surprising fidelity. Furthermore, we show that existing TDA methods such as TracIn and influence functions can be viewed as special cases of Simfluence-Linear. This enables us to directly compare methods in terms of their simulation accuracy, subsuming several prior TDA approaches to evaluation. In experiments on large language model (LLM) fine-tuning, we show that our method predicts loss trajectories with much higher accuracy than existing TDA methods (doubling Spearman's correlation and reducing mean-squared error by 75%) across several tasks, models, and training methods.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
351,512
2309.11172
Partition-A-Medical-Image: Extracting Multiple Representative Sub-regions for Few-shot Medical Image Segmentation
Few-shot Medical Image Segmentation (FSMIS) is a more promising solution for medical image segmentation tasks where high-quality annotations are naturally scarce. However, current mainstream methods primarily focus on extracting holistic representations from support images with large intra-class variations in appearance and background, and encounter difficulties in adapting to query images. In this work, we present an approach to extract multiple representative sub-regions from a given support medical image, enabling fine-grained selection over the generated image regions. Specifically, the foreground of the support image is decomposed into distinct regions, which are subsequently used to derive region-level representations via a designed Regional Prototypical Learning (RPL) module. We then introduce a novel Prototypical Representation Debiasing (PRD) module based on a two-way elimination mechanism which suppresses the disturbance of regional representations by a self-support, Multi-direction Self-debiasing (MS) block, and a support-query, Interactive Debiasing (ID) block. Finally, an Assembled Prediction (AP) module is devised to balance and integrate predictions of multiple prototypical representations learned using stacked PRD modules. Results obtained through extensive experiments on three publicly accessible medical imaging datasets demonstrate consistent improvements over the leading FSMIS methods. The source code is available at https://github.com/YazhouZhu19/PAMI.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
393,314
2209.14430
Minimax Optimal Kernel Operator Learning via Multilevel Training
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we study the statistical limit of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish the information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm and show that a regularization that learns the spectral components below the bias contour and ignores the ones that are above the variance contour can achieve the optimal learning rate. At the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. Based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
320,240
2404.15877
Effective Unsupervised Constrained Text Generation based on Perturbed Masking
Unsupervised constrained text generation aims to generate text under a given set of constraints without any supervised data. Current state-of-the-art methods stochastically sample edit positions and actions, which may cause unnecessary search steps. In this paper, we propose PMCTG to improve effectiveness by searching for the best edit position and action in each step. Specifically, PMCTG extends the perturbed masking technique to effectively search for the most incongruent token to edit. It then introduces four multi-aspect scoring functions to select the edit action and further reduce search difficulty. Since PMCTG does not require supervised data, it can be applied to different generation tasks. We show that under the unsupervised setting, PMCTG achieves new state-of-the-art results in two representative tasks, namely keywords-to-sentence generation and paraphrasing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
449,273