Dataset schema:
id: string (9–16 chars)
title: string (4–278 chars)
abstract: string (3–4.08k chars)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each; one multi-label category flag per column)
__index_level_0__: int64 (0–541k)
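Since each record stores its categories as one boolean column per label, decoding those flags back into label lists is a one-liner per row. A minimal sketch, assuming the dump is loaded as a pandas DataFrame; the mini-frame below hard-codes three of the rows (with a subset of the category columns, abstracts omitted) purely for illustration:

```python
import pandas as pd

# Toy frame mirroring the schema above: three of the records below,
# with a subset of the boolean category columns.
df = pd.DataFrame({
    "id": ["1611.06880", "2007.09331", "1702.02048"],
    "cs.CV": [True, False, False],
    "cs.AI": [False, True, False],
    "cs.LG": [False, True, False],
    "cs.SI": [False, False, True],
})
label_cols = ["cs.CV", "cs.AI", "cs.LG", "cs.SI"]

# Select rows flagged with a given category.
lg_rows = df[df["cs.LG"]]

# Decode the one-hot flags back into a list of label names per row.
df["labels"] = df[label_cols].apply(
    lambda row: [c for c in label_cols if row[c]], axis=1
)

print(lg_rows["id"].tolist())
print(df["labels"].tolist())
```

In practice the frame would come from `pd.read_parquet` or the `datasets` library rather than being hand-built; the column names are the ones listed in the schema above.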
1611.06880
The subset-matched Jaccard index for evaluation of Segmentation for Plant Images
We describe a new measure for the evaluation of region level segmentation of objects, as applied to evaluating the accuracy of leaf-level segmentation of plant images. The proposed approach enforces the rule that a region (e.g. a leaf) in either the image being evaluated or the ground truth image evaluated against can be mapped to no more than one region in the other image. We call this measure the subset-matched Jaccard index.
Labels: cs.CV
__index_level_0__: 64,268
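As an illustration of the matching rule in the abstract above, here is a minimal sketch of a subset-matched Jaccard index using greedy one-to-one region matching by overlap. This is one plausible reading of the abstract, not the paper's exact definition:

```python
def subset_matched_jaccard(pred_regions, gt_regions):
    """Sketch of a subset-matched Jaccard index.

    pred_regions, gt_regions: lists of sets of pixel coordinates.
    Each predicted region is matched to at most one ground-truth
    region (and vice versa); matching here is greedy by overlap size.
    Illustrative reading of the abstract, not the paper's definition.
    """
    used = set()        # indices of already-matched ground-truth regions
    intersection = 0
    for p in sorted(pred_regions, key=len, reverse=True):
        best, best_ov = None, 0
        for j, g in enumerate(gt_regions):
            if j in used:
                continue
            ov = len(p & g)
            if ov > best_ov:
                best, best_ov = j, ov
        if best is not None:
            used.add(best)
            intersection += best_ov
    all_pred = set().union(*pred_regions) if pred_regions else set()
    all_gt = set().union(*gt_regions) if gt_regions else set()
    union = len(all_pred | all_gt)
    return intersection / union if union else 1.0

gt = [{(0, 0), (0, 1)}, {(1, 0), (1, 1)}]
perfect = subset_matched_jaccard(gt, gt)                            # 1.0
merged = subset_matched_jaccard([{(0, 0), (0, 1), (1, 0), (1, 1)}], gt)
```

The second call merges two ground-truth leaves into one predicted region; only one of them can be matched under the one-to-one rule, so the score drops to 0.5 even though every pixel is covered.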
2007.09331
Strudel: Learning Structured-Decomposable Probabilistic Circuits
Probabilistic circuits (PCs) represent a probability distribution as a computational graph. Enforcing structural properties on these graphs guarantees that several inference scenarios become tractable. Among these properties, structured decomposability is a particularly appealing one: it enables the efficient and exact computations of the probability of complex logical formulas, and can be used to reason about the expected output of certain predictive models under missing data. This paper proposes Strudel, a simple, fast and accurate learning algorithm for structured-decomposable PCs. Compared to prior work for learning structured-decomposable PCs, Strudel delivers more accurate single PC models in fewer iterations, and dramatically scales learning when building ensembles of PCs. It achieves this scalability by exploiting another structural property of PCs, called determinism, and by sharing the same computational graph across mixture components. We show these advantages on standard density estimation benchmarks and challenging inference scenarios.
Labels: cs.AI, cs.LG
__index_level_0__: 187,901
1702.02048
Multiplex Network Regression: How do relations drive interactions?
We introduce a statistical regression model to investigate the impact of dyadic relations on complex networks generated from observed repeated interactions. It is based on generalised hypergeometric ensembles (gHypEG), a class of statistical network ensembles developed recently to deal with multi-edge graph and count data. We represent different types of known relations between system elements by weighted graphs, separated in the different layers of a multiplex network. With our method, we can regress the influence of each relational layer, the explanatory variables, on the interaction counts, the dependent variables. Moreover, we can quantify the statistical significance of the relations as explanatory variables for the observed interactions. To demonstrate the power of our approach, we investigate an example based on empirical data.
Labels: cs.SI
__index_level_0__: 67,917
2307.07914
Exploiting FPGA Capabilities for Accelerated Biomedical Computing
This study presents advanced neural network architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Deep Belief Networks (DBNs), for enhanced ECG signal analysis using Field Programmable Gate Arrays (FPGAs). We utilize the MIT-BIH Arrhythmia Database for training and validation, introducing Gaussian noise to improve algorithm robustness. The implemented models feature various layers for distinct processing and classification tasks, and techniques such as the EarlyStopping callback and Dropout layers are used to mitigate overfitting. Our work also explores the development of a custom Tensor Compute Unit (TCU) accelerator for the PYNQ Z1 board, offering comprehensive steps for FPGA-based machine learning, including setting up the Tensil toolchain in Docker, selecting an architecture, configuring the PS-PL interface, and compiling and executing models. Performance metrics such as latency and throughput are calculated for practical insights, demonstrating the potential of FPGAs in high-performance biomedical computing. The study ultimately offers a guide for optimizing neural network performance on FPGAs for various applications.
Labels: cs.LG, Other
__index_level_0__: 379,587
2411.02948
Grounding Natural Language to SQL Translation with Data-Based Self-Explanations
Natural Language Interfaces for Databases empower non-technical users to interact with data using natural language (NL). Advanced approaches, utilizing either neural sequence-to-sequence or more recent sophisticated large-scale language models, typically implement NL to SQL (NL2SQL) translation in an end-to-end fashion. However, like humans, these end-to-end translation models may not always generate the best SQL output on their first try. In this paper, we propose CycleSQL, an iterative framework designed for end-to-end translation models to autonomously generate the best output through self-evaluation. The main idea of CycleSQL is to introduce data-grounded NL explanations of query results as self-provided feedback, and use the feedback to validate the correctness of the translation iteratively, hence improving the overall translation accuracy. Extensive experiments, including quantitative and qualitative evaluations, are conducted to study CycleSQL by applying it to seven existing translation models on five widely used benchmarks. The results show that 1) the feedback loop introduced in CycleSQL can consistently improve the performance of existing models; in particular, applying CycleSQL to RESDSQL obtains a translation accuracy of 82.0% (+2.6%) on the validation set and 81.6% (+3.2%) on the test set of the Spider benchmark; 2) the generated NL explanations can also provide insightful information for users, aiding in the comprehension of translation results and consequently enhancing the interpretability of NL2SQL translation.
Labels: cs.CL, cs.DB
__index_level_0__: 505,732
1701.05686
High Rate LDPC Codes from Difference Covering Arrays
This paper presents a combinatorial construction of low-density parity-check (LDPC) codes from difference covering arrays. While the original construction by Gallager allocated bits randomly in a sparse parity-check matrix, over the past 20 years researchers have used a variety of more structured approaches to construct these codes, with the more recent constructions of well-structured LDPC codes coming from balanced incomplete block designs (BIBDs) and from Latin squares over finite fields. However, these constructions have suffered from the limited orders for which these designs exist. Here we present a construction of LDPC codes of length $4n^2 - 2n$ for all $n$ using the cyclic group of order $2n$. These codes achieve high information rate (greater than 0.8) for $n \geq 8$, have girth at least 6 and have minimum distance 6 for $n$ odd.
Labels: cs.IT
__index_level_0__: 67,018
2101.10713
Exploring Transitivity in Neural NLI Models through Veridicality
Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear. We explore this issue in the domain of natural language inference (NLI), focusing on the transitivity of inference relations, a fundamental property for systematically drawing inferences. A model capturing transitivity can compose basic inference patterns and draw new inferences. We introduce an analysis method using synthetic and naturalistic NLI datasets involving clause-embedding verbs to evaluate whether models can perform transitivity inferences composed of veridical inferences and arbitrary inference types. We find that current NLI models do not perform consistently well on transitivity inference tasks, suggesting that they lack the generalization capacity for drawing composite inferences from provided training examples. The data and code for our analysis are publicly available at https://github.com/verypluming/transitivity.
Labels: cs.CL
__index_level_0__: 217,023
2405.12452
Prompt-Based Spatio-Temporal Graph Transfer Learning
Spatio-temporal graph neural networks have proven efficacy in capturing complex dependencies for urban computing tasks such as forecasting and kriging. Yet, their performance is constrained by the reliance on extensive data for training on a specific task, thereby limiting their adaptability to new urban domains with varied task demands. Although transfer learning has been proposed to remedy this problem by leveraging knowledge across domains, cross-task generalization remains under-explored in spatio-temporal graph transfer learning due to the lack of a unified framework. To bridge the gap, we propose Spatio-Temporal Graph Prompting (STGP), a prompt-based framework capable of adapting to diverse tasks in a data-scarce domain. Specifically, we first unify different tasks into a single template and introduce a task-agnostic network architecture that aligns with this template. This approach enables capturing dependencies shared across tasks. Furthermore, we employ learnable prompts to achieve domain and task transfer in a two-stage prompting pipeline, facilitating the prompts to effectively capture domain knowledge and task-specific properties. Our extensive experiments demonstrate that STGP outperforms state-of-the-art baselines in three tasks (forecasting, kriging, and extrapolation), achieving an improvement of up to 10.7%.
Labels: cs.AI, cs.LG
__index_level_0__: 455,526
2306.08014
Realising Synthetic Active Inference Agents, Part I: Epistemic Objectives and Graphical Specification Language
The Free Energy Principle (FEP) is a theoretical framework for describing how (intelligent) systems self-organise into coherent, stable structures by minimising a free energy functional. Active Inference (AIF) is a corollary of the FEP that specifically details how systems that are able to plan for the future (agents) function by minimising particular free energy functionals that incorporate information seeking components. This paper is the first in a series of two in which we derive a synthetic version of AIF on free form factor graphs. The present paper focuses on deriving a local version of the free energy functionals used for AIF. This enables us to construct a version of AIF which applies to arbitrary graphical models and interfaces with prior work on message passing algorithms. The resulting messages are derived in our companion paper. We also identify a gap in the graphical notation used for factor graphs. While factor graphs are great at expressing a generative model, they have so far been unable to specify the full optimisation problem including constraints. To solve this problem we develop Constrained Forney-style Factor Graph (CFFG) notation which permits a fully graphical description of variational inference objectives. We then proceed to show how CFFGs can be used to reconstruct prior algorithms for AIF as well as derive new ones. The latter is demonstrated by deriving an algorithm that permits direct policy inference for AIF agents, circumventing a long-standing scaling issue that has so far hindered the application of AIF in industrial settings. We demonstrate our algorithm on the classic T-maze task and show that it reproduces the information seeking behaviour that is a hallmark feature of AIF.
Labels: cs.AI, cs.LG
__index_level_0__: 373,258
1210.4893
Sparse Q-learning with Mirror Descent
This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in highdimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both the dual space and primal space, which are linked together using a Legendre transform. Mirror descent can be viewed as a proximal algorithm where the distance generating function used is a Bregman divergence. A new class of proximal-gradient based temporal-difference (TD) methods are presented based on different Bregman divergences, which are more powerful than regular TD learning. Examples of Bregman divergences that are studied include p-norm functions, and Mahalanobis distance based on the covariance of sample gradients. A new family of sparse mirror-descent reinforcement learning methods are proposed, which are able to find sparse fixed points of an l1-regularized Bellman equation at significantly less computational cost than previous methods based on second-order matrix methods. An experimental study of mirror-descent reinforcement learning is presented using discrete and continuous Markov decision processes.
Labels: cs.LG
__index_level_0__: 19,218
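The p-norm mirror-descent machinery described above can be sketched generically: a proximal mirror-descent step with Gentile-style p-norm link functions and l1 shrinkage applied in the dual space. The sketch below applies it to a toy quadratic with illustrative step sizes; it illustrates the idea, not the paper's TD-learning algorithm:

```python
import numpy as np

def pnorm_link(w, p):
    """Gradient of (1/2)||w||_p^2; maps between primal and dual spaces."""
    norm = np.linalg.norm(w, p)
    if norm == 0:
        return np.zeros_like(w)
    return np.sign(w) * np.abs(w) ** (p - 1) / norm ** (p - 2)

def sparse_mirror_descent_step(w, grad, lr, lam, p):
    q = p / (p - 1)                  # dual exponent, 1/p + 1/q = 1
    theta = pnorm_link(w, p)         # map weights to the dual space
    theta = theta - lr * grad        # gradient step in the dual
    # l1 soft-thresholding in the dual space -> sparse fixed points
    theta = np.sign(theta) * np.maximum(np.abs(theta) - lr * lam, 0.0)
    return pnorm_link(theta, q)      # Legendre-dual map back to the primal

# Toy demo: minimize ||w - t||^2 with a sparse target t.
# (p = 2 makes the link the identity, recovering plain soft-thresholded
# gradient descent; larger p values behave multiplicatively.)
t = np.array([1.0, 0.0, 0.0, 0.0])
w = np.zeros_like(t)
for _ in range(200):
    w = sparse_mirror_descent_step(w, 2 * (w - t), lr=0.1, lam=0.01, p=2.0)
print(w)   # first coordinate near 1, the rest exactly 0
```

The soft-thresholding in the dual space is what yields sparse fixed points: coordinates whose dual value never exceeds the threshold stay exactly zero.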
2407.06581
Vision language models are blind
While large language models with vision capabilities (VLMs), e.g., GPT-4o and Gemini 1.5 Pro, are powering various image-text applications and scoring high on many vision-understanding benchmarks, we find that they are surprisingly still struggling with low-level vision tasks that are easy for humans. Specifically, on BlindTest, our suite of 7 very simple tasks such as identifying (a) whether two circles overlap; (b) whether two lines intersect; (c) which letter is being circled in a word; and (d) counting circles in an Olympic-like logo, four state-of-the-art VLMs are only 58.57% accurate on average. Claude 3.5 Sonnet performs the best at 74.94% accuracy, but this is still far from the human expected accuracy of 100%. Across different image resolutions and line widths, VLMs consistently struggle with tasks that require precise spatial information and recognizing geometric primitives that overlap or are close together. Code and data are available at: https://vlmsareblind.github.io
Labels: cs.AI, cs.CV
__index_level_0__: 471,453
1911.08607
Robust Adaptive Model Predictive Control with Worst-Case Cost
A robust adaptive model predictive control (MPC) algorithm is presented for linear, time invariant systems with unknown dynamics and subject to bounded measurement noise. The system is characterized by an impulse response model, which is assumed to lie within a bounded set called the feasible system set. Online set-membership identification is used to reduce uncertainty in the impulse response. In the MPC scheme, robust constraints are enforced to ensure constraint satisfaction for all the models in the feasible set. The performance objective is formulated as a worst-case cost with respect to the modeling uncertainties. That is, at each time step an optimization problem is solved in which the control input is optimized for the worst-case plant in the uncertainty set. The performance of the proposed algorithm is compared to an adaptive MPC algorithm from the literature using Monte-Carlo simulations.
Labels: cs.SY
__index_level_0__: 154,238
1705.09391
Discovering Reliable Approximate Functional Dependencies
Given a database and a target attribute of interest, how can we tell whether there exists a functional, or approximately functional dependence of the target on any set of other attributes in the data? How can we reliably, without bias to sample size or dimensionality, measure the strength of such a dependence? And, how can we efficiently discover the optimal or $\alpha$-approximate top-$k$ dependencies? These are exactly the questions we answer in this paper. As we want to be agnostic on the form of the dependence, we adopt an information-theoretic approach, and construct a reliable, bias correcting score that can be efficiently computed. Moreover, we give an effective optimistic estimator of this score, by which for the first time we can mine the approximate functional dependencies from data with guarantees of optimality. Empirical evaluation shows that the derived score achieves a good bias-variance trade-off, can be used within an efficient discovery algorithm, and indeed discovers meaningful dependencies. Most important, it remains reliable in the face of data sparsity.
Labels: cs.AI, cs.IT, cs.DB
__index_level_0__: 74,186
1402.6556
Evolutionary solving of the debts' clearing problem
The debts' clearing problem is about clearing all the debts in a group of n entities (persons, companies etc.) using a minimal number of money transaction operations. The problem is known to be NP-hard in the strong sense. As for many intractable problems, techniques from the field of artificial intelligence are useful in finding solutions close to optimum for large inputs. An evolutionary algorithm for solving the debts' clearing problem is proposed.
Labels: cs.AI, cs.NE
__index_level_0__: 31,189
1310.3360
A Probabilistic Approach to Risk Mapping for Mt. Etna
We evaluate susceptibility to lava flows on Mt. Etna based on specially designed die-toss experiments using probabilities for type, time and place of activation from the volcano's 400-year recorded history and current studies on its known fractures and fissures. The types of activations were forecast using a table of probabilities for events, typed by duration and volume of ejecta. Lengths of time were represented by the number of activations to expect within a given time-frame, calculated assuming Poisson-distributed inter-arrival times for activations. Locations of future activations were forecast with a probability distribution function for activation probabilities. Most likely scenarios for risk and resulting topography were generated for Etna's next activation (average 7.76 years), the next 25, 50 and 100 years. Forecasts for areas most likely affected are in good agreement with previous risk studies made. Forecasts for risks of lava invasions, as well as future topographies might be a first. Threats to lifelines are also discussed.
Labels: cs.CE
__index_level_0__: 27,739
1902.02603
Radial and Directional Posteriors for Bayesian Neural Networks
We propose a new variational family for Bayesian neural networks. We decompose the variational posterior into two components, where the radial component captures the strength of each neuron in terms of its magnitude; while the directional component captures the statistical dependencies among the weight parameters. The dependencies learned via the directional density provide better modeling performance compared to the widely-used Gaussian mean-field-type variational family. In addition, the strength of input and output neurons learned via the radial density provides a structured way to compress neural networks. Indeed, experiments show that our variational family improves predictive performance and yields compressed networks simultaneously.
Labels: cs.LG
__index_level_0__: 120,916
1509.07951
Error Gradient-based Variable-Lp Norm Constraint LMS Algorithm for Sparse System Identification
Sparse adaptive filtering has gained much attention due to its wide applicability in the field of signal processing. Among the main algorithm families, sparse norm constraint adaptive filters have developed rapidly in recent years. However, when applied to system identification, most prior work in sparse norm constraint adaptive filtering suffers from the difficulty of adapting to the sparsity of the systems to be identified. To address this problem, we propose a novel variable p-norm constraint least mean square (LMS) algorithm, which serves as a variant of the conventional Lp-LMS algorithm established for sparse system identification. The parameter p is iteratively adjusted by the gradient descent method applied to the instantaneous square error. Numerical simulations show that this new approach achieves better performance than the traditional Lp-LMS and LMS algorithms in terms of steady-state error and convergence rate.
Labels: cs.SY
__index_level_0__: 47,307
1911.05797
AI-optimized detector design for the future Electron-Ion Collider: the dual-radiator RICH case
Advanced detector R&D requires performing computationally intensive and detailed simulations as part of the detector-design optimization process. We propose a general approach to this process based on Bayesian optimization and machine learning that encodes detector requirements. As a case study, we focus on the design of the dual-radiator Ring Imaging Cherenkov (dRICH) detector under development as part of the particle-identification system at the future Electron-Ion Collider (EIC). The EIC is a US-led frontier accelerator project for nuclear physics, which has been proposed to further explore the structure and interactions of nuclear matter at the scale of sea quarks and gluons. We show that the detector design obtained with our automated and highly parallelized framework outperforms the baseline dRICH design within the assumptions of the current model. Our approach can be applied to any detector R&D, provided that realistic simulations are available.
Labels: cs.LG
__index_level_0__: 153,368
1107.4161
Local Optima Networks of the Quadratic Assignment Problem
Using a recently proposed model for combinatorial landscapes, Local Optima Networks (LON), we conduct a thorough analysis of two types of instances of the Quadratic Assignment Problem (QAP). This network model is a reduction of the landscape in which the nodes correspond to the local optima, and the edges account for the notion of adjacency between their basins of attraction. The model was inspired by the notion of 'inherent network' of potential energy surfaces proposed in physical-chemistry. The local optima networks extracted from the so called uniform and real-like QAP instances, show features clearly distinguishing these two types of instances. Apart from a clear confirmation that the search difficulty increases with the problem dimension, the analysis provides new confirming evidence explaining why the real-like instances are easier to solve exactly using heuristic search, while the uniform instances are easier to solve approximately. Although the local optima network model is still under development, we argue that it provides a novel view of combinatorial landscapes, opening up the possibilities for new analytical tools and understanding of problem difficulty in combinatorial optimization.
Labels: cs.AI
__index_level_0__: 11,382
2408.09524
Enhancing Quantum Memory Lifetime with Measurement-Free Local Error Correction and Reinforcement Learning
Reliable quantum computation requires systematic identification and correction of errors that occur and accumulate in quantum hardware. To diagnose and correct such errors, standard quantum error-correcting protocols utilize $\textit{global}$ error information across the system obtained by mid-circuit readout of ancillary qubits. We investigate circuit-level error-correcting protocols that are measurement-free and based on $\textit{local}$ error information. Such a local error correction (LEC) circuit consists of faulty multi-qubit gates to perform both syndrome extraction and ancilla-controlled error removal. We develop and implement a reinforcement learning framework that takes a fixed set of faulty gates as inputs and outputs an optimized LEC circuit. To evaluate this approach, we quantitatively characterize an extension of logical qubit lifetime by a noisy LEC circuit. For the 2D classical Ising model and 4D toric code, our optimized LEC circuit performs better at extending a memory lifetime compared to a conventional LEC circuit based on Toom's rule in a sub-threshold gate error regime. We further show that such circuits can be used to reduce the rate of mid-circuit readouts to preserve a 2D toric code memory. Finally, we discuss the application of the LEC protocol on dissipative preparation of quantum states with topological phases.
Labels: cs.LG
__index_level_0__: 481,475
2403.15221
Mutual Information of a class of Poisson-type Channels using Markov Renewal Theory
The mutual information (MI) of Poisson-type channels has been linked to a filtering problem since the 70s, but its evaluation for specific continuous-time, discrete-state systems remains a demanding task. As an advantage, Markov renewal processes (MrP) retain their renewal property under state space filtering. This offers a way to solve the filtering problem analytically for small systems. We consider a class of communication systems $X \to Y$ that can be derived from an MrP by a custom filtering procedure. For the subclasses, where (i) $Y$ is a renewal process or (ii) $(X,Y)$ belongs to a class of MrPs, we provide an evolution equation for finite transmission duration $T>0$ and limit theorems for $T \to \infty$ that facilitate simulation-free evaluation of the MI $\mathbb{I}(X_{[0,T]}; Y_{[0,T]})$ and its associated mutual information rate (MIR). In other cases, simulation cost is reduced to the marginal system $(X,Y)$ or $Y$. We show that systems with an additional $X$-modulating level $C$, which statically chooses between different processes $X_{[0,T]}(c)$, can naturally be included in our framework, thereby giving an expression for $\mathbb{I}(C; Y_{[0,T]})$. Our primary contribution is to apply the results of classical (Markov renewal) filtering theory in a novel manner to the problem of exactly computing the MI/MIR. The theoretical framework is showcased in an application to bacterial gene expression, where filtering is analytically tractable.
Labels: cs.IT
__index_level_0__: 440,450
2202.03595
Model and predict age and sex in healthy subjects using brain white matter features: A deep learning approach
The human brain's white matter (WM) structure is of immense interest to the scientific community. Diffusion MRI provides a powerful tool to describe the brain WM structure noninvasively. To potentially enable monitoring of age-related changes and investigation of sex-related brain structure differences based on the mapping between the brain connectome and healthy subjects' age and sex, we extract fiber-cluster-based diffusion features and predict sex and age with a novel ensembled neural network classifier. We conduct experiments on the Human Connectome Project (HCP) young adult dataset and show that our model achieves 94.82% accuracy in sex prediction and 2.51 years MAE in age prediction. We also show that the fractional anisotropy (FA) is the most predictive of sex, while the number of fibers is the most predictive of age, and the combination of different features can improve the model performance.
Labels: cs.CV
__index_level_0__: 279,264
2205.10663
Transformer based Generative Adversarial Network for Liver Segmentation
Automated liver segmentation from radiology scans (CT, MRI) can improve surgery and therapy planning and follow-up assessment, in addition to conventional use for diagnosis and prognosis. Although convolutional neural networks (CNNs) have become the standard for image segmentation tasks, this has more recently started to change in favour of Transformer-based architectures, which exploit the attention mechanism's capability to model long-range dependencies in signals. In this study, we propose a new segmentation approach that combines Transformers with the Generative Adversarial Network (GAN) framework. The premise behind this choice is that the self-attention mechanism of the Transformers allows the network to aggregate high dimensional features and provide global information modeling. This mechanism provides better segmentation performance compared with traditional methods. Furthermore, we encode this generator into the GAN-based architecture so that the discriminator network in the GAN can classify the credibility of the generated segmentation masks compared with the real masks coming from human (expert) annotations. This allows us to extract the high dimensional topology information in the mask for biomedical image segmentation and provide more reliable segmentation results. Our model achieved a high dice coefficient of 0.9433, recall of 0.9515, and precision of 0.9376, and outperformed other Transformer-based approaches.
Labels: cs.CV
__index_level_0__: 297,804
1912.09421
Neural Design Network: Graphic Layout Generation with Constraints
Graphic design is essential for visual communication with layouts being fundamental to composing attractive designs. Layout generation differs from pixel-level image synthesis and is unique in terms of the requirement of mutual relations among the desired components. We propose a method for design layout generation that can satisfy user-specified constraints. The proposed neural design network (NDN) consists of three modules. The first module predicts a graph with complete relations from a graph with user-specified relations. The second module generates a layout from the predicted graph. Finally, the third module fine-tunes the predicted layout. Quantitative and qualitative experiments demonstrate that the generated layouts are visually similar to real design layouts. We also construct real designs based on predicted layouts for a better understanding of the visual quality. Finally, we demonstrate a practical application on layout recommendation.
Labels: cs.CV
__index_level_0__: 158,067
1909.08794
A literature review on current approaches and applications of fuzzy expert systems
The main purpose of this study is to identify research trends in publications on applications of fuzzy expert and knowledge-based systems, based on a classification of studies from the last decade. The present investigation covers 60 articles from related scholarly journals, international conference proceedings, and several major literature review papers. Our results reveal an upward trend in the number of recent publications, evidence of the growing popularity of the various applications of fuzzy expert systems. This rise is mainly in medical neuro-fuzzy and fuzzy expert systems. Another important observation is that many modern industrial applications are being extended to employ knowledge-based systems by extracting experts' knowledge.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
146,059
2306.05401
RDumb: A simple approach that questions our progress in continual test-time adaptation
Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time. While early work tested these algorithms for individual fixed distribution shifts, recent work proposed and applied methods for continual adaptation over long timescales. To examine the reported progress in the field, we propose the Continually Changing Corruptions (CCC) benchmark to measure asymptotic performance of TTA techniques. We find that eventually all but one of the state-of-the-art methods collapse and perform worse than a non-adapting model, including models specifically proposed to be robust to performance collapse. In addition, we introduce a simple baseline, "RDumb", that periodically resets the model to its pretrained state. RDumb performs better or on par with the previously proposed state-of-the-art in all considered benchmarks. Our results show that previous TTA approaches are neither effective at regularizing adaptation to avoid collapse nor able to outperform a simplistic resetting strategy.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
372,177
2006.01096
Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning
A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by finding causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
179,677
2408.04851
Your Classifier Can Be Secretly a Likelihood-Based OOD Detector
The ability to detect out-of-distribution (OOD) inputs is critical to guarantee the reliability of classification models deployed in an open environment. A fundamental challenge in OOD detection is that a discriminative classifier is typically trained to estimate the posterior probability p(y|z) for class y given an input z, but lacks the explicit likelihood estimation of p(z) ideally needed for OOD detection. While numerous OOD scoring functions have been proposed for classification models, these estimate scores are often heuristic-driven and cannot be rigorously interpreted as likelihood. To bridge the gap, we propose Intrinsic Likelihood (INK), which offers rigorous likelihood interpretation to modern discriminative-based classifiers. Specifically, our proposed INK score operates on the constrained latent embeddings of a discriminative classifier, which are modeled as a mixture of hyperspherical embeddings with constant norm. We draw a novel connection between the hyperspherical distribution and the intrinsic likelihood, which can be effectively optimized in modern neural networks. Extensive experiments on the OpenOOD benchmark empirically demonstrate that INK establishes a new state-of-the-art in a variety of OOD detection setups, including both far-OOD and near-OOD. Code is available at https://github.com/deeplearning-wisc/ink.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
479,558
2104.11559
Optimizing small BERTs trained for German NER
Currently, the most widespread neural network architecture for training language models is the so-called BERT, which led to improvements in various Natural Language Processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained in these NLP tasks. Unfortunately, the memory consumption and the training duration drastically increase with the size of these models. In this article, we investigate various training techniques of smaller BERT models: We combine different methods from other BERT variants like ALBERT, RoBERTa, and relative positional encoding. In addition, we propose two new fine-tuning modifications leading to better performance: Class-Start-End tagging and a modified form of Linear Chain Conditional Random Fields. Furthermore, we introduce Whole-Word Attention, which reduces BERT's memory usage and leads to a small increase in performance compared to classical Multi-Head-Attention. We evaluate these techniques on five public German Named Entity Recognition (NER) tasks, of which two are introduced by this article.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
231,949
2409.13568
Tackling fluffy clouds: field boundaries detection using time series of S2 and/or S1 imagery
Accurate field boundary delineation is a critical challenge in digital agriculture, impacting everything from crop monitoring to resource management. Existing methods often struggle with noise and fail to generalize across varied landscapes, particularly when dealing with cloud cover in optical remote sensing. In response, this study presents a new approach that leverages time series data from Sentinel-2 (S2) and Sentinel-1 (S1) imagery to improve performance under diverse cloud conditions, without the need for manual cloud filtering. We introduce a 3D Vision Transformer architecture specifically designed for satellite image time series, incorporating a memory-efficient attention mechanism. Two models are proposed: PTAViT3D, which handles either S2 or S1 data independently, and PTAViT3D-CA, which fuses both datasets to enhance accuracy. Both models are evaluated under sparse and dense cloud coverage by exploiting spatio-temporal correlations. Our results demonstrate that the models can effectively delineate field boundaries, even with partial (S2 or S2 and S1 data fusion) or dense cloud cover (S1), with the S1-based model providing performance comparable to S2 imagery in terms of spatial resolution. A key strength of this approach lies in its capacity to directly process cloud-contaminated imagery by leveraging spatio-temporal correlations in a memory-efficient manner. This methodology, used in the ePaddocks product to map Australia's national field boundaries, offers a robust, scalable solution adaptable to varying agricultural environments, delivering precision and reliability where existing methods falter. Our code is available at https://github.com/feevos/tfcl.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
490,051
2407.03637
QET: Enhancing Quantized LLM Parameters and KV cache Compression through Element Substitution and Residual Clustering
Matrix quantization entails representing matrix elements in a more space-efficient form to reduce storage usage, with dequantization restoring the original matrix for use. We formulate the Quantization Error Minimization (QEM) problem as minimizing the distance between a matrix before and after quantization, under the condition that the quantized matrix occupies the same memory space. Matrix quantization is crucial in various applications, including Large Language Models (LLMs) weight quantization, vector databases, KV cache quantization, graph compression, and image compression. Recent advancements in LLMs, such as GPT-4 and BERT, have highlighted the importance of matrix compression due to the large size of parameters and KV cache, which are stored as matrices. We propose Quantum Entanglement Trees (QET) to address the QEM problem by leveraging the local orderliness of matrix elements, involving iterative element swapping to form a locally ordered matrix. This matrix is then grouped and quantized by columns. To enhance QET, we introduce two optimizations: further quantizing residuals to reduce MSE, and using masking and batch processing to accelerate the algorithm. Experimental results demonstrate that QET can effectively reduce MSE to 5.05%, 13.33%, and 11.89% of the current best method on the LLM dataset, K cache, and V cache, respectively. Our contributions include the abstraction of the QEM problem, the design of the QET algorithm, and the proposal of two optimizations to improve accuracy and speed.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
470,234
1811.00119
A task in a suit and a tie: paraphrase generation with semantic augmentation
Paraphrasing is rooted in semantics. We show the effectiveness of transformers (Vaswani et al. 2017) for paraphrase generation and further improvements by incorporating PropBank labels via a multi-encoder. Evaluating on MSCOCO and WikiAnswers, we find that transformers are fast and effective, and that semantic augmentation for both transformers and LSTMs leads to sizable 2-3 point gains in BLEU, METEOR and TER. More importantly, we find surprisingly large gains on human evaluations compared to previous models. Nevertheless, manual inspection of generated paraphrases reveals ample room for improvement: even our best model produces human-acceptable paraphrases for only 28% of captions from the CHIA dataset (Sharma et al. 2018), and it fails spectacularly on sentences from Wikipedia. Overall, these results point to the potential for incorporating semantics in the task while highlighting the need for stronger evaluation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
112,001
1906.06566
LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding
Technological breakthroughs in smart homes, self-driving cars, health care and robotic assistants, in addition to reinforced law regulations, have critically influenced academic research on explainable machine learning. A sufficient number of researchers have implemented ways to explain indifferently any black box model for classification tasks. A drawback of building agnostic explanators is that the neighbourhood generation process is universal and consequently does not guarantee true adjacency between the generated neighbours and the instance. This paper explores a methodology for providing local explanations of a neural network's decisions through a process that actively takes the network's architecture into consideration when creating an instance's neighbourhood, thereby assuring adjacency between the generated neighbours and the instance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
135,337
1606.04552
A New Approach to Dimensionality Reduction for Anomaly Detection in Data Traffic
The monitoring and management of high-volume feature-rich traffic in large networks offers significant challenges in storage, transmission and computational costs. The predominant approach to reducing these costs is based on performing a linear mapping of the data to a low-dimensional subspace such that a certain large percentage of the variance in the data is preserved in the low-dimensional representation. This variance-based subspace approach to dimensionality reduction forces a fixed choice of the number of dimensions, is not responsive to real-time shifts in observed traffic patterns, and is vulnerable to normal traffic spoofing. Based on theoretical insights proved in this paper, we propose a new distance-based approach to dimensionality reduction motivated by the fact that the real-time structural differences between the covariance matrices of the observed and the normal traffic are more relevant to anomaly detection than the structure of the training data alone. Our approach, called the distance-based subspace method, allows a different number of reduced dimensions in different time windows and arrives at only the number of dimensions necessary for effective anomaly detection. We present centralized and distributed versions of our algorithm and, using simulation on real traffic traces, demonstrate the qualitative and quantitative advantages of the distance-based subspace approach.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
57,272
1110.2480
Beyond Traditional DTN Routing: Social Networks for Opportunistic Communication
This article examines the evolution of routing protocols for intermittently connected ad hoc networks and discusses the trend toward social-based routing protocols. A survey of current routing solutions is presented, where routing protocols for opportunistic networks are classified based on the network graph employed. The need to capture performance tradeoffs from a multi-objective perspective is highlighted.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
12,591
2409.10226
Privacy-Preserving Distributed Maximum Consensus Without Accuracy Loss
In distributed networks, calculating the maximum element is a fundamental task in data analysis, known as the distributed maximum consensus problem. However, the sensitive nature of the data involved makes privacy protection essential. Despite its importance, privacy in distributed maximum consensus has received limited attention in the literature. Traditional privacy-preserving methods typically add noise to updates, degrading the accuracy of the final result. To overcome these limitations, we propose a novel distributed optimization-based approach that preserves privacy without sacrificing accuracy. Our method introduces virtual nodes to form an augmented graph and leverages a carefully designed initialization process to ensure the privacy of honest participants, even when all their neighboring nodes are dishonest. Through a comprehensive information-theoretical analysis, we derive a sufficient condition to protect private data against both passive and eavesdropping adversaries. Extensive experiments validate the effectiveness of our approach, demonstrating that it not only preserves perfect privacy but also maintains accuracy, outperforming existing noise-based methods that typically suffer from accuracy loss.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
true
488,669
2104.02036
Modelling and Analysis of Magnetic Fields from Skeletal Muscle for Valuable Physiological Measurements
MagnetoMyoGraphy (MMG) is a method of studying muscle function via weak magnetic fields generated from human active organs and tissues. The correspondence between MMG and electromyography is derived directly from the Maxwell-Ampère law. Here, upon briefly describing the principles of voltage distribution inside skeletal muscles due to electrical stimulation, we provide a protocol to determine the effects of the magnetic field generated from a time-changing action potential propagating in a group of skeletal muscle cells. The position-dependent behaviour of the magnetic field arising from the different currents in muscle fibres is analysed in the temporal, spectral and spatial domains. The procedure covers identification of the fibre subpopulations inside the fascicles of a given nerve section, characterization of soleus skeletal muscle currents, checking of axial intracellular currents, and ultimately calculation of the generated magnetic field. We expect this protocol to take approximately 2-3 hours to complete for the whole finite-element analysis.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
228,566
2309.00885
A Generic Fundus Image Enhancement Network Boosted by Frequency Self-supervised Representation Learning
Fundus photography is prone to suffer from image quality degradation that impacts clinical examination performed by ophthalmologists or intelligent systems. Though enhancement algorithms have been developed to promote fundus observation on degraded images, high data demands and limited applicability hinder their clinical deployment. To circumvent this bottleneck, a generic fundus image enhancement network (GFE-Net) is developed in this study to robustly correct unknown fundus images without supervised or extra data. Levering image frequency information, self-supervised representation learning is conducted to learn robust structure-aware representations from degraded images. Then with a seamless architecture that couples representation learning and image enhancement, GFE-Net can accurately correct fundus images and meanwhile preserve retinal structures. Comprehensive experiments are implemented to demonstrate the effectiveness and advantages of GFE-Net. Compared with state-of-the-art algorithms, GFE-Net achieves superior performance in data dependency, enhancement performance, deployment efficiency, and scale generalizability. Follow-up fundus image analysis is also facilitated by GFE-Net, whose modules are respectively verified to be effective for image enhancement.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
389,458
2205.14649
Speaker Identification using Speech Recognition
The amount of audio data is increasing day by day throughout the globe with the growth of telephonic conversations, video conferences and voice messages. This research provides a mechanism for identifying a speaker in an audio file based on human voice biometric features such as pitch, amplitude and frequency. We propose an unsupervised learning model that can learn speech representations from a limited dataset. The LibriSpeech dataset was used in this research, and we were able to achieve a word error rate of 1.8.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
299,444
2409.07801
SURGIVID: Annotation-Efficient Surgical Video Object Discovery
Surgical scenes convey crucial information about the quality of surgery. Pixel-wise localization of tools and anatomical structures is the first task towards deeper surgical analysis for microscopic or endoscopic surgical views. This is typically done via fully-supervised methods which are annotation greedy and in several cases, demanding medical expertise. Considering the profusion of surgical videos obtained through standardized surgical workflows, we propose an annotation-efficient framework for the semantic segmentation of surgical scenes. We employ image-based self-supervised object discovery to identify the most salient tools and anatomical structures in surgical videos. These proposals are further refined within a minimally supervised fine-tuning step. Our unsupervised setup reinforced with only 36 annotation labels indicates comparable localization performance with fully-supervised segmentation models. Further, leveraging surgical phase labels as weak labels can better guide model attention towards surgical tools, leading to $\sim 2\%$ improvement in tool localization. Extensive ablation studies on the CaDIS dataset validate the effectiveness of our proposed solution in discovering relevant surgical objects with minimal or no supervision.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
487,671
1810.01152
Learning Discriminators as Energy Networks in Adversarial Learning
We propose a novel framework for structured prediction via adversarial learning. Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training. The information captured by discriminative models complements that in the structured prediction models, but few existing studies have investigated utilizing such information to improve structured prediction models at the inference stage. In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction. Discriminative models are treated as energy-based models. Similar to the adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores. In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models. The proposed method is able to handle a range of tasks, e.g., multi-label classification and image segmentation. Empirical results on these two tasks validate the effectiveness of our learning method.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
109,343
1901.11382
Learning to Clean: A GAN Perspective
In the big data era, the impetus to digitize the vast reservoirs of data trapped in unstructured scanned documents such as invoices, bank documents and courier receipts has gained fresh momentum. The scanning process often results in the introduction of artifacts such as background noise, blur due to camera motion, watermarks, coffee stains, or faded text. These artifacts pose many readability challenges to current text recognition algorithms and significantly degrade their performance. Existing learning based denoising techniques require a dataset comprising of noisy documents paired with cleaned versions. In such scenarios, a model can be trained to generate clean documents from noisy versions. However, very often in the real world such a paired dataset is not available, and all we have for training our denoising model are unpaired sets of noisy and clean images. This paper explores the use of GANs to generate denoised versions of the noisy documents. In particular, where paired information is available, we formulate the problem as an image-to-image translation task, i.e., translating a document from the noisy domain (i.e., background noise, blurred, faded, watermarked) to a target clean document using Generative Adversarial Networks (GAN). However, in the absence of paired images for training, we employed CycleGAN, which is known to learn a mapping between the distributions of the noisy images to the denoised images using unpaired data to achieve image-to-image translation for cleaning the noisy documents. We compare the performance of CycleGAN for document cleaning tasks using unpaired images with a Conditional GAN trained on paired data from the same dataset. Experiments were performed on a public document dataset on which different types of noise were artificially induced; the results demonstrate that CycleGAN learns a more robust mapping from the space of noisy to clean documents.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
120,236
1605.00775
Spatially Aware Dictionary Learning and Coding for Fossil Pollen Identification
We propose a robust approach for performing automatic species-level recognition of fossil pollen grains in microscopy images that exploits both global shape and local texture characteristics in a patch-based matching methodology. We introduce a novel criteria for selecting meaningful and discriminative exemplar patches. We optimize this function during training using a greedy submodular function optimization framework that gives a near-optimal solution with bounded approximation error. We use these selected exemplars as a dictionary basis and propose a spatially-aware sparse coding method to match testing images for identification while maintaining global shape correspondence. To accelerate the coding process for fast matching, we introduce a relaxed form that uses spatially-aware soft-thresholding during coding. Finally, we carry out an experimental study that demonstrates the effectiveness and efficiency of our exemplar selection and classification mechanisms, achieving $86.13\%$ accuracy on a difficult fine-grained species classification task distinguishing three types of fossil spruce pollen.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
55,392
cmp-lg/9406037
Multi-Paragraph Segmentation of Expository Text
This paper describes TextTiling, an algorithm for partitioning expository texts into coherent multi-paragraph discourse units which reflect the subtopic structure of the texts. The algorithm uses domain-independent lexical frequency and distribution information to recognize the interactions of multiple simultaneous themes. Two fully-implemented versions of the algorithm are described and shown to produce segmentation that corresponds well to human judgments of the major subtopic boundaries of thirteen lengthy texts.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,116
2403.13807
Editing Massive Concepts in Text-to-Image Diffusion Models
Text-to-image diffusion models suffer from the risk of generating outdated, copyrighted, incorrect, and biased content. While previous methods have mitigated the issues on a small scale, it is essential to handle them simultaneously in larger-scale real-world scenarios. We propose a two-stage method, Editing Massive Concepts In Diffusion Models (EMCID). The first stage performs memory optimization for each individual concept with dual self-distillation from text alignment loss and diffusion noise prediction loss. The second stage conducts massive concept editing with multi-layer, closed form model editing. We further propose a comprehensive benchmark, named ImageNet Concept Editing Benchmark (ICEB), for evaluating massive concept editing for T2I models with two subtasks, free-form prompts, massive concept categories, and extensive evaluation metrics. Extensive experiments conducted on our proposed benchmark and previous benchmarks demonstrate the superior scalability of EMCID for editing up to 1,000 concepts, providing a practical approach for fast adjustment and re-deployment of T2I diffusion models in real-world applications.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
439,790
2309.08648
MAPLE: Mobile App Prediction Leveraging Large Language Model Embeddings
In recent years, predicting mobile app usage has become increasingly important for areas like app recommendation, user behaviour analysis, and mobile resource management. Existing models, however, struggle with the heterogeneous nature of contextual data and the user cold start problem. This study introduces a novel prediction model, Mobile App Prediction Leveraging Large Language Model Embeddings (MAPLE), which employs Large Language Models (LLMs) and installed app similarity to overcome these challenges. MAPLE utilises the power of LLMs to process contextual data and discern intricate relationships within it effectively. Additionally, we explore the use of installed app similarity to address the cold start problem, facilitating the modelling of user preferences and habits, even for new users with limited historical data. In essence, our research presents MAPLE as a novel, potent, and practical approach to app usage prediction, making significant strides in resolving issues faced by existing models. MAPLE stands out as a comprehensive and effective solution, setting a new benchmark for more precise and personalised app usage predictions. In tests on two real-world datasets, MAPLE surpasses contemporary models in both standard and cold start scenarios. These outcomes validate MAPLE's capacity for precise app usage predictions and its resilience against the cold start problem. This enhanced performance stems from the model's proficiency in capturing complex temporal patterns and leveraging contextual information. As a result, MAPLE can potentially improve personalised mobile app usage predictions and user experiences markedly.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
392,274
1909.12401
A Hierarchical Approach for Visual Storytelling Using Image Description
One of the primary challenges of visual storytelling is developing techniques that can maintain the context of the story over long event sequences to generate human-like stories. In this paper, we propose a hierarchical deep learning architecture based on encoder-decoder networks to address this problem. To better help our network maintain this context while also generating long and diverse sentences, we incorporate natural language image descriptions along with the images themselves to generate each story sentence. We evaluate our system on the Visual Storytelling (VIST) dataset and show that our method outperforms state-of-the-art techniques on a suite of different automatic evaluation metrics. The empirical results from this evaluation demonstrate the necessity of the different components of our proposed architecture and show the effectiveness of the architecture for visual storytelling.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
147,118
2304.03228
FedBot: Enhancing Privacy in Chatbots with Federated Learning
Chatbots are mainly data-driven and usually based on utterances that might be sensitive. However, training deep learning models on shared data can violate user privacy. Such issues have commonly existed in chatbots since their inception. In the literature, there have been many approaches to deal with privacy, such as differential privacy and secure multi-party computation, but most of them need to have access to users' data. In this context, Federated Learning (FL) aims to protect data privacy through distributed learning methods that keep the data in its location. This paper presents Fedbot, a proof-of-concept (POC) privacy-preserving chatbot that leverages large-scale customer support data. The POC combines Deep Bidirectional Transformer models and federated learning algorithms to protect customer data privacy during collaborative model training. The results of the proof-of-concept showcase the potential for privacy-preserving chatbots to transform the customer support industry by delivering personalized and efficient customer service that meets data privacy regulations and legal requirements. Furthermore, the system is specifically designed to improve its performance and accuracy over time by leveraging its ability to learn from previous interactions.
false
false
false
false
true
false
true
false
true
false
false
false
true
false
false
false
false
false
356,717
2310.08866
Adaptivity and Modularity for Efficient Generalization Over Task Complexity
Can transformers generalize efficiently on problems that require dealing with examples with different levels of difficulty? We introduce a new task tailored to assess generalization over different complexities and present results that indicate that standard transformers face challenges in solving these tasks. These tasks are variations of pointer value retrieval previously introduced by Zhang et al. (2021). We investigate how the use of a mechanism for adaptive and modular computation in transformers facilitates the learning of tasks that demand generalization over the number of sequential computation steps (i.e., the depth of the computation graph). Based on our observations, we propose a transformer-based architecture called Hyper-UT, which combines dynamic function generation from hyper networks with adaptive depth from Universal Transformers. This model demonstrates higher accuracy and a fairer allocation of computational resources when generalizing to higher numbers of computation steps. We conclude that mechanisms for adaptive depth and modularity complement each other in improving efficient generalization concerning example complexity. Additionally, to emphasize the broad applicability of our findings, we illustrate that in a standard image recognition task, Hyper- UT's performance matches that of a ViT model but with considerably reduced computational demands (achieving over 70\% average savings by effectively using fewer layers).
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
399,572
1508.04040
On The Exact Recovery Condition of Simultaneous Orthogonal Matching Pursuit
Several exact recovery criteria (ERC) ensuring that orthogonal matching pursuit (OMP) identifies the correct support of sparse signals have been developed in the last few years. These ERC rely on the restricted isometry property (RIP), the associated restricted isometry constant (RIC) and sometimes the restricted orthogonality constant (ROC). In this paper, three of the most recent ERC for OMP are examined. The contribution is to show that these ERC remain valid for a generalization of OMP, entitled simultaneous orthogonal matching pursuit (SOMP), that is capable of processing several measurement vectors simultaneously and returning a common support estimate for the underlying sparse vectors. The sharpness of the bounds is also briefly discussed in light of previous works focusing on OMP.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
46,080
2405.03987
Navigating Chemical Space with Latent Flows
Recent progress of deep generative models in the vision and language domain has stimulated significant interest in more structured data generation such as molecules. However, beyond generating new random molecules, efficient exploration and a comprehensive understanding of the vast chemical space are of great importance to molecular science and applications in drug design and materials discovery. In this paper, we propose a new framework, ChemFlow, to traverse chemical space through navigating the latent space learned by molecule generative models through flows. We introduce a dynamical system perspective that formulates the problem as learning a vector field that transports the mass of the molecular distribution to the region with desired molecular properties or structure diversity. Under this framework, we unify previous approaches on molecule latent space traversal and optimization and propose alternative competing methods incorporating different physical priors. We validate the efficacy of ChemFlow on molecule manipulation and single- and multi-objective molecule optimization tasks under both supervised and unsupervised molecular discovery settings. Codes and demos are publicly available on GitHub at https://github.com/garywei944/ChemFlow.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
452,393
1905.12236
Kernel-Induced Label Propagation by Mapping for Semi-Supervised Classification
Kernel methods have been successfully applied to the areas of pattern recognition and data mining. In this paper, we mainly discuss the issue of propagating labels in kernel space. A Kernel-Induced Label Propagation (Kernel-LP) framework by mapping is proposed for high-dimensional data classification using the most informative patterns of data in kernel space. The essence of Kernel-LP is to perform joint label propagation and adaptive weight learning in a transformed kernel space. That is, our Kernel-LP changes the task of label propagation from the commonly-used Euclidean space in most existing work to kernel space. The motivation of our Kernel-LP is to propagate labels and learn the adaptive weights jointly under the assumption of an inner product space of inputs, i.e., the original linearly inseparable inputs may be mapped to be separable in kernel space. Kernel-LP is based on an existing positive and negative LP model, i.e., the effects of negative label information are integrated to improve the label prediction power. Also, Kernel-LP performs adaptive weight construction over the same kernel space, so it can avoid the tricky process of choosing the optimal neighborhood size suffered in traditional criteria. Two novel and efficient out-of-sample approaches for our Kernel-LP to involve new test data are also presented, i.e., (1) direct kernel mapping and (2) kernel mapping-induced label reconstruction, both of which purely depend on the kernel matrix between training set and testing set. Owing to the kernel trick, our algorithms will be applicable to handling high-dimensional real data. Extensive results on real datasets demonstrate the effectiveness of our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
132,704
2206.03467
Discrete State-Action Abstraction via the Successor Representation
While the difficulty of reinforcement learning problems is typically related to the complexity of their state spaces, abstraction proposes that solutions often lie in simpler underlying latent spaces. Prior works have focused on learning either a continuous or dense abstraction, or require a human to provide one. Information-dense representations capture features irrelevant for solving tasks, and continuous spaces can struggle to represent discrete objects. In this work we automatically learn a sparse discrete abstraction of the underlying environment. We do so using a simple end-to-end trainable model based on the successor representation and max-entropy regularization. We describe an algorithm to apply our model, named Discrete State-Action Abstraction (DSAA), which computes an action abstraction in the form of temporally extended actions, i.e., Options, to transition between discrete abstract states. Empirically, we demonstrate the effects of different exploration schemes on our resulting abstraction, and show that it is efficient for solving downstream tasks.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
301,296
2304.03641
A Block Coordinate Descent Method for Nonsmooth Composite Optimization under Orthogonality Constraints
Nonsmooth composite optimization with orthogonality constraints has a wide range of applications in statistical learning and data science. However, this problem is challenging due to its nonsmooth objective and computationally expensive, non-convex constraints. In this paper, we propose a new approach called \textbf{OBCD}, which leverages Block Coordinate Descent to address these challenges. \textbf{OBCD} is a feasible method with a small computational footprint. In each iteration, it updates $k$ rows of the solution matrix, where $k \geq 2$, by globally solving a small nonsmooth optimization problem under orthogonality constraints. We prove that the limiting points of \textbf{OBCD}, referred to as (global) block-$k$ stationary points, offer stronger optimality than standard critical points. Furthermore, we show that \textbf{OBCD} converges to $\epsilon$-block-$k$ stationary points with an ergodic convergence rate of $\mathcal{O}(1/\epsilon)$. Additionally, under the Kurdyka-Lojasiewicz (KL) inequality, we establish the non-ergodic convergence rate of \textbf{OBCD}. We also extend \textbf{OBCD} by incorporating breakpoint searching methods for subproblem solving and greedy strategies for working set selection. Comprehensive experiments demonstrate the superior performance of our approach across various tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
356,893
1708.06510
Handling Homographs in Neural Machine Translation
Homographs, words with different meanings but the same surface form, have long caused difficulty for machine translation systems, as it is difficult to select the correct translation based on the context. However, with the advent of neural machine translation (NMT) systems, which can theoretically take into account global sentential context, one may hypothesize that this problem has been alleviated. In this paper, we first provide empirical evidence that existing NMT systems in fact still have significant problems in properly translating ambiguous words. We then proceed to describe methods, inspired by the word sense disambiguation literature, that model the context of the input word with context-aware word embeddings that help to differentiate the word sense before feeding it into the encoder. Experiments on three language pairs demonstrate that such models improve the performance of NMT systems both in terms of BLEU score and in the accuracy of translating homographs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
79,333
2211.00523
Learning utterance-level representations through token-level acoustic latents prediction for Expressive Speech Synthesis
This paper proposes an Expressive Speech Synthesis model that utilizes token-level latent prosodic variables in order to capture and control utterance-level attributes, such as character acting voice and speaking style. Current works aim to explicitly factorize such fine-grained and utterance-level speech attributes into different representations extracted by modules that operate in the corresponding level. We show that the fine-grained latent space also captures coarse-grained information, which is more evident as the dimension of latent space increases in order to capture diverse prosodic representations. Therefore, a trade-off arises between the diversity of the token-level and utterance-level representations and their disentanglement. We alleviate this issue by first capturing rich speech attributes into a token-level latent space and then separately training a prior network that, given the input text, learns utterance-level representations in order to predict the phoneme-level, posterior latents extracted during the previous step. Both qualitative and quantitative evaluations are used to demonstrate the effectiveness of the proposed approach. Audio samples are available in our demo page.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
327,899
2310.17651
High-Dimensional Prediction for Sequential Decision Making
We study the problem of making predictions of an adversarially chosen high-dimensional state that are unbiased subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers. We give efficient algorithms for solving this problem, as well as a number of applications that stem from choosing an appropriate set of conditioning events. For example, we can efficiently make predictions targeted at polynomially many decision makers, giving each of them optimal swap regret if they best-respond to our predictions. We generalize this to online combinatorial optimization, where the decision makers have a very large action space, to give the first algorithms offering polynomially many decision makers no regret on polynomially many subsequences that may depend on their actions and the context. We apply these results to get efficient no-subsequence-regret algorithms in extensive-form games (EFGs), yielding a new family of regret guarantees for EFGs that generalizes some existing EFG regret notions, e.g. regret to informed causal deviations, and is generally incomparable to other known such notions. Next, we develop a novel transparent alternative to conformal prediction for building valid online adversarial multiclass prediction sets. We produce class scores that downstream algorithms can use for producing valid-coverage prediction sets, as if these scores were the true conditional class probabilities. We show this implies strong conditional validity guarantees including set-size-conditional and multigroup-fair coverage for polynomially many downstream prediction sets. Moreover, our class scores can be guaranteed to have improved $L_2$ loss, cross-entropy loss, and generally any Bregman loss, compared to any collection of benchmark models, yielding a high-dimensional real-valued version of omniprediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
403,228
2501.00873
Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation
Capitalizing on the complementary advantages of generative and discriminative models has always been a compelling vision in machine learning, backed by a growing body of research. This work discloses the hidden semantic structure within score-based generative models, unveiling their potential as effective discriminative priors. Inspired by our theoretical findings, we propose DUSA to exploit the structured semantic priors underlying diffusion score to facilitate the test-time adaptation of image classifiers or dense predictors. Notably, DUSA extracts knowledge from a single timestep of denoising diffusion, lifting the curse of Monte Carlo-based likelihood estimation over timesteps. We demonstrate the efficacy of our DUSA in adapting a wide variety of competitive pre-trained discriminative models on diverse test-time scenarios. Additionally, a thorough ablation study is conducted to dissect the pivotal elements in DUSA. Code is publicly available at https://github.com/BIT-DA/DUSA.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
521,833
2401.09680
Tiny Multi-Agent DRL for Twins Migration in UAV Metaverses: A Multi-Leader Multi-Follower Stackelberg Game Approach
The synergy between Unmanned Aerial Vehicles (UAVs) and metaverses is giving rise to an emerging paradigm named UAV metaverses, which create a unified ecosystem that blends physical and virtual spaces, transforming drone interaction and virtual exploration. UAV Twins (UTs), as the digital twins of UAVs that revolutionize UAV applications by making them more immersive, realistic, and informative, are deployed and updated on ground base stations, e.g., RoadSide Units (RSUs), to offer metaverse services for UAV Metaverse Users (UMUs). Due to the dynamic mobility of UAVs and the limited communication coverage of RSUs, it is essential to perform real-time UT migration to ensure seamless immersive experiences for UMUs. However, selecting appropriate RSUs and optimizing the required bandwidth is challenging for achieving reliable and efficient UT migration. To address the challenges, we propose a tiny machine learning-based Stackelberg game framework based on pruning techniques for efficient UT migration in UAV metaverses. Specifically, we formulate a multi-leader multi-follower Stackelberg model considering a new immersion metric of UMUs in the utilities of UAVs. Then, we design a Tiny Multi-Agent Deep Reinforcement Learning (Tiny MADRL) algorithm to obtain the tiny networks representing the optimal game solution. Specifically, the actor-critic network leverages the pruning techniques to reduce the number of network parameters and achieve model size and computation reduction, allowing for efficient implementation of Tiny MADRL. Numerical results demonstrate that our proposed schemes have better performance than traditional schemes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
422,343
2404.18465
M3oE: Multi-Domain Multi-Task Mixture-of Experts Recommendation Framework
Multi-domain recommendation and multi-task recommendation have demonstrated their effectiveness in leveraging common information from different domains and objectives for comprehensive user modeling. Nonetheless, the practical recommendation usually faces multiple domains and tasks simultaneously, which cannot be well-addressed by current methods. To this end, we introduce M3oE, an adaptive Multi-domain Multi-task Mixture-of-Experts recommendation framework. M3oE integrates multi-domain information, maps knowledge across domains and tasks, and optimizes multiple objectives. We leverage three mixture-of-experts modules to learn common, domain-aspect, and task-aspect user preferences respectively to address the complex dependencies among multiple domains and tasks in a disentangled manner. Additionally, we design a two-level fusion mechanism for precise control over feature extraction and fusion across diverse domains and tasks. The framework's adaptability is further enhanced by applying AutoML technique, which allows dynamic structure optimization. To the best of the authors' knowledge, our M3oE is the first effort to solve multi-domain multi-task recommendation self-adaptively. Extensive experiments on two benchmark datasets against diverse baselines demonstrate M3oE's superior performance. The implementation code is available to ensure reproducibility.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
450,279
cs/0701160
Supporting Finite Element Analysis with a Relational Database Backend, Part II: Database Design and Access
This is Part II of a three article series on using databases for Finite Element Analysis (FEA). It discusses (1) db design, (2) data loading, (3) typical use cases during grid building, (4) typical use cases during simulation (get and put), (5) typical use cases during analysis (also done in Part III) and some performance measures of these cases. It argues that using a database is simpler to implement than custom data schemas, has better performance because it can use data parallelism, and better supports FEA modularity and tool evolution because of database schema evolution, data independence, and self-defining data.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
540,100
2305.15731
A Tutorial on Holographic MIMO Communications--Part III: Open Opportunities and Challenges
Holographic multiple-input multiple-output (HMIMO) technology, which uses spatially continuous surfaces for signal transmission and reception, is envisioned to be a promising solution for improving the data rate and coverage of wireless networks. In Parts I and II of this three-part tutorial on HMIMO communications, we provided an overview of channel modeling and highlighted the state-of-the-art in holographic beamforming. In this part, we will discuss the unique properties of HMIMO systems, highlighting the open challenges and opportunities that arise as the transceiver array apertures become denser and electromagnetically larger. Additionally, we explore the interplay between HMIMO and other emerging technologies in next-generation networks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
367,750
2006.15462
Genericity and Rigidity for Slow Entropy Transformations
The notion of slow entropy, both upper and lower slow entropy, was defined by Katok and Thouvenot as a more refined measure of complexity for dynamical systems than the classical Kolmogorov-Sinai entropy. For any subexponential rate function $a_n(t)$, we prove there exists a generic class of invertible measure preserving systems such that the lower slow entropy is zero and the upper slow entropy is infinite. Also, given any subexponential rate $a_n(t)$, we show there exists a rigid, weak mixing, invertible system such that the lower slow entropy is infinite with respect to $a_n(t)$. This gives a general solution to a question on the existence of rigid transformations with positive polynomial upper slow entropy. Finally, we connect slow entropy with the notion of entropy convergence rate presented by Blume. In particular, we show slow entropy is a strictly stronger notion of complexity and give examples which have zero upper slow entropy, but also have an arbitrary sublinear positive entropy convergence rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
184,511
1912.13091
Basis Pursuit and Orthogonal Matching Pursuit for Subspace-preserving Recovery: Theoretical Analysis
Given an overcomplete dictionary $A$ and a signal $b = Ac^*$ for some sparse vector $c^*$ whose nonzero entries correspond to linearly independent columns of $A$, classical sparse signal recovery theory considers the problem of whether $c^*$ can be recovered as the unique sparsest solution to $b = A c$. It is now well-understood that such recovery is possible by practical algorithms when the dictionary $A$ is incoherent or restricted isometric. In this paper, we consider the more general case where $b$ lies in a subspace $\mathcal{S}_0$ spanned by a subset of linearly dependent columns of $A$, and the remaining columns are outside of the subspace. In this case, the sparsest representation may not be unique, and the dictionary may not be incoherent or restricted isometric. The goal is to have the representation $c$ correctly identify the subspace, i.e. the nonzero entries of $c$ should correspond to columns of $A$ that are in the subspace $\mathcal{S}_0$. Such a representation $c$ is called subspace-preserving, a key concept that has found important applications for learning low-dimensional structures in high-dimensional data. We present various geometric conditions that guarantee subspace-preserving recovery. Among them, the major results are characterized by the covering radius and the angular distance, which capture the distribution of points in the subspace and the similarity between points in the subspace and points outside the subspace, respectively. Importantly, these conditions do not require the dictionary to be incoherent or restricted isometric. By establishing that the subspace-preserving recovery problem and the classical sparse signal recovery problem are equivalent under common assumptions on the latter, we show that several of our proposed conditions are generalizations of some well-known conditions in the sparse signal recovery literature.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
159,003
1401.3868
Clause-Learning Algorithms with Many Restarts and Bounded-Width Resolution
We offer a new understanding of some aspects of practical SAT-solvers that are based on DPLL with unit-clause propagation, clause-learning, and restarts. We do so by analyzing a concrete algorithm which we claim is faithful to what practical solvers do. In particular, before making any new decision or restart, the solver repeatedly applies the unit-resolution rule until saturation, and leaves no component to the mercy of non-determinism except for some internal randomness. We prove the perhaps surprising fact that, although the solver is not explicitly designed for it, with high probability it ends up behaving as width-k resolution after no more than O(n^2k+2) conflicts and restarts, where n is the number of variables. In other words, width-k resolution can be thought of as O(n^2k+2) restarts of the unit-resolution rule with learning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
29,982
2305.06165
Searching Mobile App Screens via Text + Doodle
Locating a specific mobile application screen from existing repositories is restricted to basic keyword searches, such as Google Image Search, or necessitates a complete query screen image, as in the case of Swire. However, interactive partial sketch-based solutions like PSDoodle have limitations, including inaccuracy and an inability to consider text appearing on the screen. A potentially effective solution involves implementing a system that provides interactive partial sketching functionality for efficiently structuring user interface elements. Additionally, the system should incorporate text queries to enhance its capabilities further. Our approach, TpD, represents the pioneering effort to enable an iterative search of screens by combining interactive sketching and keyword search techniques. TpD is built on a combination of the Rico repository of approximately 58k Android app screens and the PSDoodle. Our evaluation with third-party software developers showed that PSDoodle provided higher top-10 screen retrieval accuracy than state-of-the-art Swire and required less time to complete a query than other interactive solutions.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
363,438
1304.1517
Model-based Influence Diagrams for Machine Vision
We show an approach to automated control of machine vision systems based on incremental creation and evaluation of a particular family of influence diagrams that represent hypotheses of imagery interpretation and possible subsequent processing decisions. In our approach, model-based machine vision techniques are integrated with hierarchical Bayesian inference to provide a framework for representing and matching instances of objects and relationships in imagery and for accruing probabilities to rank order conflicting scene interpretations. We extend a result of Tatman and Shachter to show that the sequence of processing decisions derived from evaluating the diagrams at each stage is the same as the sequence that would have been derived by evaluating the final influence diagram that contains all random variables created during the run of the vision system.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
23,550
2107.03815
Collaboration of Experts: Achieving 80% Top-1 Accuracy on ImageNet with 100M FLOPs
In this paper, we propose a Collaboration of Experts (CoE) framework to pool together the expertise of multiple networks towards a common aim. Each expert is an individual network with expertise on a unique portion of the dataset, which enhances the collective capacity. Given a sample, an expert is selected by the delegator, which simultaneously outputs a rough prediction to support early termination. To fulfill this framework, we propose three modules to impel each model to play its role, namely weight generation module (WGM), label generation module (LGM) and variance calculation module (VCM). Our method achieves the state-of-the-art performance on ImageNet, 80.7% top-1 accuracy with 194M FLOPs. Combined with PWLU activation function and CondConv, CoE further achieves the accuracy of 80.0% with only 100M FLOPs for the first time. More importantly, our method is hardware friendly and achieves a 3-6x speedup compared with some existing conditional computation approaches.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
245,263
2306.07576
Action Recognition with Multi-stream Motion Modeling and Mutual Information Maximization
Action recognition has long been a fundamental and intriguing problem in artificial intelligence. The task is challenging due to the high dimensionality nature of an action, as well as the subtle motion details to be considered. Current state-of-the-art approaches typically learn from articulated motion sequences in the straightforward 3D Euclidean space. However, the vanilla Euclidean space is not efficient for modeling important motion characteristics such as the joint-wise angular acceleration, which reveals the driving force behind the motion. Moreover, current methods typically attend to each channel equally and lack theoretical constraints on extracting task-relevant features from the input. In this paper, we seek to tackle these challenges from three aspects: (1) We propose to incorporate an acceleration representation, explicitly modeling the higher-order variations in motion. (2) We introduce a novel Stream-GCN network equipped with multi-stream components and channel attention, where different representations (i.e., streams) supplement each other towards a more precise action recognition while attention capitalizes on those important channels. (3) We explore feature-level supervision for maximizing the extraction of task-relevant information and formulate this into a mutual information loss. Empirically, our approach sets the new state-of-the-art performance on three benchmark datasets, NTU RGB+D, NTU RGB+D 120, and NW-UCLA. Our code is anonymously released at https://github.com/ActionR-Group/Stream-GCN, hoping to inspire the community.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
373,068
2312.08075
TERM Model: Tensor Ring Mixture Model for Density Estimation
Efficient probability density estimation is a core challenge in statistical machine learning. Tensor-based probabilistic graph methods address interpretability and stability concerns encountered in neural network approaches. However, a substantial number of potential tensor permutations can lead to a tensor network with the same structure but varying expressive capabilities. In this paper, we adopt tensor ring decomposition for the density estimator, which significantly reduces the number of permutation candidates while enhancing expressive capability compared with existing decompositions. Additionally, a mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility and comprehensiveness. Different from the prevailing directions of tensor network structure/permutation search, our approach provides a new viewpoint inspired by ensemble learning. This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations. Experiments show the superiority of the proposed approach in estimating probability density for moderately dimensional datasets and sampling to capture intricate details.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,189
2405.05567
Perfect Subset Privacy in Polynomial Computation via Reed-Muller Information Super-sets
Delegating large-scale computations to service providers is a common practice which raises privacy concerns. This paper studies information-theoretic privacy-preserving delegation of data to a service provider, who may further delegate the computation to auxiliary worker nodes, in order to compute a polynomial over that data at a later point in time. We study techniques which are compatible with robust management of distributed computation systems, an area known as coded computing. Privacy in coded computing, however, has traditionally addressed the problem of colluding workers, and assumed that the server that administrates the computation is trusted. This viewpoint of privacy does not accurately reflect real-world privacy concerns, since normally, the service provider as a whole (i.e., the administrator and the worker nodes) form one cohesive entity which itself poses a privacy risk. This paper aims to shift the focus of privacy in coded computing to safeguarding the privacy of the user against the service provider as a whole, instead of merely against colluding workers inside the service provider. To this end, we leverage the recently defined notion of perfect subset privacy, which guarantees zero information leakage from all subsets of the data up to a certain size. Using known techniques from Reed-Muller decoding, we provide a scheme which enables polynomial computation with perfect subset privacy in straggler-free systems. Furthermore, by studying information super-sets in Reed-Muller codes, which may be of independent interest, we extend the previous scheme to tolerate straggling worker nodes inside the service provider.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
452,972
2006.13295
A critique of the Mean Field Approximation in preferential attachment networks
The Mean Field Approximation (MFA), or continuum method, is often used in courses on Networks to derive the degree distribution of preferential attachment networks. This method is simple and the outcome is close to the correct answer. However, this paper shows that the method is flawed in several aspects, leading to unresolvable contradictions. More importantly, the MFA is not explicitly derived from a mathematical model. An analysis of the implied model shows that it makes an approximation which is far from the truth and another one which cannot be motivated in general. The success of the MFA for preferential attachment networks is therefore accidental and the method is not suitable for teaching undergraduates.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
183,852
1409.7556
Location Recognition Over Large Time Lags
Would it be possible to automatically associate ancient pictures to modern ones and create fancy cultural heritage city maps? We introduce here the task of recognizing the location depicted in an old photo given modern annotated images collected from the Internet. We present an extensive analysis on different features, looking for the most discriminative and most robust to the image variability induced by large time lags. Moreover, we show that the described task benefits from domain adaptation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
36,336
1501.06339
Average power limitations in Sliding Window Contention Resolution Diversity Slotted Aloha
Recently a new Random Access technique based on Aloha and using Interference Cancellation (IC) named Sliding Window Contention Resolution Diversity Slotted Aloha (SW-CRDSA) has been introduced. Unlike classic CRDSA, which operates by grouping slots in frames, this technique operates in an unframed manner, yielding better throughput and a smaller average packet delay than frame-based CRDSA. However, like classic CRDSA, SW-CRDSA relies on multiple transmissions of the same packet. While this can be acceptable in systems where the only limit resides in the peak transmission power, it could represent a problem when constraints on the average power (e.g. at the transponder of a satellite system) are present. In this paper, a comparison in terms of normalized efficiency is carried out between Slotted Aloha and the two CRDSA techniques.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
39,608
2105.07558
fybrrStream: A WebRTC based Efficient and Scalable P2P Live Streaming Platform
The demand for streaming media and live video conferencing is at its peak and expected to grow further, making low-cost streaming services with better quality and lower latency essential. Therefore, in this paper, we propose a novel peer-to-peer (P2P) live streaming platform, called fybrrStream, where a logical mesh and physical tree, i.e., hybrid topology-based approach is leveraged for low latency streaming. fybrrStream distributes the load on participating peers in a hierarchical manner by considering their network bandwidth, network latency, and node stability. fybrrStream costs as little as hosting a light-weight website, and its performance is comparable to existing state-of-the-art media streaming services. We evaluated and tested the proposed fybrrStream platform with real-field experiments using 50+ users spread across India, and the results obtained show significant improvements in live streaming performance over other schemes.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
235,471
2406.04490
User Intent Recognition and Semantic Cache Optimization-Based Query Processing Framework using CFLIS and MGR-LAU
Query Processing (QP) is optimized by a Cloud-based cache by storing the frequently accessed data closer to users. Nevertheless, the lack of focus on user intention type in queries affected the efficiency of QP in prevailing works. Thus, by using a Contextual Fuzzy Linguistic Inference System (CFLIS), this work analyzed the informational, navigational, and transactional-based intents in queries for enhanced QP. Primarily, the user query is parsed using tokenization, normalization, stop word removal, stemming, and POS tagging and then expanded using the WordNet technique. After expanding the queries, to enhance query understanding and to facilitate more accurate analysis and retrieval in query processing, the named entity is recognized using Bidirectional Encoder UnispecNorm Representations from Transformers (BEUNRT). Next, for efficient QP and retrieval of query information from the semantic cache database, the data is structured using Epanechnikov Kernel-Ordering Points To Identify the Clustering Structure (EK-OPTICS). The features are extracted from the structured data. Now, sentence type is identified and intent keywords are extracted from the parsed query. Next, the extracted features, detected intents and structured data are inputted to the Multi-head Gated Recurrent Learnable Attention Unit (MGR-LAU), which processes the query based on a semantic cache database (stores previously interpreted queries to expedite effective future searches). Moreover, the query is processed with a minimum latency of 12856ms. Lastly, the Semantic Similarity (SS) is analyzed between the retrieved query and the inputted user query, which continues until the similarity reaches 0.9 and above. Thus, the proposed work surpassed the previous methodologies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
461,703
2109.12445
Algorithmic Information Design in Multi-Player Games: Possibility and Limits in Singleton Congestion
Most algorithmic studies on multi-agent information design so far have focused on the restricted situation with no inter-agent externalities; a few exceptions investigated truly strategic games such as zero-sum games and second-price auctions but have all focused only on optimal public signaling. This paper initiates the algorithmic information design of both \emph{public} and \emph{private} signaling in a fundamental class of games with negative externalities, i.e., singleton congestion games, with wide application in today's digital economy, machine scheduling, routing, etc. For both public and private signaling, we show that the optimal information design can be efficiently computed when the number of resources is a constant. To our knowledge, this is the first set of efficient \emph{exact} algorithms for information design in succinctly representable many-player games. Our results hinge on novel techniques such as developing certain "reduced forms" to compactly characterize equilibria in public signaling or to represent players' marginal beliefs in private signaling. When there are many resources, we show computational intractability results. To overcome the issue of multiple equilibria, here we introduce a new notion of equilibrium-\emph{oblivious} hardness, which rules out any possibility of computing a good signaling scheme, irrespective of the equilibrium selection rule.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
257,292
2204.04028
A Generic Image Retrieval Method for Date Estimation of Historical Document Collections
Date estimation of historical document images is a challenging problem, with several contributions in the literature that lack the ability to generalize from one dataset to others. This paper presents a robust date estimation system based on a retrieval approach that generalizes well across heterogeneous collections. We use a ranking loss function named smooth-nDCG to train a Convolutional Neural Network that learns an ordination of documents for each problem. One of the main usages of the presented approach is as a tool for historical contextual retrieval. It means that scholars could perform comparative analysis of historical images from big datasets in terms of the period in which they were produced. We provide experimental evaluation on different types of documents from real datasets of manuscript and newspaper images.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
true
290,513
2210.01625
Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model
Recently, there has been a trend of shifting the execution of deep learning inference tasks toward the edge of the network, closer to the user, to reduce latency and preserve data privacy. At the same time, growing interest is being devoted to the energetic sustainability of machine learning. At the intersection of these trends, we hence find the energetic characterization of machine learning at the edge, which is attracting increasing attention. Unfortunately, calculating the energy consumption of a given neural network during inference is complicated by the heterogeneity of the possible underlying hardware implementation. In this work, we hence aim at profiling the energy consumption of inference tasks for some modern edge nodes and deriving simple but realistic models. To this end, we performed a large number of experiments to collect the energy consumption of convolutional and fully connected layers on two well-known edge boards by NVIDIA, namely Jetson TX2 and Xavier. From the measurements, we have then distilled a simple, practical model that can provide an estimate of the energy consumption of a certain inference task on the considered boards. We believe that this model can be used in many contexts as, for instance, to guide the search for efficient architectures in Neural Architecture Search, as a heuristic in Neural Network pruning, or to find energy-efficient offloading strategies in a Split computing context, or simply to evaluate the energetic performance of Deep Neural Network architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,329
1504.03212
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial efforts. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example for such an update mechanism is the one-fifth success rule for step-size adaption in evolutionary strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the $(1+(\lambda,\lambda))$~GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule then the expected optimization time on \textsc{OneMax} is linear. This is better than what \emph{any} static population size $\lambda$ can achieve and is asymptotically optimal also among all adaptive parameter choices.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
42,006
1705.07355
Generalized Degrees Freedom of Noncoherent MIMO Channels with Asymmetric Link Strengths
We study the generalized degrees of freedom (gDoF) of block-fading noncoherent multiple input multiple output (MIMO) channels with asymmetric distributions of link strengths and a coherence time of T symbol durations. We derive the optimal signaling structure for communication for the asymmetric MIMO channel, which is distinct from that for the MIMO channel with independent and identically distributed (i.i.d.) links. We extend the existing results for the single input multiple output (SIMO) channel with i.i.d. links to the asymmetric case, proving that selecting the statistically best antenna is gDoF-optimal. Using the gDoF result for the SIMO channel, we prove that for T=1, the gDoF is zero for MIMO channels with arbitrary link strengths, extending the result for MIMO with i.i.d. links. We show that selecting the statistically best antenna is gDoF-optimal for the multiple input single output (MISO) channel. We also derive the gDoF for the 2X2 MIMO channel with different exponents in the direct and cross links. In this setting, we show that it is always necessary to use both the antennas to achieve the gDoF, in contrast to the results for the 2X2 MIMO channel with i.i.d. links. We show that having weaker crosslinks gives a gDoF gain compared to the case with i.i.d. links. For the noncoherent MIMO channel with i.i.d. links, the traditional method of training each transmit antenna independently is degrees of freedom (DoF) optimal, whereas we observe that for the asymmetric 2X2 MIMO channel, traditional training is not gDoF-optimal. We extend this observation to a larger MxM MIMO channel by demonstrating a strategy that can achieve a larger gDoF than a traditional training-based method.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
73,818
2407.03302
A Review of the Applications of Deep Learning-Based Emergent Communication
Emergent communication, or emergent language, is the field of research which studies how human language-like communication systems emerge de novo in deep multi-agent reinforcement learning environments. The possibilities of replicating the emergence of a complex behavior like language have strong intuitive appeal, yet it is necessary to complement this with clear notions of how such research can be applicable to other fields of science, technology, and engineering. This paper comprehensively reviews the applications of emergent communication research across machine learning, natural language processing, linguistics, and cognitive science. Each application is illustrated with a description of its scope, an explication of emergent communication's unique role in addressing it, a summary of the extant literature working towards the application, and brief recommendations for near-term research directions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
470,108
2408.06790
Residual Deep Reinforcement Learning for Inverter-based Volt-Var Control
A residual deep reinforcement learning (RDRL) approach is proposed by integrating DRL with model-based optimization for inverter-based volt-var control in active distribution networks when the accurate power flow model is unknown. RDRL learns a residual action with a reduced residual action space, based on the action of the model-based approach with an approximate model. RDRL inherits the control capability of the approximate-model-based optimization and enhances the policy optimization capability by residual policy learning. Additionally, it improves the approximation accuracy of the critic and reduces the search difficulties of the actor by reducing residual action space. To address the issues of "too small" or "too large" residual action space of RDRL and further improve the optimization performance, we extend RDRL to a boosting RDRL approach. It selects a much smaller residual action space and learns a residual policy by using the policy of RDRL as a base policy. Simulations demonstrate that RDRL and boosting RDRL improve the optimization performance considerably throughout the learning stage and verify their rationales point-by-point, including 1) inheriting the capability of the approximate model-based optimization, 2) residual policy learning, and 3) learning in a reduced action space.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
480,346
2106.11528
Recent Deep Semi-supervised Learning Approaches and Related Works
This work proposes an overview of the recent semi-supervised learning approaches and related works. Despite the remarkable success of neural networks in various applications, there exist a few formidable constraints, including the need for a large amount of labeled data. Therefore, semi-supervised learning, which is a learning scheme in which scarce labels and a larger amount of unlabeled data are utilized to train models (e.g., deep neural networks), is getting more important. Based on the key assumptions of semi-supervised learning, which are the manifold assumption, cluster assumption, and continuity assumption, the work reviews the recent semi-supervised learning approaches. In particular, the methods in regard to using deep neural networks in a semi-supervised learning setting are primarily discussed. In addition, the existing works are first classified based on the underlying idea and explained, then the holistic approaches that unify the aforementioned ideas are detailed.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
242,431
2206.05514
Toward Real-world Single Image Deraining: A New Benchmark and Beyond
Single image deraining (SID) in real scenarios has attracted increasing attention in recent years. Due to the difficulty in obtaining real-world rainy/clean image pairs, previous real datasets suffer from low-resolution images, homogeneous rain streaks, limited background variation, and even misalignment of image pairs, resulting in incomprehensive evaluation of SID methods. To address these issues, we establish a new high-quality dataset named RealRain-1k, consisting of $1,120$ high-resolution paired clean and rainy images with low- and high-density rain streaks, respectively. Images in RealRain-1k are automatically generated from a large number of real-world rainy video clips through a simple yet effective rain density-controllable filtering method, and have good properties of high image resolution, background diversity, rain streaks variety, and strict spatial alignment. RealRain-1k also provides abundant rain streak layers as a byproduct, enabling us to build a large-scale synthetic dataset named SynRain-13k by pasting the rain streak layers on abundant natural images. Based on them and existing datasets, we benchmark more than 10 representative SID methods on three tracks: (1) fully supervised learning on RealRain-1k, (2) domain generalization to real datasets, and (3) syn-to-real transfer learning. The experimental results (1) show the difference of representative methods in image restoration performance and model complexity, (2) validate the significance of the proposed datasets for model generalization, and (3) provide useful insights on the superiority of learning from diverse domains and shed light on the future research on real-world SID. The datasets will be released at https://github.com/hiker-lw/RealRain-1k
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
302,032
2306.04954
RE-Matching: A Fine-Grained Semantic Matching Method for Zero-Shot Relation Extraction
Semantic matching is a mainstream paradigm of zero-shot relation extraction, which matches a given input with a corresponding label description. The entities in the input should exactly match their hypernyms in the description, while the irrelevant contexts should be ignored when matching. However, general matching methods lack explicit modeling of the above matching pattern. In this work, we propose a fine-grained semantic matching method tailored for zero-shot relation extraction. Following the above matching pattern, we decompose the sentence-level similarity score into entity and context matching scores. Due to the lack of explicit annotations of the redundant components, we design a feature distillation module to adaptively identify the relation-irrelevant features and reduce their negative impact on context matching. Experimental results show that our method achieves higher matching $F_1$ score and has an inference speed 10 times faster, when compared with the state-of-the-art methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
371,986
2306.10364
Residual Spatial Fusion Network for RGB-Thermal Semantic Segmentation
Semantic segmentation plays an important role in widespread applications such as autonomous driving and robotic sensing. Traditional methods mostly use RGB images which are heavily affected by lighting conditions, \eg, darkness. Recent studies show thermal images are robust to the night scenario as a compensating modality for segmentation. However, existing works either simply fuse RGB-Thermal (RGB-T) images or adopt the encoder with the same structure for both the RGB stream and the thermal stream, which neglects the modality difference in segmentation under varying lighting conditions. Therefore, this work proposes a Residual Spatial Fusion Network (RSFNet) for RGB-T semantic segmentation. Specifically, we employ an asymmetric encoder to learn the compensating features of the RGB and the thermal images. To effectively fuse the dual-modality features, we generate the pseudo-labels by saliency detection to supervise the feature learning, and develop the Residual Spatial Fusion (RSF) module with structural re-parameterization to learn more promising features by spatially fusing the cross-modality features. RSF employs a hierarchical feature fusion to aggregate multi-level features, and applies the spatial weights with the residual connection to adaptively control the multi-spectral feature fusion by the confidence gate. Extensive experiments were carried out on two benchmarks, \ie, MFNet database and PST900 database. The results have shown the state-of-the-art segmentation performance of our method, which achieves a good balance between accuracy and speed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
374,202
2311.12842
Multimodal Identification of Alzheimer's Disease: A Review
Alzheimer's disease is a progressive neurological disorder characterized by cognitive impairment and memory loss. With the increasing aging population, the incidence of AD is continuously rising, making early diagnosis and intervention an urgent need. In recent years, a considerable number of teams have applied computer-aided diagnostic techniques to early classification research of AD. Most studies have utilized imaging modalities such as magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalogram (EEG). However, there have also been studies that attempted to use other modalities as input features for the models, such as sound, posture, biomarkers, cognitive assessment scores, and their fusion. Experimental results have shown that the combination of multiple modalities often leads to better performance compared to a single modality. Therefore, this paper will focus on different modalities and their fusion, thoroughly elucidate the mechanisms of various modalities, explore which methods should be combined to better harness their utility, analyze and summarize the literature in the field of early classification of AD in recent years, in order to explore more possibilities of modality combinations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,509
2409.10323
On the Hardness of Meaningful Local Guarantees in Nonsmooth Nonconvex Optimization
We study the oracle complexity of nonsmooth nonconvex optimization, with the algorithm assumed to have access only to local function information. It has been shown by Davis, Drusvyatskiy, and Jiang (2023) that for nonsmooth Lipschitz functions satisfying certain regularity and strictness conditions, perturbed gradient descent converges to local minimizers asymptotically. Motivated by this result and by other recent algorithmic advances in nonconvex nonsmooth optimization concerning Goldstein stationarity, we consider the question of obtaining a non-asymptotic rate of convergence to local minima for this problem class. We provide the following negative answer to this question: Local algorithms acting on regular Lipschitz functions cannot, in the worst case, provide meaningful local guarantees in terms of function value in sub-exponential time, even when all near-stationary points are global minima. This sharply contrasts with the smooth setting, for which it is well-known that standard gradient methods can do so in a dimension-independent rate. Our result complements the rich body of work in the theoretical computer science literature that provide hardness results conditional on conjectures such as $\mathsf{P}\neq\mathsf{NP}$ or cryptographic assumptions, in that ours holds unconditional of any such assumptions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
488,702
2412.11379
Controllable Distortion-Perception Tradeoff Through Latent Diffusion for Neural Image Compression
Neural image compression often faces a challenging trade-off among rate, distortion and perception. While most existing methods typically focus on either achieving high pixel-level fidelity or optimizing for perceptual metrics, we propose a novel approach that simultaneously addresses both aspects for a fixed neural image codec. Specifically, we introduce a plug-and-play module at the decoder side that leverages a latent diffusion process to transform the decoded features, enhancing either low distortion or high perceptual quality without altering the original image compression codec. Our approach facilitates fusion of original and transformed features without additional training, enabling users to flexibly adjust the balance between distortion and perception during inference. Extensive experimental results demonstrate that our method significantly enhances the pretrained codecs with a wide, adjustable distortion-perception range while maintaining their original compression capabilities. For instance, we can achieve more than 150% improvement in LPIPS-BDRate without sacrificing more than 1 dB in PSNR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
517,374
2209.03353
Learned Image Compression with Generalized Octave Convolution and Cross-Resolution Parameter Estimation
The application of the context-adaptive entropy model significantly improves the rate-distortion (R-D) performance, in which hyperpriors and autoregressive models are jointly utilized to effectively capture the spatial redundancy of the latent representations. However, the latent representations still contain some spatial correlations. In addition, these methods based on the context-adaptive entropy model cannot be accelerated in the decoding process by parallel computing devices, e.g. FPGA or GPU. To alleviate these limitations, we propose a learned multi-resolution image compression framework, which exploits the recently developed octave convolutions to factorize the latent representations into the high-resolution (HR) and low-resolution (LR) parts, similar to wavelet transform, which further improves the R-D performance. To speed up the decoding, our scheme does not use the context-adaptive entropy model. Instead, we exploit an additional hyper layer including hyper encoder and hyper decoder to further remove the spatial redundancy of the latent representation. Moreover, the cross-resolution parameter estimation (CRPE) is introduced into the proposed framework to enhance the flow of information and further improve the rate-distortion performance. An additional information-fidelity loss is added to the total loss function to adjust the contribution of the LR part to the final bit stream. Experimental results show that our method reduces the decoding time by approximately 73.35% and 93.44% compared with that of state-of-the-art learned image compression methods, and the R-D performance is still better than H.266/VVC(4:2:0) and some learning-based methods on both PSNR and MS-SSIM metrics across a wide range of bit rates.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
316,477
2203.12891
An Ensemble Approach for Facial Expression Analysis in Video
Human emotion recognition contributes to the development of human-computer interaction. Machines that understand human emotions in the real world will significantly contribute to life in the future. This paper addresses the Affective Behavior Analysis in-the-wild (ABAW3) 2022 challenge. The paper focuses on solving the problem of valence-arousal estimation and action unit detection. For valence-arousal estimation, we conducted two stages: creating new features from multimodal and temporal learning to predict valence-arousal. First, we make new features; the Gated Recurrent Unit (GRU) and Transformer are combined using a Regular Networks (RegNet) feature, which is extracted from the image. The next step is the GRU combined with Local Attention to predict valence-arousal. The Concordance Correlation Coefficient (CCC) was used to evaluate the model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
287,434
1906.05990
Divide and Conquer the Embedding Space for Metric Learning
Learning the embedding space, where semantically similar objects are located close together and dissimilar objects far apart, is a cornerstone of many computer vision applications. Existing approaches usually learn a single metric in the embedding space for all available data points, which may have a very complex non-uniform distribution with different notions of similarity between objects, e.g. appearance, shape, color or semantic meaning. Approaches for learning a single distance metric often struggle to encode all different types of relationships and do not generalize well. In this work, we propose a novel easy-to-implement divide and conquer approach for deep metric learning, which significantly improves the state-of-the-art performance of metric learning. Our approach utilizes the embedding space more efficiently by jointly splitting the embedding space and data into $K$ smaller sub-problems. It divides both, the data and the embedding space into $K$ subsets and learns $K$ separate distance metrics in the non-overlapping subspaces of the embedding space, defined by groups of neurons in the embedding layer of the neural network. The proposed approach increases the convergence speed and improves generalization since the complexity of each sub-problem is reduced compared to the original one. We show that our approach outperforms the state-of-the-art by a large margin in retrieval, clustering and re-identification tasks on CUB200-2011, CARS196, Stanford Online Products, In-shop Clothes and PKU VehicleID datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
135,179
1908.09931
Multi-stage Deep Classifier Cascades for Open World Recognition
At present, object recognition studies are mostly conducted in a closed lab setting, with the classes in the test phase typically also present in the training phase. However, the real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the "independent and identically distributed" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
142,970
2009.02704
Deep Learning for Automatic Spleen Length Measurement in Sickle Cell Disease Patients
Sickle Cell Disease (SCD) is one of the most common genetic diseases in the world. Splenomegaly (abnormal enlargement of the spleen) is frequent among children with SCD. If left untreated, splenomegaly can be life-threatening. The current workflow to measure spleen size includes palpation, possibly followed by manual length measurement in 2D ultrasound imaging. However, this manual measurement is dependent on operator expertise and is subject to intra- and inter-observer variability. We investigate the use of deep learning to perform automatic estimation of spleen length from ultrasound images. We compare two types of approach, one segmentation-based and one based on direct length estimation, and evaluate the results against measurements made by human experts. Our best model (segmentation-based) achieved a percentage length error of 7.42%, which approaches the level of inter-observer variability (5.47%-6.34%). To the best of our knowledge, this is the first attempt to measure spleen size in a fully automated way from ultrasound images.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
194,630
1902.03187
Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target local representations in a Spiking Neural Network (SNN) to adapt to novel information while protecting essential information in the remainder of the SNN from catastrophic forgetting. In our Controlled Forgetting Networks (CFNs), novel information triggers stimulated firing and heterogeneously modulated plasticity, inspired by biological dopamine signals, to cause rapid and isolated adaptation in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces the degradation of accuracy for older tasks while learning new tasks. Our experimental results on the MNIST dataset validate the capability of CFNs to learn successfully over time from an unknown, changing environment, achieving 95.36% accuracy, which we believe is the best unsupervised accuracy ever achieved by a fixed-size, single-layer SNN on a completely disjoint MNIST dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
121,042
2109.10657
Beamforming Design for IRS-Aided Decode-and-Forward Relay Wireless Network
As a low-cost, low-power-consumption passive reflector, an intelligent reflecting surface (IRS) can achieve a significant rate improvement by building a programmable wireless environment. To improve the rate performance and coverage range of wireless networks, an IRS-aided decode-and-forward (DF) relay network is proposed with multiple antennas at the relay station (RS). To achieve a high rate, an alternately iterative structure (AIS) for maximizing receive power (Max-RP) at the RS is proposed to jointly optimize the beamforming vectors at the RS and the phase shifts at the IRS. Considering its high complexity, two low-complexity Max-RP schemes, null-space projection (NSP) plus maximum ratio combining (MRC) and IRS element selection (IRSES) plus MRC, are presented to reduce this complexity. In the former, NSP is used to separate the reflected signal from the IRS and the direct signal from the source, and MRC is adopted to combine the two signals at the RS. In the latter, the basic concept of IRSES is as follows: the IRS is partitioned into M subsets of elements, and adjusting the phases of all elements in each subset achieves phase alignment (PA) of all reflected signals and the direct signal from the source at the corresponding relay antenna. Simulation results show that the proposed three methods perform much better than the existing network with a single-antenna relay in terms of rate performance. In particular, an 85% rate gain over the existing scheme is achieved in the high signal-to-noise ratio region. Moreover, it is verified that the positions of the RS and the IRS have a substantial impact on rate performance, and that optimal positions of the RS and the IRS exist.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
256,700
2212.02196
FedUKD: Federated UNet Model with Knowledge Distillation for Land Use Classification from Satellite and Street Views
Federated Deep Learning frameworks can be used strategically to monitor land use locally and infer environmental impacts globally. Distributed data from across the world would be needed to build a global model for land use classification. The motivation for a federated approach in this application domain is to avoid transferring data from distributed locations and to save network bandwidth, thereby reducing communication cost. We use a federated UNet model for semantic segmentation of satellite and street view images. The novelty of the proposed architecture is the integration of knowledge distillation to reduce communication cost and response time. The accuracy obtained was above 95%, and we also achieved significant model compression of over 17 times and 62 times for street view and satellite images, respectively. Our proposed framework has the potential to be a game-changer in real-time tracking of climate change across the planet.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
334,716
2208.10273
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning
Federated learning offers a framework for training a machine learning model in a distributed fashion while preserving the privacy of the participants. As the server cannot govern the clients' actions, nefarious clients may attack the global model by sending malicious local gradients. At the same time, there may also be unreliable clients that are benign but each hold a portion of low-quality training data (e.g., blurry or low-resolution images), and thus may appear similar to malicious clients. Therefore, a defense mechanism needs to perform a three-fold differentiation, which is much more challenging than the conventional (two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that addresses this challenge in federated learning using a long-short history of gradients, and treats the detected malicious and unreliable clients differently. Moreover, we can also distinguish between targeted and untargeted attacks among malicious clients, unlike most prior works which only consider one type of attack. Specifically, we take into account sign-flipping, additive-noise, label-flipping, and multi-label-flipping attacks, under a non-IID setting. We evaluate MUD-HoG against six state-of-the-art methods on two datasets. The results show that MUD-HoG outperforms all of them in terms of accuracy as well as precision and recall, in the presence of a mixture of multiple (four) types of attackers as well as unreliable clients. Moreover, unlike most prior works which can only tolerate a low population of harmful users, MUD-HoG can work with and successfully detect a wide range of malicious and unreliable clients - up to 47.5% and 10%, respectively, of the total population. Our code is open-sourced at https://github.com/LabSAINT/MUD-HoG_Federated_Learning.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
314,000
1702.02382
An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks
We propose a method for semi-supervised training of structured-output neural networks. Inspired by the framework of Generative Adversarial Networks (GAN), we train a discriminator network to capture a notion of the quality of the network's output. To this end, we leverage the qualitative difference between outputs obtained on the labelled training data and on unannotated data. We then use the discriminator as a source of error signal for the unlabelled data. This effectively boosts the performance of the network on a held-out test set. Initial experiments in image segmentation demonstrate that the proposed framework enables achieving the same network performance as in a fully supervised scenario, while using half as many annotations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
67,969