Dataset schema (column name, value type, min-max length):

id               stringlengths   9 - 10
submitter        stringlengths   1 - 64
authors          stringlengths   4 - 20.7k
title            stringlengths   4 - 246
comments         stringlengths   1 - 523
journal-ref      stringlengths   4 - 404
doi              stringlengths   11 - 153
report-no        stringlengths   2 - 254
categories       stringlengths   5 - 98
license          stringclasses   9 values
orig_abstract    stringlengths   14 - 3.35k
versions         listlengths     1 - 60
update_date      stringlengths   10 - 10
authors_parsed   listlengths     1 - 1.35k
abstract         stringlengths   11 - 3.34k
2008.06030
Nicolas Rougier
Nicolas P. Rougier
On the design of text editors
5 pages, 5 figures, conference
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Text editors are written by and for developers. They come with a large set of default and implicit choices in terms of layout, typography, colorization and interaction that hardly change from one editor to another. It is not clear whether these implicit choices derive from ignorance of alternatives or from developers' habits, reproducing what they are used to. The goal of this article is to characterize these implicit choices and to illustrate some alternatives, without prescribing one or the other.
[ { "created": "Thu, 13 Aug 2020 17:40:48 GMT", "version": "v1" }, { "created": "Thu, 3 Sep 2020 09:51:05 GMT", "version": "v2" } ]
2020-09-04
[ [ "Rougier", "Nicolas P.", "" ] ]
Text editors are written by and for developers. They come with a large set of default and implicit choices in terms of layout, typography, colorization and interaction that hardly change from one editor to another. It is not clear whether these implicit choices derive from ignorance of alternatives or from developers' habits, reproducing what they are used to. The goal of this article is to characterize these implicit choices and to illustrate some alternatives, without prescribing one or the other.
1006.0991
Noam Shazeer
Georges Harik and Noam Shazeer
Variational Program Inference
null
null
null
HSL-000001
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a framework for representing a variety of interesting problems as inference over the execution of probabilistic model programs. We represent a "solution" to such a problem as a guide program which runs alongside the model program and influences the model program's random choices, leading the model program to sample from a different distribution than from its priors. Ideally the guide program influences the model program to sample from the posteriors given the evidence. We show how the KL-divergence between the true posterior distribution and the distribution induced by the guided model program can be efficiently estimated (up to an additive constant) by sampling multiple executions of the guided model program. In addition, we show how to use the guide program as a proposal distribution in importance sampling to statistically prove lower bounds on the probability of the evidence and on the probability of a hypothesis and the evidence. We can use the quotient of these two bounds as an estimate of the conditional probability of the hypothesis given the evidence. We thus turn the inference problem into a heuristic search for better guide programs.
[ { "created": "Fri, 4 Jun 2010 20:55:04 GMT", "version": "v1" } ]
2010-06-08
[ [ "Harik", "Georges", "" ], [ "Shazeer", "Noam", "" ] ]
We introduce a framework for representing a variety of interesting problems as inference over the execution of probabilistic model programs. We represent a "solution" to such a problem as a guide program which runs alongside the model program and influences the model program's random choices, leading the model program to sample from a different distribution than from its priors. Ideally the guide program influences the model program to sample from the posteriors given the evidence. We show how the KL-divergence between the true posterior distribution and the distribution induced by the guided model program can be efficiently estimated (up to an additive constant) by sampling multiple executions of the guided model program. In addition, we show how to use the guide program as a proposal distribution in importance sampling to statistically prove lower bounds on the probability of the evidence and on the probability of a hypothesis and the evidence. We can use the quotient of these two bounds as an estimate of the conditional probability of the hypothesis given the evidence. We thus turn the inference problem into a heuristic search for better guide programs.
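The guide-as-proposal idea in this abstract can be illustrated with a toy sketch (not the paper's implementation): a two-valued latent "model program", a hypothetical guide distribution used as an importance-sampling proposal, and Monte Carlo estimates of the evidence probability together with its Jensen (geometric-mean) lower bound. All distributions and numbers below are invented for illustration.

```python
import math
import random

random.seed(0)

# Toy "model program": a latent coin z with prior p(z=1) = 0.3, and
# observed evidence e with likelihood p(e|z=0) = 0.1, p(e|z=1) = 0.8.
PRIOR = {0: 0.7, 1: 0.3}
LIK = {0: 0.1, 1: 0.8}

# Hypothetical "guide program": proposes z=1 with probability 0.8,
# close to the true posterior p(z=1|e) = 0.24 / 0.31 ~ 0.774.
GUIDE = {0: 0.2, 1: 0.8}

def importance_weights(n):
    """Sample the guided model n times; return weights w = p(z, e) / q(z)."""
    ws = []
    for _ in range(n):
        z = 1 if random.random() < GUIDE[1] else 0
        ws.append(PRIOR[z] * LIK[z] / GUIDE[z])
    return ws

ws = importance_weights(50_000)

# Unbiased importance-sampling estimate of the evidence p(e).
evidence_est = sum(ws) / len(ws)

# Jensen (geometric-mean) lower bound: exp(mean log w) <= mean w.
elbo = math.exp(sum(math.log(w) for w in ws) / len(ws))

exact = 0.7 * 0.1 + 0.3 * 0.8  # p(e) = 0.31, computable exactly in this toy
```

The better the guide matches the posterior, the lower the variance of the weights and the tighter the lower bound, which is what makes "search for better guide programs" a sensible inference strategy.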
1705.10706
Georgios Amanatidis
Georgios Amanatidis, Georgios Birmpas, George Christodoulou, Evangelos Markakis
Truthful Allocation Mechanisms Without Payments: Characterization and Implications on Fairness
To appear in the 18th ACM Conference on Economics and Computation (ACM EC '17)
null
10.1145/3033274.3085147
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
[ { "created": "Tue, 30 May 2017 15:42:32 GMT", "version": "v1" } ]
2017-05-31
[ [ "Amanatidis", "Georgios", "" ], [ "Birmpas", "Georgios", "" ], [ "Christodoulou", "George", "" ], [ "Markakis", "Evangelos", "" ] ]
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
1902.09101
Jing Jiang
Jing Jiang, Xiaojing Wang, Guftaar Ahmad Sardar Sidhu, Li Zhen, Runchen Gao
Clustering-Based Codebook Design for MIMO Communication System
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Codebook design is one of the core technologies in limited-feedback multi-input multi-output (MIMO) communication systems. However, conventional codebook designs usually assume that MIMO vectors are uniformly distributed or isotropic. Motivated by the excellent classification and analysis ability of clustering algorithms, we propose a K-means clustering based codebook design. First, large amounts of channel state information (CSI) are stored as the input data for clustering and then divided into N clusters according to the minimal distance. The cluster centroids are used as the statistical channel information for codebook construction, minimizing the total distance to the real channel information. Simulation results are consistent with the theoretical analysis in terms of the achievable rate, and demonstrate that the proposed codebook design outperforms conventional schemes, especially in scenarios with non-uniformly distributed channels.
[ { "created": "Mon, 25 Feb 2019 06:14:42 GMT", "version": "v1" } ]
2019-02-26
[ [ "Jiang", "Jing", "" ], [ "Wang", "Xiaojing", "" ], [ "Sidhu", "Guftaar Ahmad Sardar", "" ], [ "Zhen", "Li", "" ], [ "Gao", "Runchen", "" ] ]
Codebook design is one of the core technologies in limited-feedback multi-input multi-output (MIMO) communication systems. However, conventional codebook designs usually assume that MIMO vectors are uniformly distributed or isotropic. Motivated by the excellent classification and analysis ability of clustering algorithms, we propose a K-means clustering based codebook design. First, large amounts of channel state information (CSI) are stored as the input data for clustering and then divided into N clusters according to the minimal distance. The cluster centroids are used as the statistical channel information for codebook construction, minimizing the total distance to the real channel information. Simulation results are consistent with the theoretical analysis in terms of the achievable rate, and demonstrate that the proposed codebook design outperforms conventional schemes, especially in scenarios with non-uniformly distributed channels.
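The clustering step this abstract describes can be sketched with a generic plain-Python k-means on synthetic two-dimensional "CSI" samples; this is not the authors' code, and the data distribution and cluster count are invented for the example.

```python
import random

random.seed(1)

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of points."""
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, k, iters=30):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster (kept in place if empty)."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[j].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Synthetic non-uniform "CSI" samples: two dense clumps rather than an
# isotropic cloud (a stand-in for stored channel vectors; invented data).
csi = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(200)]
       + [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(200)])

# The cluster centroids serve as the codebook entries.
codebook = kmeans(csi, k=4)
```

On non-uniform data like this, centroids concentrate where the channel vectors actually lie, which is the intuition behind the claimed gain over codebooks built for uniform or isotropic channels.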
0711.4406
Pascal Vontobel
Parastoo Sadeghi, Pascal O. Vontobel, Ramtin Shams
Optimization of Information Rate Upper and Lower Bounds for Channels with Memory
Submitted to IEEE Transactions on Information Theory, November 24, 2007
null
10.1109/TIT.2008.2009581
null
cs.IT math.IT
null
We consider the problem of minimizing upper bounds and maximizing lower bounds on information rates of stationary and ergodic discrete-time channels with memory. The channels we consider can have a finite number of states, such as partial response channels, or they can have an infinite state-space, such as time-varying fading channels. We optimize recently-proposed information rate bounds for such channels, which make use of auxiliary finite-state machine channels (FSMCs). Our main contribution in this paper is to provide iterative expectation-maximization (EM) type algorithms to optimize the parameters of the auxiliary FSMC to tighten these bounds. We provide an explicit, iterative algorithm that improves the upper bound at each iteration. We also provide an effective method for iteratively optimizing the lower bound. To demonstrate the effectiveness of our algorithms, we provide several examples of partial response and fading channels, where the proposed optimization techniques significantly tighten the initial upper and lower bounds. Finally, we compare our results with an improved variation of the \emph{simplex} local optimization algorithm, called \emph{Soblex}. This comparison shows that our proposed algorithms are superior to the Soblex method, both in terms of robustness in finding the tightest bounds and in computational efficiency. Interestingly, from a channel coding/decoding perspective, optimizing the lower bound is related to increasing the achievable mismatched information rate, i.e., the information rate of a communication system where the decoder at the receiver is matched to the auxiliary channel, and not to the original channel.
[ { "created": "Wed, 28 Nov 2007 02:16:22 GMT", "version": "v1" } ]
2016-11-17
[ [ "Sadeghi", "Parastoo", "" ], [ "Vontobel", "Pascal O.", "" ], [ "Shams", "Ramtin", "" ] ]
We consider the problem of minimizing upper bounds and maximizing lower bounds on information rates of stationary and ergodic discrete-time channels with memory. The channels we consider can have a finite number of states, such as partial response channels, or they can have an infinite state-space, such as time-varying fading channels. We optimize recently-proposed information rate bounds for such channels, which make use of auxiliary finite-state machine channels (FSMCs). Our main contribution in this paper is to provide iterative expectation-maximization (EM) type algorithms to optimize the parameters of the auxiliary FSMC to tighten these bounds. We provide an explicit, iterative algorithm that improves the upper bound at each iteration. We also provide an effective method for iteratively optimizing the lower bound. To demonstrate the effectiveness of our algorithms, we provide several examples of partial response and fading channels, where the proposed optimization techniques significantly tighten the initial upper and lower bounds. Finally, we compare our results with an improved variation of the \emph{simplex} local optimization algorithm, called \emph{Soblex}. This comparison shows that our proposed algorithms are superior to the Soblex method, both in terms of robustness in finding the tightest bounds and in computational efficiency. Interestingly, from a channel coding/decoding perspective, optimizing the lower bound is related to increasing the achievable mismatched information rate, i.e., the information rate of a communication system where the decoder at the receiver is matched to the auxiliary channel, and not to the original channel.
2104.13362
Arka Ray
Arka Ray
There is no APTAS for 2-dimensional vector bin packing: Revisited
10 pages; omitted proof can be found in the source; changes: improved presentation
Information Processing Letters 183C (2024) 106430
10.1016/j.ipl.2023.106430
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the Vector Bin Packing and the Vector Bin Covering problems, multidimensional generalizations of the Bin Packing and the Bin Covering problems, respectively. In the Vector Bin Packing, we are given a set of $d$-dimensional vectors from $[0,1]^d$ and the aim is to partition the set into the minimum number of bins such that for each bin $B$, each component of the sum of the vectors in $B$ is at most 1. Woeginger [Woe97] claimed that the problem has no APTAS for dimensions greater than or equal to 2. We note that there was a slight oversight in the original proof. In this work, we give a revised proof using some additional ideas from [BCKS06,CC09]. In fact, we show that it is NP-hard to get an asymptotic approximation ratio better than $\frac{600}{599}$. An instance of Vector Bin Packing is called $\delta$-skewed if every item has at most one dimension greater than $\delta$. As a natural extension of our general $d$-Dimensional Vector Bin Packing result we show that for $\varepsilon\in (0,\frac{1}{2500})$ it is NP-hard to obtain a $(1+\varepsilon)$-approximation for $\delta$-Skewed Vector Bin Packing if $\delta>20\sqrt \varepsilon$. In the Vector Bin Covering problem given a set of $d$-dimensional vectors from $[0,1]^d$, the aim is to obtain a family of disjoint subsets (called bins) with the maximum cardinality such that for each bin $B$, each component of the sum of the vectors in $B$ is at least 1. Using ideas similar to our Vector Bin Packing result, we show that for Vector Bin Covering there is no APTAS for dimensions greater than or equal to 2. In fact, we show that it is NP-hard to get an asymptotic approximation ratio better than $\frac{998}{997}$.
[ { "created": "Tue, 27 Apr 2021 17:43:33 GMT", "version": "v1" }, { "created": "Sat, 8 May 2021 18:35:21 GMT", "version": "v2" }, { "created": "Sat, 16 Oct 2021 13:37:22 GMT", "version": "v3" }, { "created": "Tue, 1 Aug 2023 05:07:08 GMT", "version": "v4" } ]
2023-08-02
[ [ "Ray", "Arka", "" ] ]
We study the Vector Bin Packing and the Vector Bin Covering problems, multidimensional generalizations of the Bin Packing and the Bin Covering problems, respectively. In the Vector Bin Packing, we are given a set of $d$-dimensional vectors from $[0,1]^d$ and the aim is to partition the set into the minimum number of bins such that for each bin $B$, each component of the sum of the vectors in $B$ is at most 1. Woeginger [Woe97] claimed that the problem has no APTAS for dimensions greater than or equal to 2. We note that there was a slight oversight in the original proof. In this work, we give a revised proof using some additional ideas from [BCKS06,CC09]. In fact, we show that it is NP-hard to get an asymptotic approximation ratio better than $\frac{600}{599}$. An instance of Vector Bin Packing is called $\delta$-skewed if every item has at most one dimension greater than $\delta$. As a natural extension of our general $d$-Dimensional Vector Bin Packing result we show that for $\varepsilon\in (0,\frac{1}{2500})$ it is NP-hard to obtain a $(1+\varepsilon)$-approximation for $\delta$-Skewed Vector Bin Packing if $\delta>20\sqrt \varepsilon$. In the Vector Bin Covering problem given a set of $d$-dimensional vectors from $[0,1]^d$, the aim is to obtain a family of disjoint subsets (called bins) with the maximum cardinality such that for each bin $B$, each component of the sum of the vectors in $B$ is at least 1. Using ideas similar to our Vector Bin Packing result, we show that for Vector Bin Covering there is no APTAS for dimensions greater than or equal to 2. In fact, we show that it is NP-hard to get an asymptotic approximation ratio better than $\frac{998}{997}$.
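For context on the problem definition only (not the paper's hardness reduction), a minimal first-fit heuristic for d-dimensional Vector Bin Packing can be sketched as follows; the instance at the bottom is invented.

```python
def fits(load, item, eps=1e-9):
    """True if adding item keeps every coordinate of the bin's load <= 1."""
    return all(l + x <= 1.0 + eps for l, x in zip(load, item))

def first_fit(items):
    """First-fit heuristic for d-dimensional Vector Bin Packing: put each
    vector into the first open bin where every coordinate still fits."""
    bins = []  # each bin is the coordinate-wise sum of the items placed in it
    for item in items:
        for load in bins:
            if fits(load, item):
                for i, x in enumerate(item):
                    load[i] += x
                break
        else:  # no open bin fits: open a new one
            bins.append(list(item))
    return bins

# Invented 2-dimensional instance; every coordinate lies in [0, 1].
items = [(0.6, 0.2), (0.5, 0.5), (0.4, 0.7), (0.3, 0.3), (0.2, 0.1)]
bins = first_fit(items)  # packs the five vectors into two bins
```

The feasibility constraint checked in `fits` is exactly the per-coordinate capacity condition from the abstract; the hardness results concern how close to optimal any polynomial-time algorithm can get, not this particular heuristic.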
1209.1076
Konstantinos Tsianos
Konstantinos I. Tsianos and Sean Lawlor and Michael G. Rabbat
Communication/Computation Tradeoffs in Consensus-Based Distributed Optimization
10 Pages, 3 Figures, Appearing at NIPS 2012
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the scalability of consensus-based distributed optimization algorithms by considering two questions: How many processors should we use for a given problem, and how often should they communicate when communication is not free? Central to our analysis is a problem-specific value $r$ which quantifies the communication/computation tradeoff. We show that organizing the communication among nodes as a $k$-regular expander graph (Reingold, Vadhan, and Wigderson, 2002) yields speedups, while when all pairs of nodes communicate (as in a complete graph), there is an optimal number of processors that depends on $r$. Surprisingly, a speedup can be obtained, in terms of the time to reach a fixed level of accuracy, by communicating less and less frequently as the computation progresses. Experiments on a real cluster solving metric learning and non-smooth convex minimization tasks demonstrate strong agreement between theory and practice.
[ { "created": "Wed, 5 Sep 2012 18:45:21 GMT", "version": "v1" } ]
2012-09-06
[ [ "Tsianos", "Konstantinos I.", "" ], [ "Lawlor", "Sean", "" ], [ "Rabbat", "Michael G.", "" ] ]
We study the scalability of consensus-based distributed optimization algorithms by considering two questions: How many processors should we use for a given problem, and how often should they communicate when communication is not free? Central to our analysis is a problem-specific value $r$ which quantifies the communication/computation tradeoff. We show that organizing the communication among nodes as a $k$-regular expander graph (Reingold, Vadhan, and Wigderson, 2002) yields speedups, while when all pairs of nodes communicate (as in a complete graph), there is an optimal number of processors that depends on $r$. Surprisingly, a speedup can be obtained, in terms of the time to reach a fixed level of accuracy, by communicating less and less frequently as the computation progresses. Experiments on a real cluster solving metric learning and non-smooth convex minimization tasks demonstrate strong agreement between theory and practice.
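The consensus component of such algorithms can be illustrated with a minimal gossip-averaging sketch; the paper's analysis uses k-regular expander graphs, but a ring keeps the example compact, and all values here are invented.

```python
def gossip_round(values, neighbors):
    """One communication round: each node replaces its value with the
    uniform average of itself and its neighbors (a symmetric, doubly
    stochastic mixing step on this graph)."""
    return [
        (values[i] + sum(values[j] for j in neighbors[i]))
        / (1 + len(neighbors[i]))
        for i in range(len(values))
    ]

# A ring of 8 nodes; node i starts with local value i.
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
x = [float(i) for i in range(n)]
avg = sum(x) / n          # doubly stochastic mixing preserves the average

spread0 = max(x) - min(x)
for _ in range(50):       # 50 communication rounds
    x = gossip_round(x, ring)
spread = max(x) - min(x)  # disagreement shrinks geometrically
```

Each round costs one exchange per edge, so graphs with better expansion reach a given disagreement level in fewer rounds, which is where the communication/computation tradeoff in the abstract comes from.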
2004.00603
Andrea Celli
Andrea Celli, Alberto Marchesi, Gabriele Farina, Nicola Gatti
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
null
null
null
null
cs.GT cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in $n$-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.
[ { "created": "Wed, 1 Apr 2020 17:39:00 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2020 08:54:26 GMT", "version": "v2" }, { "created": "Thu, 9 Apr 2020 16:00:40 GMT", "version": "v3" }, { "created": "Sat, 20 Jun 2020 09:32:36 GMT", "version": "v4" }, { "created": "Fri, 2 Sep 2022 16:09:00 GMT", "version": "v5" } ]
2022-09-05
[ [ "Celli", "Andrea", "" ], [ "Marchesi", "Alberto", "" ], [ "Farina", "Gabriele", "" ], [ "Gatti", "Nicola", "" ] ]
The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in $n$-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.
1909.10801
Michael Poli
Michael Poli, Jinkyoo Park, Ilija Ilievski
WATTNet: Learning to Trade FX via Hierarchical Spatio-Temporal Representation of Highly Multivariate Time Series
Submitted to the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 20)
null
null
null
cs.LG q-fin.ST stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finance is a particularly challenging application area for deep learning models due to a low signal-to-noise ratio, non-stationarity, and partial observability. Non-deliverable forwards (NDFs), a type of derivatives contract used in foreign exchange (FX) trading, present additional difficulty in the form of the long-term planning required for an effective selection of the start and end date of the contract. In this work, we focus on tackling the problem of NDF tenor selection by leveraging high-dimensional sequential data consisting of spot rates, technical indicators and expert tenor patterns. To this end, we construct a dataset from the Depository Trust & Clearing Corporation (DTCC) NDF data that includes a comprehensive list of NDF volumes and daily spot rates for 64 FX pairs. We introduce WaveATTentionNet (WATTNet), a novel temporal convolution (TCN) model for spatio-temporal modeling of highly multivariate time series, and validate it across NDF markets with varying degrees of dissimilarity between the training and test periods in terms of volatility and general market regimes. The proposed method achieves a significant positive return on investment (ROI) in all NDF markets under analysis, outperforming recurrent and classical baselines by a wide margin. Finally, we propose two orthogonal interpretability approaches to verify noise stability and detect the driving factors of the learned tenor selection strategy.
[ { "created": "Tue, 24 Sep 2019 10:42:23 GMT", "version": "v1" } ]
2019-09-25
[ [ "Poli", "Michael", "" ], [ "Park", "Jinkyoo", "" ], [ "Ilievski", "Ilija", "" ] ]
Finance is a particularly challenging application area for deep learning models due to a low signal-to-noise ratio, non-stationarity, and partial observability. Non-deliverable forwards (NDFs), a type of derivatives contract used in foreign exchange (FX) trading, present additional difficulty in the form of the long-term planning required for an effective selection of the start and end date of the contract. In this work, we focus on tackling the problem of NDF tenor selection by leveraging high-dimensional sequential data consisting of spot rates, technical indicators and expert tenor patterns. To this end, we construct a dataset from the Depository Trust & Clearing Corporation (DTCC) NDF data that includes a comprehensive list of NDF volumes and daily spot rates for 64 FX pairs. We introduce WaveATTentionNet (WATTNet), a novel temporal convolution (TCN) model for spatio-temporal modeling of highly multivariate time series, and validate it across NDF markets with varying degrees of dissimilarity between the training and test periods in terms of volatility and general market regimes. The proposed method achieves a significant positive return on investment (ROI) in all NDF markets under analysis, outperforming recurrent and classical baselines by a wide margin. Finally, we propose two orthogonal interpretability approaches to verify noise stability and detect the driving factors of the learned tenor selection strategy.
2210.04563
Qingyi Si
Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang and Jie Zhou
Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning
Findings of EMNLP-2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models for Visual Question Answering (VQA) often rely on spurious correlations, i.e., language priors, that appear in the biased samples of the training set, which make them brittle against out-of-distribution (OOD) test data. Recent methods have achieved promising progress in overcoming this problem by reducing the impact of biased samples on model training. However, these models reveal a trade-off: the improvements on OOD data severely sacrifice performance on the in-distribution (ID) data (which is dominated by the biased samples). Therefore, we propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples. Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples, and explore several strategies to use the constructed positive samples for training. Instead of undermining the importance of biased samples in model training, our approach precisely exploits the biased samples for unbiased information that contributes to reasoning. The proposed method is compatible with various VQA backbones. We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
[ { "created": "Mon, 10 Oct 2022 11:05:21 GMT", "version": "v1" } ]
2022-10-11
[ [ "Si", "Qingyi", "" ], [ "Liu", "Yuanxin", "" ], [ "Meng", "Fandong", "" ], [ "Lin", "Zheng", "" ], [ "Fu", "Peng", "" ], [ "Cao", "Yanan", "" ], [ "Wang", "Weiping", "" ], [ "Zhou", "Jie", "" ] ]
Models for Visual Question Answering (VQA) often rely on spurious correlations, i.e., language priors, that appear in the biased samples of the training set, which make them brittle against out-of-distribution (OOD) test data. Recent methods have achieved promising progress in overcoming this problem by reducing the impact of biased samples on model training. However, these models reveal a trade-off: the improvements on OOD data severely sacrifice performance on the in-distribution (ID) data (which is dominated by the biased samples). Therefore, we propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples. Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples, and explore several strategies to use the constructed positive samples for training. Instead of undermining the importance of biased samples in model training, our approach precisely exploits the biased samples for unbiased information that contributes to reasoning. The proposed method is compatible with various VQA backbones. We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
2306.04620
Julien Roy
Julien Roy, Pierre-Luc Bacon, Christopher Pal and Emmanuel Bengio
Goal-conditioned GFlowNets for Controllable Multi-Objective Molecular Design
14 pages
null
null
null
cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, in-silico molecular design has received much attention from the machine learning community. When designing a new compound for pharmaceutical applications, there are usually multiple properties of such molecules that need to be optimised: binding energy to the target, synthesizability, toxicity, EC50, and so on. While previous approaches have employed a scalarization scheme to turn the multi-objective problem into a preference-conditioned single objective, it has been established that this kind of reduction may produce solutions that tend to slide towards the extreme points of the objective space when presented with a problem that exhibits a concave Pareto front. In this work we experiment with an alternative formulation of goal-conditioned molecular generation to obtain a more controllable conditional model that can uniformly explore solutions along the entire Pareto front.
[ { "created": "Wed, 7 Jun 2023 17:48:29 GMT", "version": "v1" }, { "created": "Thu, 29 Jun 2023 21:25:27 GMT", "version": "v2" } ]
2023-07-03
[ [ "Roy", "Julien", "" ], [ "Bacon", "Pierre-Luc", "" ], [ "Pal", "Christopher", "" ], [ "Bengio", "Emmanuel", "" ] ]
In recent years, in-silico molecular design has received much attention from the machine learning community. When designing a new compound for pharmaceutical applications, there are usually multiple properties of such molecules that need to be optimised: binding energy to the target, synthesizability, toxicity, EC50, and so on. While previous approaches have employed a scalarization scheme to turn the multi-objective problem into a preference-conditioned single objective, it has been established that this kind of reduction may produce solutions that tend to slide towards the extreme points of the objective space when presented with a problem that exhibits a concave Pareto front. In this work we experiment with an alternative formulation of goal-conditioned molecular generation to obtain a more controllable conditional model that can uniformly explore solutions along the entire Pareto front.
1001.2298
Gokul Sridharan Mr.
Gokul Sridharan and Teng Joon Lim
Turbo Receiver Design for Phase Noise Mitigation in OFDM Systems
17 pages; 1 figure. Shorter version of this paper was submitted to ISIT 2010
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the issue of phase noise in OFDM systems. Phase noise (PHN) is a transceiver impairment resulting from the non-idealities of the local oscillator. We present a case for designing a turbo receiver for systems corrupted by phase noise by taking a closer look at the effects of the common phase error (CPE). Using an approximate probabilistic framework called variational inference (VI), we develop a soft-in soft-out (SISO) algorithm that generates posterior bit-level soft estimates while taking into account the effect of phase noise. The algorithm also provides an estimate of the phase noise sequence. Using this SISO algorithm, a turbo receiver is designed by passing soft information between the SISO detector and an outer forward error correcting (FEC) decoder that uses a soft decoding algorithm. It is shown that the turbo receiver achieves close to optimal performance.
[ { "created": "Wed, 13 Jan 2010 20:44:38 GMT", "version": "v1" } ]
2010-01-14
[ [ "Sridharan", "Gokul", "" ], [ "Lim", "Teng Joon", "" ] ]
This paper addresses the issue of phase noise in OFDM systems. Phase noise (PHN) is a transceiver impairment resulting from the non-idealities of the local oscillator. We present a case for designing a turbo receiver for systems corrupted by phase noise by taking a closer look at the effects of the common phase error (CPE). Using an approximate probabilistic framework called variational inference (VI), we develop a soft-in soft-out (SISO) algorithm that generates posterior bit-level soft estimates while taking into account the effect of phase noise. The algorithm also provides an estimate of the phase noise sequence. Using this SISO algorithm, a turbo receiver is designed by passing soft information between the SISO detector and an outer forward error correcting (FEC) decoder that uses a soft decoding algorithm. It is shown that the turbo receiver achieves close to optimal performance.
2105.06501
Thiago B. Burghi
Thiago B. Burghi, Juliano G. Iossaqui, Juan F. Camino
Kinematic control design for wheeled mobile robots with longitudinal and lateral slip
null
International Journal of Adaptive Control and Signal Processing, 2024; 1-27
10.1002/acs.3747
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motion control of wheeled mobile robots at high speeds under adverse ground conditions is a difficult task, since the robots' wheels may be subject to different kinds of slip. This work introduces an adaptive kinematic controller that is capable of solving the trajectory tracking problem of a nonholonomic mobile robot under longitudinal and lateral slip. While the controller can effectively compensate for the longitudinal slip, the lateral slip is a more involved problem to deal with, since nonholonomic robots cannot directly produce movement in the lateral direction. To show that the proposed controller is still able to make the mobile robot follow a reference trajectory under lateral and longitudinal time-varying slip, the solutions of the robot's position and orientation error dynamics are shown to be uniformly ultimately bounded. Numerical simulations are presented to illustrate the robot's performance using the proposed adaptive control law.
[ { "created": "Thu, 13 May 2021 18:25:48 GMT", "version": "v1" } ]
2024-02-19
[ [ "Burghi", "Thiago B.", "" ], [ "Iossaqui", "Juliano G.", "" ], [ "Camino", "Juan F.", "" ] ]
The motion control of wheeled mobile robots at high speeds under adverse ground conditions is a difficult task, since the robots' wheels may be subject to different kinds of slip. This work introduces an adaptive kinematic controller that is capable of solving the trajectory tracking problem of a nonholonomic mobile robot under longitudinal and lateral slip. While the controller can effectively compensate for the longitudinal slip, the lateral slip is a more involved problem to deal with, since nonholonomic robots cannot directly produce movement in the lateral direction. To show that the proposed controller is still able to make the mobile robot follow a reference trajectory under lateral and longitudinal time-varying slip, the solutions of the robot's position and orientation error dynamics are shown to be uniformly ultimately bounded. Numerical simulations are presented to illustrate the robot's performance using the proposed adaptive control law.
1503.08642
Sina Sanjari
S. Sanjari, S. Ozgoli
Generalized Integral Sliding Mode Manifold Design: A Sum of Squares Approach
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a general form of integral sliding mode manifold and proposes an algorithmic approach based on Sum of Squares (SOS) programming to design a generalized integral sliding mode manifold and controller for nonlinear systems with both matched and unmatched uncertainties. The approach also gives a sufficient condition for the successful design of the controller and manifold parameters. The results of the paper are then verified by several simulation examples and two practical applications, namely the glucose-insulin regulation problem and the unicycle dynamics steering problem.
[ { "created": "Mon, 30 Mar 2015 11:25:57 GMT", "version": "v1" }, { "created": "Thu, 21 May 2015 08:19:27 GMT", "version": "v2" } ]
2015-05-22
[ [ "Sanjari", "S.", "" ], [ "Ozgoli", "S.", "" ] ]
This paper presents a general form of integral sliding mode manifold and proposes an algorithmic approach based on Sum of Squares (SOS) programming to design a generalized integral sliding mode manifold and controller for nonlinear systems with both matched and unmatched uncertainties. The approach also gives a sufficient condition for the successful design of the controller and manifold parameters. The results of the paper are then verified by several simulation examples and two practical applications, namely the glucose-insulin regulation problem and the unicycle dynamics steering problem.
2402.00455
Lingsheng Meng
Lingsheng Meng, Yong Liang Guan, Yao Ge, Zilong Liu, Pingzhi Fan
Tighter Lower Bounds on Aperiodic Ambiguity Function and Their Asymptotic Achievability
25 pages, 2 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents tighter lower bounds on the maximum aperiodic ambiguity function (AF) magnitude of unimodular sequences under certain delay-Doppler low ambiguity zones (LAZ). These bounds are derived by exploiting the upper and lower bounds on the Frobenius norm of the weighted auto- and cross-AF matrices, with the introduction of two weight vectors associated with the delay and Doppler shifts, respectively. As a second major contribution, we demonstrate that our derived lower bounds are asymptotically achievable with selected Chu sequence sets by analyzing their maximum auto- and cross-AF magnitudes within certain LAZ.
[ { "created": "Thu, 1 Feb 2024 09:45:11 GMT", "version": "v1" }, { "created": "Thu, 18 Jul 2024 15:25:17 GMT", "version": "v2" } ]
2024-07-19
[ [ "Meng", "Lingsheng", "" ], [ "Guan", "Yong Liang", "" ], [ "Ge", "Yao", "" ], [ "Liu", "Zilong", "" ], [ "Fan", "Pingzhi", "" ] ]
This paper presents tighter lower bounds on the maximum aperiodic ambiguity function (AF) magnitude of unimodular sequences under certain delay-Doppler low ambiguity zones (LAZ). These bounds are derived by exploiting the upper and lower bounds on the Frobenius norm of the weighted auto- and cross-AF matrices, with the introduction of two weight vectors associated with the delay and Doppler shifts, respectively. As a second major contribution, we demonstrate that our derived lower bounds are asymptotically achievable with selected Chu sequence sets by analyzing their maximum auto- and cross-AF magnitudes within certain LAZ.
0904.2441
Karsten Fyhn Nielsen
Rasmus Jacobsen, Karsten Fyhn Nielsen, Petar Popovski, Torben Larsen
Reliable Identification of RFID Tags Using Multiple Independent Reader Sessions
Presented at IEEE RFID 2009 Conference
null
10.1109/RFID.2009.4911187
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radio Frequency Identification (RFID) systems are gaining momentum in various applications of logistics, inventory, etc. A generic problem in such systems is to ensure that the RFID readers can reliably read a set of RFID tags, such that the probability of missing tags stays below an acceptable value. A tag may be missing (left unread) due to errors in the communication link towards the reader, e.g. due to obstacles in the radio path. The present paper proposes techniques that use multiple reader sessions, during which the system of readers obtains a running estimate of the probability to have at least one tag missing. Based on such an estimate, it is decided whether an additional reader session is required. Two methods are proposed; they rely on the statistical independence of the tag reading errors across different reader sessions, which is a plausible assumption when e.g. each reader session is executed on different readers. The first method uses statistical relationships that are valid when the reader sessions are independent. The second method is obtained by modifying an existing capture-recapture estimator. The results show that, when the reader sessions are independent, the proposed mechanisms provide a good approximation to the probability of missing tags, such that the number of reader sessions made meets the target specification. If the assumption of independence is violated, the estimators are still useful, but they should be corrected by a margin of additional reader sessions to ensure that the target probability of missing tags is met.
[ { "created": "Thu, 16 Apr 2009 07:33:40 GMT", "version": "v1" } ]
2016-11-18
[ [ "Jacobsen", "Rasmus", "" ], [ "Nielsen", "Karsten Fyhn", "" ], [ "Popovski", "Petar", "" ], [ "Larsen", "Torben", "" ] ]
Radio Frequency Identification (RFID) systems are gaining momentum in various applications of logistics, inventory, etc. A generic problem in such systems is to ensure that the RFID readers can reliably read a set of RFID tags, such that the probability of missing tags stays below an acceptable value. A tag may be missing (left unread) due to errors in the communication link towards the reader, e.g. due to obstacles in the radio path. The present paper proposes techniques that use multiple reader sessions, during which the system of readers obtains a running estimate of the probability to have at least one tag missing. Based on such an estimate, it is decided whether an additional reader session is required. Two methods are proposed; they rely on the statistical independence of the tag reading errors across different reader sessions, which is a plausible assumption when e.g. each reader session is executed on different readers. The first method uses statistical relationships that are valid when the reader sessions are independent. The second method is obtained by modifying an existing capture-recapture estimator. The results show that, when the reader sessions are independent, the proposed mechanisms provide a good approximation to the probability of missing tags, such that the number of reader sessions made meets the target specification. If the assumption of independence is violated, the estimators are still useful, but they should be corrected by a margin of additional reader sessions to ensure that the target probability of missing tags is met.
1510.01553
Dan Xu
Dan Xu, Elisa Ricci, Yan Yan, Jingkuan Song, Nicu Sebe
Learning Deep Representations of Appearance and Motion for Anomalous Event Detection
Oral paper in BMVC 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state-of-the-art approaches.
[ { "created": "Tue, 6 Oct 2015 12:42:55 GMT", "version": "v1" } ]
2015-10-07
[ [ "Xu", "Dan", "" ], [ "Ricci", "Elisa", "" ], [ "Yan", "Yan", "" ], [ "Song", "Jingkuan", "" ], [ "Sebe", "Nicu", "" ] ]
We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state-of-the-art approaches.
1507.02199
Guy Grebla
Guy Grebla, Berk Birand, Peter van de Ven, Gil Zussman
Joint Transmission in Cellular Networks with CoMP - Stability and Scheduling Algorithms
31 pages, 11 figures, and 2 appendixes
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the current trend towards smaller cells, an increasing number of users of cellular networks reside at the edge between two cells; these users typically receive poor service as a result of the relatively weak signal and strong interference. Coordinated Multi-Point (CoMP) with Joint Transmission (JT) is a cellular networking technique allowing multiple Base Stations (BSs) to jointly transmit to a single user. This improves the users' reception quality and facilitates better service to cell-edge users. We consider a CoMP-enabled network, comprised of multiple BSs interconnected via a backhaul network. We formulate the OFDMA Joint Scheduling (OJS) problem of determining a subframe schedule and deciding if and how to use JT in order to maximize some utility function. We show that the OJS problem is NP-hard. We develop optimal and approximation algorithms for specific and general topologies, respectively. We consider a time dimension and study a queueing model with packet arrivals in which the service rates for each subframe are obtained by solving the OJS problem. We prove that when the problem is formulated with a specific utility function and solved optimally in each subframe, the resulting scheduling policy is throughput-optimal. Via extensive simulations we show that the bulk of the gains from CoMP with JT can be achieved with low capacity backhaul. Moreover, our algorithms distribute the network resources evenly, increasing the inter-cell users' throughput at only a slight cost to the intra-cell users. This is the first step towards a rigorous, network-level understanding of the impact of cross-layer scheduling algorithms on CoMP networks.
[ { "created": "Wed, 8 Jul 2015 15:42:37 GMT", "version": "v1" }, { "created": "Sun, 26 Jul 2015 22:30:46 GMT", "version": "v2" } ]
2015-07-28
[ [ "Grebla", "Guy", "" ], [ "Birand", "Berk", "" ], [ "van de Ven", "Peter", "" ], [ "Zussman", "Gil", "" ] ]
Due to the current trend towards smaller cells, an increasing number of users of cellular networks reside at the edge between two cells; these users typically receive poor service as a result of the relatively weak signal and strong interference. Coordinated Multi-Point (CoMP) with Joint Transmission (JT) is a cellular networking technique allowing multiple Base Stations (BSs) to jointly transmit to a single user. This improves the users' reception quality and facilitates better service to cell-edge users. We consider a CoMP-enabled network, comprised of multiple BSs interconnected via a backhaul network. We formulate the OFDMA Joint Scheduling (OJS) problem of determining a subframe schedule and deciding if and how to use JT in order to maximize some utility function. We show that the OJS problem is NP-hard. We develop optimal and approximation algorithms for specific and general topologies, respectively. We consider a time dimension and study a queueing model with packet arrivals in which the service rates for each subframe are obtained by solving the OJS problem. We prove that when the problem is formulated with a specific utility function and solved optimally in each subframe, the resulting scheduling policy is throughput-optimal. Via extensive simulations we show that the bulk of the gains from CoMP with JT can be achieved with low capacity backhaul. Moreover, our algorithms distribute the network resources evenly, increasing the inter-cell users' throughput at only a slight cost to the intra-cell users. This is the first step towards a rigorous, network-level understanding of the impact of cross-layer scheduling algorithms on CoMP networks.
2204.00097
Sijie Zhu
Sijie Zhu, Mubarak Shah, Chen Chen
TransGeo: Transformer Is All You Need for Cross-view Image Geo-localization
CVPR
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dominant CNN-based methods for cross-view image geo-localization rely on polar transform and fail to model global correlation. We propose a pure transformer-based approach (TransGeo) to address these limitations from a different perspective. TransGeo takes full advantage of the strengths of transformer related to global information modeling and explicit position information encoding. We further leverage the flexibility of transformer input and propose an attention-guided non-uniform cropping method, so that uninformative image patches are removed with negligible drop on performance to reduce computation cost. The saved computation can be reallocated to increase resolution only for informative patches, resulting in performance improvement with no additional computation cost. This "attend and zoom-in" strategy is highly similar to human behavior when observing images. Remarkably, TransGeo achieves state-of-the-art results on both urban and rural datasets, with significantly less computation cost than CNN-based methods. It does not rely on polar transform and infers faster than CNN-based methods. Code is available at https://github.com/Jeff-Zilence/TransGeo2022.
[ { "created": "Thu, 31 Mar 2022 21:19:41 GMT", "version": "v1" } ]
2022-04-04
[ [ "Zhu", "Sijie", "" ], [ "Shah", "Mubarak", "" ], [ "Chen", "Chen", "" ] ]
The dominant CNN-based methods for cross-view image geo-localization rely on polar transform and fail to model global correlation. We propose a pure transformer-based approach (TransGeo) to address these limitations from a different perspective. TransGeo takes full advantage of the strengths of transformer related to global information modeling and explicit position information encoding. We further leverage the flexibility of transformer input and propose an attention-guided non-uniform cropping method, so that uninformative image patches are removed with negligible drop on performance to reduce computation cost. The saved computation can be reallocated to increase resolution only for informative patches, resulting in performance improvement with no additional computation cost. This "attend and zoom-in" strategy is highly similar to human behavior when observing images. Remarkably, TransGeo achieves state-of-the-art results on both urban and rural datasets, with significantly less computation cost than CNN-based methods. It does not rely on polar transform and infers faster than CNN-based methods. Code is available at https://github.com/Jeff-Zilence/TransGeo2022.
1501.03064
David Castells-Rufas
Francisco Corbera, Andr\'es Rodr\'iguez, Rafael Asenjo, Angeles Navarro, Antonio Vilches, Maria Garzaran, Ismat Chaib Draa, Jamel Tayeb, Smail Niar, Mikael Desertot, Daniel Gregorek, Robert Schmidt, Alberto Garcia-Ortiz, Pedro Lopez-Garcia, R\'emy Haemmerl\'e, Maximiliano Klemen, Umer Liqat, Manuel V. Hermenegildo, Radim Vav\v{r}\'ik, Albert Sa\`a-Garriga, David Castells-Rufas, Jordi Carrabina
Proceedings of the Workshop on High Performance Energy Efficient Embedded Systems (HIP3ES) 2015
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/3.0/
Proceedings of the Workshop on High Performance Energy Efficient Embedded Systems (HIP3ES) 2015. Amsterdam, January 21st. Collocated with HIPEAC 2015 Conference.
[ { "created": "Tue, 13 Jan 2015 16:29:18 GMT", "version": "v1" } ]
2015-01-14
[ [ "Corbera", "Francisco", "" ], [ "Rodríguez", "Andrés", "" ], [ "Asenjo", "Rafael", "" ], [ "Navarro", "Angeles", "" ], [ "Vilches", "Antonio", "" ], [ "Garzaran", "Maria", "" ], [ "Draa", "Ismat Chaib", "" ], [ "Tayeb", "Jamel", "" ], [ "Niar", "Smail", "" ], [ "Desertot", "Mikael", "" ], [ "Gregorek", "Daniel", "" ], [ "Schmidt", "Robert", "" ], [ "Garcia-Ortiz", "Alberto", "" ], [ "Lopez-Garcia", "Pedro", "" ], [ "Haemmerlé", "Rémy", "" ], [ "Klemen", "Maximiliano", "" ], [ "Liqat", "Umer", "" ], [ "Hermenegildo", "Manuel V.", "" ], [ "Vavřík", "Radim", "" ], [ "Saà-Garriga", "Albert", "" ], [ "Castells-Rufas", "David", "" ], [ "Carrabina", "Jordi", "" ] ]
Proceedings of the Workshop on High Performance Energy Efficient Embedded Systems (HIP3ES) 2015. Amsterdam, January 21st. Collocated with HIPEAC 2015 Conference.
2005.07532
Ripon Patgiri
Sabuzima Nayak and Ripon Patgiri
6G Communication Technology: A Vision on Intelligent Healthcare
This manuscript is submitted to IEEE for possible publication
null
10.1007/978-981-15-9735-0_1
null
cs.NI cs.CY
http://creativecommons.org/licenses/by/4.0/
6G is a promising communication technology that will dominate the entire health market from 2030 onward. It will dominate not only the health sector but also diverse other sectors. It is expected that 6G will revolutionize many sectors, including healthcare. Healthcare will be fully AI-driven and dependent on 6G communication technology, which will change our perception of lifestyle. Currently, time and space are the key barriers to healthcare, and 6G will be able to overcome these barriers. Also, 6G will prove to be a game-changing technology for healthcare. Therefore, in this perspective, we envision a healthcare system for the era of 6G communication technology. Also, various new methodologies have to be introduced to enhance our lifestyle, which are addressed in this perspective, including Quality of Life (QoL), Intelligent Wearable Devices (IWD), Intelligent Internet of Medical Things (IIoMT), Hospital-to-Home (H2H) services, and new business models. In addition, we discuss the role of 6G communication technology in telesurgery and in epidemics and pandemics.
[ { "created": "Thu, 16 Apr 2020 06:53:05 GMT", "version": "v1" } ]
2021-05-07
[ [ "Nayak", "Sabuzima", "" ], [ "Patgiri", "Ripon", "" ] ]
6G is a promising communication technology that will dominate the entire health market from 2030 onward. It will dominate not only the health sector but also diverse other sectors. It is expected that 6G will revolutionize many sectors, including healthcare. Healthcare will be fully AI-driven and dependent on 6G communication technology, which will change our perception of lifestyle. Currently, time and space are the key barriers to healthcare, and 6G will be able to overcome these barriers. Also, 6G will prove to be a game-changing technology for healthcare. Therefore, in this perspective, we envision a healthcare system for the era of 6G communication technology. Also, various new methodologies have to be introduced to enhance our lifestyle, which are addressed in this perspective, including Quality of Life (QoL), Intelligent Wearable Devices (IWD), Intelligent Internet of Medical Things (IIoMT), Hospital-to-Home (H2H) services, and new business models. In addition, we discuss the role of 6G communication technology in telesurgery and in epidemics and pandemics.
2006.15212
Prateek Gupta
Prateek Gupta, Maxime Gasse, Elias B. Khalil, M. Pawan Kumar, Andrea Lodi, Yoshua Bengio
Hybrid Models for Learning to Branch
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent Graph Neural Network (GNN) approach for learning to branch has been shown to successfully reduce the running time of branch-and-bound algorithms for Mixed Integer Linear Programming (MILP). While the GNN relies on a GPU for inference, MILP solvers are purely CPU-based. This severely limits its application as many practitioners may not have access to high-end GPUs. In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternate computationally inexpensive model that retains the predictive power of the GNN architecture? We answer the first question in the negative, and address the second question by proposing a new hybrid architecture for efficient branching on CPU machines. The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLP) for branching. We evaluate our methods on four classes of MILP problems, and show that they lead to up to 26% reduction in solver running time compared to state-of-the-art methods without a GPU, while extrapolating to harder problems than it was trained on. The code for this project is publicly available at https://github.com/pg2455/Hybrid-learn2branch.
[ { "created": "Fri, 26 Jun 2020 21:03:45 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2020 16:28:54 GMT", "version": "v2" }, { "created": "Fri, 23 Oct 2020 14:01:14 GMT", "version": "v3" } ]
2020-10-26
[ [ "Gupta", "Prateek", "" ], [ "Gasse", "Maxime", "" ], [ "Khalil", "Elias B.", "" ], [ "Kumar", "M. Pawan", "" ], [ "Lodi", "Andrea", "" ], [ "Bengio", "Yoshua", "" ] ]
A recent Graph Neural Network (GNN) approach for learning to branch has been shown to successfully reduce the running time of branch-and-bound algorithms for Mixed Integer Linear Programming (MILP). While the GNN relies on a GPU for inference, MILP solvers are purely CPU-based. This severely limits its application as many practitioners may not have access to high-end GPUs. In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternate computationally inexpensive model that retains the predictive power of the GNN architecture? We answer the first question in the negative, and address the second question by proposing a new hybrid architecture for efficient branching on CPU machines. The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLP) for branching. We evaluate our methods on four classes of MILP problems, and show that they lead to up to 26% reduction in solver running time compared to state-of-the-art methods without a GPU, while extrapolating to harder problems than it was trained on. The code for this project is publicly available at https://github.com/pg2455/Hybrid-learn2branch.
0805.0850
Michael Hilker
Michael Hilker and Christoph Schommer
Service Oriented Architecture in Network Security - a novel Organisation in Security Systems
4 pages
Proceedings of the 3rd International Workshop on Theory of Computer Viruses (TCV 2008), May 2008, Nancy, France
null
null
cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current network security systems are a collection of various security components that are directly installed in the operating system. These check the whole node for suspicious behaviour. Armouring intrusions, e.g., have the ability to hide themselves from being checked. In this paper we present an alternative organisation of security systems. The node is completely virtualized with current virtualization systems so that the operating system with its applications and the security system are separated. The security system then checks the node from outside, and the appropriate security components are provided through a service-oriented architecture. Because it runs in a virtual machine, an infected node can be halted, duplicated, and moved to other nodes for further analysis and legal purposes. This organisation is analysed in this article, and a preliminary implementation showing promising results is discussed.
[ { "created": "Wed, 7 May 2008 14:13:32 GMT", "version": "v1" } ]
2008-05-08
[ [ "Hilker", "Michael", "" ], [ "Schommer", "Christoph", "" ] ]
Current network security systems are a collection of various security components that are directly installed in the operating system. These check the whole node for suspicious behaviour. Armouring intrusions, e.g., have the ability to hide themselves from being checked. In this paper we present an alternative organisation of security systems. The node is completely virtualized with current virtualization systems so that the operating system with its applications and the security system are separated. The security system then checks the node from outside, and the appropriate security components are provided through a service-oriented architecture. Because it runs in a virtual machine, an infected node can be halted, duplicated, and moved to other nodes for further analysis and legal purposes. This organisation is analysed in this article, and a preliminary implementation showing promising results is discussed.
1908.06874
Michael Rapp
Yannik Klein, Michael Rapp and Eneldo Loza Menc\'ia
Efficient Discovery of Expressive Multi-label Rules using Relaxed Pruning
Preprint version. To appear in Proceedings of the 22nd International Conference on Discovery Science, 2019
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Being able to model correlations between labels is considered crucial in multi-label classification. Rule-based models make it possible to expose such dependencies, e.g., implications, subsumptions, or exclusions, in an interpretable and human-comprehensible manner. Although the number of possible label combinations increases exponentially with the number of available labels, it has been shown that rules with multiple labels in their heads, which are a natural form to model local label dependencies, can be induced efficiently by exploiting certain properties of rule evaluation measures and pruning the label search space accordingly. However, experiments have revealed that multi-label heads are unlikely to be learned by existing methods due to their restrictiveness. To overcome this limitation, we propose a plug-in approach that relaxes the search space pruning used by existing methods in order to introduce a bias towards larger multi-label heads, resulting in more expressive rules. We further demonstrate the effectiveness of our approach empirically and show that it does not come with drawbacks in terms of training time or predictive performance.
[ { "created": "Mon, 19 Aug 2019 15:22:23 GMT", "version": "v1" } ]
2019-08-20
[ [ "Klein", "Yannik", "" ], [ "Rapp", "Michael", "" ], [ "Mencía", "Eneldo Loza", "" ] ]
Being able to model correlations between labels is considered crucial in multi-label classification. Rule-based models make it possible to expose such dependencies, e.g., implications, subsumptions, or exclusions, in an interpretable and human-comprehensible manner. Although the number of possible label combinations increases exponentially with the number of available labels, it has been shown that rules with multiple labels in their heads, which are a natural form to model local label dependencies, can be induced efficiently by exploiting certain properties of rule evaluation measures and pruning the label search space accordingly. However, experiments have revealed that multi-label heads are unlikely to be learned by existing methods due to their restrictiveness. To overcome this limitation, we propose a plug-in approach that relaxes the search space pruning used by existing methods in order to introduce a bias towards larger multi-label heads, resulting in more expressive rules. We further demonstrate the effectiveness of our approach empirically and show that it does not come with drawbacks in terms of training time or predictive performance.
2312.00041
Prosenjit Chatterjee
Justin Spencer, Deborah Lawrence, Prosenjit Chatterjee, Kaushik Roy, Albert Esterline, and Jung-Hee Kim
Presentation Attack Detection using Convolutional Neural Networks and Local Binary Patterns
null
null
null
null
cs.CR cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The use of biometrics to authenticate users and control access to secure areas has become extremely popular in recent years, and biometric access control systems are frequently used by both governments and private corporations. However, these systems may represent risks to security when deployed without considering the possibility of biometric presentation attacks (also known as spoofing). Presentation attacks are a serious threat because they do not require significant time, expense, or skill to carry out while remaining effective against many biometric systems in use today. This research compares three different software-based methods for facial and iris presentation attack detection in images. The first method uses Inception-v3, a pre-trained deep Convolutional Neural Network (CNN) made by Google for the ImageNet challenge, which is retrained for this problem. The second uses a shallow CNN based on a modified Spoofnet architecture, which is trained normally. The third is a texture-based method using Local Binary Patterns (LBP). The datasets used are the ATVS-FIr dataset, which contains real and fake iris images, and the CASIA Face Anti-Spoofing Dataset, which contains real images as well as warped photos, cut photos, and video replay presentation attacks. We also present a third set of results, based on cropped versions of the CASIA images.
[ { "created": "Thu, 23 Nov 2023 20:57:07 GMT", "version": "v1" } ]
2023-12-04
[ [ "Spencer", "Justin", "" ], [ "Lawrence", "Deborah", "" ], [ "Chatterjee", "Prosenjit", "" ], [ "Roy", "Kaushik", "" ], [ "Esterline", "Albert", "" ], [ "Kim", "Jung-Hee", "" ] ]
The use of biometrics to authenticate users and control access to secure areas has become extremely popular in recent years, and biometric access control systems are frequently used by both governments and private corporations. However, these systems may represent risks to security when deployed without considering the possibility of biometric presentation attacks (also known as spoofing). Presentation attacks are a serious threat because they do not require significant time, expense, or skill to carry out while remaining effective against many biometric systems in use today. This research compares three different software-based methods for facial and iris presentation attack detection in images. The first method uses Inception-v3, a pre-trained deep Convolutional Neural Network (CNN) made by Google for the ImageNet challenge, which is retrained for this problem. The second uses a shallow CNN based on a modified Spoofnet architecture, which is trained normally. The third is a texture-based method using Local Binary Patterns (LBP). The datasets used are the ATVS-FIr dataset, which contains real and fake iris images, and the CASIA Face Anti-Spoofing Dataset, which contains real images as well as warped photos, cut photos, and video replay presentation attacks. We also present a third set of results, based on cropped versions of the CASIA images.
2111.02603
Kanishka Misra
Kanishka Misra
On Semantic Cognition, Inductive Generalization, and Language Models
Accepted at AAAI 2022 Doctoral Consortium
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
My doctoral research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs), by drawing on insights from the study of concepts and categories grounded in cognitive science. I propose a framework inspired by 'inductive reasoning,' a phenomenon that sheds light on how humans utilize background knowledge to make inductive leaps and generalize from new pieces of information about concepts and their properties. Drawing from experiments that study inductive reasoning, I propose to analyze semantic inductive generalization in LMs using phenomena observed in human-induction literature, investigate inductive behavior on tasks such as implicit reasoning and emergent feature recognition, and analyze and relate induction dynamics to the learned conceptual representation space.
[ { "created": "Thu, 4 Nov 2021 03:19:52 GMT", "version": "v1" } ]
2021-11-05
[ [ "Misra", "Kanishka", "" ] ]
My doctoral research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs), by drawing on insights from the study of concepts and categories grounded in cognitive science. I propose a framework inspired by 'inductive reasoning,' a phenomenon that sheds light on how humans utilize background knowledge to make inductive leaps and generalize from new pieces of information about concepts and their properties. Drawing from experiments that study inductive reasoning, I propose to analyze semantic inductive generalization in LMs using phenomena observed in human-induction literature, investigate inductive behavior on tasks such as implicit reasoning and emergent feature recognition, and analyze and relate induction dynamics to the learned conceptual representation space.
2006.11693
Teng Wang
Teng Wang, Huicheng Zheng, Mingjing Yu
Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020
Second-place solution to TASK 2 (Dense video captioning) in ActivityNet Challenge 2020. Code is available at https://github.com/ttengwang/dense-video-captioning-pytorch
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report presents a brief description of our submission to the dense video captioning task of ActivityNet Challenge 2020. Our approach follows a two-stage pipeline: first, we extract a set of temporal event proposals; then we propose a multi-event captioning model to capture the event-level temporal relationships and effectively fuse the multi-modal information. Our approach achieves a 9.28 METEOR score on the test set.
[ { "created": "Sun, 21 Jun 2020 02:38:59 GMT", "version": "v1" }, { "created": "Wed, 12 Aug 2020 03:44:21 GMT", "version": "v2" } ]
2020-08-13
[ [ "Wang", "Teng", "" ], [ "Zheng", "Huicheng", "" ], [ "Yu", "Mingjing", "" ] ]
This technical report presents a brief description of our submission to the dense video captioning task of ActivityNet Challenge 2020. Our approach follows a two-stage pipeline: first, we extract a set of temporal event proposals; then we propose a multi-event captioning model to capture the event-level temporal relationships and effectively fuse the multi-modal information. Our approach achieves a 9.28 METEOR score on the test set.
2209.11069
Milica Petkovic
Milica Petkovic, Dejan Vukobratovi\'c, Andrea Munari, Federico Clazzer
Relay-aided Slotted Aloha for Optical Wireless Communications
Published in: 2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP)
null
10.1109/CSNDSP49049.2020.9249592
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a relay-aided Slotted ALOHA solution for uplink random access in an Optical Wireless Communications (OWC)-based Internet of Things (IoT). The first phase of the uplink, between the IoT devices and the relays, is realized using indoor OWC, while the second phase, between the relays and a base station, is a long-range outdoor RF transmission based on low-power wide-area networks such as LoRaWAN. We examine how throughput performance depends on the OWC and RF channel conditions. The performance gain due to adding relays is highlighted and investigated under different channel and traffic conditions.
[ { "created": "Thu, 22 Sep 2022 15:00:32 GMT", "version": "v1" } ]
2022-09-23
[ [ "Petkovic", "Milica", "" ], [ "Vukobratović", "Dejan", "" ], [ "Munari", "Andrea", "" ], [ "Clazzer", "Federico", "" ] ]
We consider a relay-aided Slotted ALOHA solution for uplink random access in an Optical Wireless Communications (OWC)-based Internet of Things (IoT). The first phase of the uplink, between the IoT devices and the relays, is realized using indoor OWC, while the second phase, between the relays and a base station, is a long-range outdoor RF transmission based on low-power wide-area networks such as LoRaWAN. We examine how throughput performance depends on the OWC and RF channel conditions. The performance gain due to adding relays is highlighted and investigated under different channel and traffic conditions.
2405.02008
Peijin Jia
Peijin Jia, Tuopu Wen, Ziang Luo, Mengmeng Yang, Kun Jiang, Zhiquan Lei, Xuewei Tang, Ziyuan Liu, Le Cui, Kehua Sheng, Bo Zhang, Diange Yang
DiffMap: Enhancing Map Segmentation with Map Prior Using Diffusion Model
null
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Constructing high-definition (HD) maps is a crucial requirement for enabling autonomous driving. In recent years, several map segmentation algorithms have been developed to address this need, leveraging advancements in Bird's-Eye View (BEV) perception. However, existing models still encounter challenges in producing realistic and consistent semantic map layouts. One prominent issue is the limited utilization of structured priors inherent in map segmentation masks. In light of this, we propose DiffMap, a novel approach specifically designed to model the structured priors of map segmentation masks using a latent diffusion model. By incorporating this technique, the performance of existing semantic segmentation methods can be significantly enhanced, and certain structural errors present in the segmentation outputs can be effectively rectified. Notably, the proposed module can be seamlessly integrated into any map segmentation model, thereby augmenting its capability to accurately delineate semantic information. Furthermore, through extensive visualization analysis, our model demonstrates superior proficiency in generating results that more accurately reflect real-world map layouts, further validating its efficacy in improving the quality of the generated maps.
[ { "created": "Fri, 3 May 2024 11:16:27 GMT", "version": "v1" } ]
2024-05-06
[ [ "Jia", "Peijin", "" ], [ "Wen", "Tuopu", "" ], [ "Luo", "Ziang", "" ], [ "Yang", "Mengmeng", "" ], [ "Jiang", "Kun", "" ], [ "Lei", "Zhiquan", "" ], [ "Tang", "Xuewei", "" ], [ "Liu", "Ziyuan", "" ], [ "Cui", "Le", "" ], [ "Sheng", "Kehua", "" ], [ "Zhang", "Bo", "" ], [ "Yang", "Diange", "" ] ]
Constructing high-definition (HD) maps is a crucial requirement for enabling autonomous driving. In recent years, several map segmentation algorithms have been developed to address this need, leveraging advancements in Bird's-Eye View (BEV) perception. However, existing models still encounter challenges in producing realistic and consistent semantic map layouts. One prominent issue is the limited utilization of structured priors inherent in map segmentation masks. In light of this, we propose DiffMap, a novel approach specifically designed to model the structured priors of map segmentation masks using a latent diffusion model. By incorporating this technique, the performance of existing semantic segmentation methods can be significantly enhanced, and certain structural errors present in the segmentation outputs can be effectively rectified. Notably, the proposed module can be seamlessly integrated into any map segmentation model, thereby augmenting its capability to accurately delineate semantic information. Furthermore, through extensive visualization analysis, our model demonstrates superior proficiency in generating results that more accurately reflect real-world map layouts, further validating its efficacy in improving the quality of the generated maps.
2212.14814
R\'emi Pellerin
Christophe Crespelle, R\'emi Pellerin, St\'ephan Thomass\'e
A quasi-quadratic vertex Kernel for Cograph edge editing
null
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by-nc-sa/4.0/
We provide an $O(k^2 \log k)$ vertex kernel for cograph edge editing. This improves a cubic kernel found by Guillemot, Havet, Paul and Perez [1] which involved four reduction rules. We generalize one of their rules, based on packing of induced paths of length four, by introducing t-modules, which are modules up to t edge modifications. The key fact is that large t-modules cannot be edited more than t times, and this allows us to obtain a near-quadratic kernel. The extra $\log k$ factor seems tricky to remove as it is necessary in the combinatorial lemma on trees which is central in our proof. Nevertheless, we think that a quadratic bound should be reachable.
[ { "created": "Fri, 30 Dec 2022 16:23:27 GMT", "version": "v1" } ]
2023-01-02
[ [ "Crespelle", "Christophe", "" ], [ "Pellerin", "Rémi", "" ], [ "Thomassé", "Stéphan", "" ] ]
We provide an $O(k^2 \log k)$ vertex kernel for cograph edge editing. This improves a cubic kernel found by Guillemot, Havet, Paul and Perez [1] which involved four reduction rules. We generalize one of their rules, based on packing of induced paths of length four, by introducing t-modules, which are modules up to t edge modifications. The key fact is that large t-modules cannot be edited more than t times, and this allows us to obtain a near-quadratic kernel. The extra $\log k$ factor seems tricky to remove as it is necessary in the combinatorial lemma on trees which is central in our proof. Nevertheless, we think that a quadratic bound should be reachable.
2111.12796
Dongha Lee
Dongha Lee, Dongmin Hyun, Jiawei Han, Hwanjo Yu
Out-of-Category Document Identification Using Target-Category Names as Weak Supervision
ICDM 2021. 10 pages, 4 figures
null
null
null
cs.IR cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Identifying outlier documents, whose content is different from the majority of the documents in a corpus, has played an important role in managing a large text collection. However, due to the absence of explicit information about the inlier (or target) distribution, existing unsupervised outlier detectors are likely to produce unreliable results depending on the density or diversity of the outliers in the corpus. To address this challenge, we introduce a new task referred to as out-of-category detection, which aims to distinguish the documents according to their semantic relevance to the inlier (or target) categories by using the category names as weak supervision. In practice, this task can be widely applicable in that it can flexibly designate the scope of target categories according to users' interests while requiring only the target-category names as minimum guidance. In this paper, we present an out-of-category detection framework, which effectively measures how confidently each document belongs to one of the target categories based on its category-specific relevance score. Our framework adopts a two-step approach; (i) it first generates the pseudo-category label of all unlabeled documents by exploiting the word-document similarity encoded in a text embedding space, then (ii) it trains a neural classifier by using the pseudo-labels in order to compute the confidence from its target-category prediction. The experiments on real-world datasets demonstrate that our framework achieves the best detection performance among all baseline methods in various scenarios specifying different target categories.
[ { "created": "Wed, 24 Nov 2021 21:01:25 GMT", "version": "v1" } ]
2021-11-29
[ [ "Lee", "Dongha", "" ], [ "Hyun", "Dongmin", "" ], [ "Han", "Jiawei", "" ], [ "Yu", "Hwanjo", "" ] ]
Identifying outlier documents, whose content is different from the majority of the documents in a corpus, has played an important role in managing a large text collection. However, due to the absence of explicit information about the inlier (or target) distribution, existing unsupervised outlier detectors are likely to produce unreliable results depending on the density or diversity of the outliers in the corpus. To address this challenge, we introduce a new task referred to as out-of-category detection, which aims to distinguish the documents according to their semantic relevance to the inlier (or target) categories by using the category names as weak supervision. In practice, this task can be widely applicable in that it can flexibly designate the scope of target categories according to users' interests while requiring only the target-category names as minimum guidance. In this paper, we present an out-of-category detection framework, which effectively measures how confidently each document belongs to one of the target categories based on its category-specific relevance score. Our framework adopts a two-step approach; (i) it first generates the pseudo-category label of all unlabeled documents by exploiting the word-document similarity encoded in a text embedding space, then (ii) it trains a neural classifier by using the pseudo-labels in order to compute the confidence from its target-category prediction. The experiments on real-world datasets demonstrate that our framework achieves the best detection performance among all baseline methods in various scenarios specifying different target categories.
1701.08481
C.-C. Jay Kuo
C.-C. Jay Kuo
CNN as Guided Multi-layer RECOS Transform
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a resurging interest in developing a neural-network-based solution to the supervised machine learning problem. The convolutional neural network (CNN) will be studied in this note. To begin with, we introduce a RECOS transform as a basic building block of CNNs. The "RECOS" is an acronym for "REctified-COrrelations on a Sphere". It consists of two main concepts: 1) data clustering on a sphere and 2) rectification. Afterwards, we interpret a CNN as a network that implements the guided multi-layer RECOS transform with three highlights. First, we compare the traditional single-layer and modern multi-layer signal analysis approaches, point out key ingredients that enable the multi-layer approach, and provide a full explanation to the operating principle of CNNs. Second, we discuss how guidance is provided by labels through backpropagation (BP) in the training. Third, we show that a trained network can be greatly simplified in the testing stage demanding only one-bit representation for both filter weights and inputs.
[ { "created": "Mon, 30 Jan 2017 04:39:36 GMT", "version": "v1" }, { "created": "Thu, 16 Feb 2017 07:41:27 GMT", "version": "v2" }, { "created": "Sun, 19 Feb 2017 07:42:24 GMT", "version": "v3" } ]
2017-02-21
[ [ "Kuo", "C. -C. Jay", "" ] ]
There is a resurging interest in developing a neural-network-based solution to the supervised machine learning problem. The convolutional neural network (CNN) will be studied in this note. To begin with, we introduce a RECOS transform as a basic building block of CNNs. The "RECOS" is an acronym for "REctified-COrrelations on a Sphere". It consists of two main concepts: 1) data clustering on a sphere and 2) rectification. Afterwards, we interpret a CNN as a network that implements the guided multi-layer RECOS transform with three highlights. First, we compare the traditional single-layer and modern multi-layer signal analysis approaches, point out key ingredients that enable the multi-layer approach, and provide a full explanation to the operating principle of CNNs. Second, we discuss how guidance is provided by labels through backpropagation (BP) in the training. Third, we show that a trained network can be greatly simplified in the testing stage demanding only one-bit representation for both filter weights and inputs.
2402.08284
Takanori Ugai
Takanori Ugai, Yusuke Koyanagi, Fumihito Nishino
A Logical Approach to Criminal Case Investigation
11 pages, 11 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
XAI (eXplainable AI) techniques, which have the property of explaining the reasons for their conclusions, i.e. explainability or interpretability, are attracting attention. XAI is expected to be used in the development of forensic science and the justice system. In today's forensic and criminal investigation environment, experts face many challenges due to large amounts of data, small pieces of evidence in a chaotic and complex environment, traditional laboratory structures and sometimes inadequate knowledge. All of these can lead to failed investigations and miscarriages of justice. In this paper, we describe the application of a logical approach to crime scene investigation. The subject of the application is ``The Adventure of the Speckled Band'' from the Sherlock Holmes short stories. The applied data is the knowledge graph created for the Knowledge Graph Reasoning Challenge. We tried to identify the murderer by inferring, for each person, their motive, opportunity, and method. We created an ontology of motives and methods of murder from dictionaries, added it to the knowledge graph of ``The Adventure of the Speckled Band'', and applied scripts to determine motives, opportunities, and methods.
[ { "created": "Tue, 13 Feb 2024 08:24:32 GMT", "version": "v1" } ]
2024-02-14
[ [ "Ugai", "Takanori", "" ], [ "Koyanagi", "Yusuke", "" ], [ "Nishino", "Fumihito", "" ] ]
XAI (eXplainable AI) techniques, which have the property of explaining the reasons for their conclusions, i.e. explainability or interpretability, are attracting attention. XAI is expected to be used in the development of forensic science and the justice system. In today's forensic and criminal investigation environment, experts face many challenges due to large amounts of data, small pieces of evidence in a chaotic and complex environment, traditional laboratory structures and sometimes inadequate knowledge. All of these can lead to failed investigations and miscarriages of justice. In this paper, we describe the application of a logical approach to crime scene investigation. The subject of the application is ``The Adventure of the Speckled Band'' from the Sherlock Holmes short stories. The applied data is the knowledge graph created for the Knowledge Graph Reasoning Challenge. We tried to identify the murderer by inferring, for each person, their motive, opportunity, and method. We created an ontology of motives and methods of murder from dictionaries, added it to the knowledge graph of ``The Adventure of the Speckled Band'', and applied scripts to determine motives, opportunities, and methods.
2311.13489
Arun Narayanan
Arun Narayanan, Evangelos Pournaras, Pedro H. J. Nardelli
Large-scale Package Deliveries with Unmanned Aerial Vehicles using Collective Learning
null
null
null
null
cs.MA
http://creativecommons.org/licenses/by/4.0/
Unmanned aerial vehicles (UAVs) have significant practical advantages for delivering packages, and many logistics companies have begun deploying UAVs for commercial package deliveries. To deliver packages quickly and cost-effectively, the routes taken by UAVs from depots to customers must be optimized. This route optimization problem, a type of capacitated vehicle routing problem, has recently attracted considerable research interest. However, few papers have dealt with large-scale deliveries, where the number of customers exceeds 1000. We present an innovative, practical package delivery model wherein multiple UAVs deliver multiple packages to customers who are compensated for late deliveries. Further, we propose an innovative methodology that combines a new plan-generation algorithm with a collective-learning heuristic to quickly determine cost-effective paths of UAVs even for large-scale deliveries of up to 10000 customers. Specialized settings are applied to a collective-learning heuristic, the Iterative Economic Planning and Optimized Selections (I-EPOS), in order to coordinate the collective actions of the UAVs. To demonstrate our methodology, we applied our highly flexible approach to a depot at Heathrow Airport, London. We show that a coordinated approach, in which the UAVs collectively determine their flight paths, leads to lower operational costs than an uncoordinated approach. Further, the coordinated approach enables large-scale package deliveries.
[ { "created": "Wed, 22 Nov 2023 16:04:39 GMT", "version": "v1" } ]
2023-11-23
[ [ "Narayanan", "Arun", "" ], [ "Pournaras", "Evangelos", "" ], [ "Nardelli", "Pedro H. J.", "" ] ]
Unmanned aerial vehicles (UAVs) have significant practical advantages for delivering packages, and many logistics companies have begun deploying UAVs for commercial package deliveries. To deliver packages quickly and cost-effectively, the routes taken by UAVs from depots to customers must be optimized. This route optimization problem, a type of capacitated vehicle routing problem, has recently attracted considerable research interest. However, few papers have dealt with large-scale deliveries, where the number of customers exceeds 1000. We present an innovative, practical package delivery model wherein multiple UAVs deliver multiple packages to customers who are compensated for late deliveries. Further, we propose an innovative methodology that combines a new plan-generation algorithm with a collective-learning heuristic to quickly determine cost-effective paths of UAVs even for large-scale deliveries of up to 10000 customers. Specialized settings are applied to a collective-learning heuristic, the Iterative Economic Planning and Optimized Selections (I-EPOS), in order to coordinate the collective actions of the UAVs. To demonstrate our methodology, we applied our highly flexible approach to a depot at Heathrow Airport, London. We show that a coordinated approach, in which the UAVs collectively determine their flight paths, leads to lower operational costs than an uncoordinated approach. Further, the coordinated approach enables large-scale package deliveries.
1901.02693
Yuncheng Du
Jeongeun Son, Yuncheng Du
Model-based Stochastic Fault Detection and Diagnosis for Lithium-ion Batteries
null
Processes, 2019
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lithium-ion (Li-ion) battery is becoming the dominant energy storage solution in many applications such as hybrid electric and electric vehicles, due to its higher energy density and longer life cycle. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system to accurately predict and monitor the thermal behaviors of Li-ion batteries. Using first-principle models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. Models of Li-ion batteries are typically derived from the underlying physical phenomena. To make the model tractable and useful, it is common to make simplifications during model development, which may consequently introduce mismatch between models and battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time-varying phenomena or model calibration with noisy data. A two-step FDD algorithm is developed in this work to correct the model of Li-ion battery cells and to identify faulty operations from a normal operating condition. An iterative optimization problem is proposed to correct the model by incorporating the errors between measured quantities and model predictions, which is followed by an optimization-based FDD to provide a probabilistic description of the occurrence of possible faults, while taking the uncertainty into account. The two-step stochastic FDD algorithm in this work is shown to be efficient in terms of fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations.
[ { "created": "Wed, 9 Jan 2019 12:29:07 GMT", "version": "v1" } ]
2019-01-10
[ [ "Son", "Jeongeun", "" ], [ "Du", "Yuncheng", "" ] ]
The lithium-ion (Li-ion) battery is becoming the dominant energy storage solution in many applications such as hybrid electric and electric vehicles, due to its higher energy density and longer life cycle. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system to accurately predict and monitor the thermal behaviors of Li-ion batteries. Using first-principle models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. Models of Li-ion batteries are typically derived from the underlying physical phenomena. To make the model tractable and useful, it is common to make simplifications during model development, which may consequently introduce mismatch between models and battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time-varying phenomena or model calibration with noisy data. A two-step FDD algorithm is developed in this work to correct the model of Li-ion battery cells and to identify faulty operations from a normal operating condition. An iterative optimization problem is proposed to correct the model by incorporating the errors between measured quantities and model predictions, which is followed by an optimization-based FDD to provide a probabilistic description of the occurrence of possible faults, while taking the uncertainty into account. The two-step stochastic FDD algorithm in this work is shown to be efficient in terms of fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations.
1701.07482
Alessandro Vittorio Papadopoulos
Alessandro Vittorio Papadopoulos, Federico Terraneo, Alberto Leva, Maria Prandini
Switched control for quantized feedback systems: invariance and limit cycle analysis
12 pages, 14 figures
null
10.1109/TAC.2018.2797246
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study feedback control for discrete-time linear time-invariant systems in the presence of quantization both in the control action and in the measurement of the controlled variable. While in some applications the quantization effects can be neglected, when high-precision control is needed, they have to be explicitly accounted for in control design. In this paper we propose a switched control solution for minimizing the effect of quantization of both the control and controlled variables in the case of a simple integrator with unitary delay, a model that is quite common in the computing systems domain, for example in thread scheduling, clock synchronization, and resource allocation. We show that the switched solution outperforms the one without switching, designed by neglecting quantization, and analyze necessary and sufficient conditions for the controlled system to exhibit periodic solutions in the presence of an additive constant disturbance affecting the control input. Simulation results provide evidence of the effectiveness of the approach.
[ { "created": "Wed, 25 Jan 2017 20:43:02 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2017 14:42:31 GMT", "version": "v2" } ]
2020-05-12
[ [ "Papadopoulos", "Alessandro Vittorio", "" ], [ "Terraneo", "Federico", "" ], [ "Leva", "Alberto", "" ], [ "Prandini", "Maria", "" ] ]
We study feedback control for discrete-time linear time-invariant systems in the presence of quantization both in the control action and in the measurement of the controlled variable. While in some applications the quantization effects can be neglected, when high-precision control is needed, they have to be explicitly accounted for in control design. In this paper we propose a switched control solution for minimizing the effect of quantization of both the control and controlled variables in the case of a simple integrator with unitary delay, a model that is quite common in the computing systems domain, for example in thread scheduling, clock synchronization, and resource allocation. We show that the switched solution outperforms the one without switching, designed by neglecting quantization, and analyze necessary and sufficient conditions for the controlled system to exhibit periodic solutions in the presence of an additive constant disturbance affecting the control input. Simulation results provide evidence of the effectiveness of the approach.
1807.03865
Caleb Stanford
Rajeev Alur, Dana Fisman, Konstantinos Mamouras, Mukund Raghothaman, Caleb Stanford
Streamable Regular Transductions
53 pages
null
null
null
cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by real-time monitoring and data processing applications, we develop a formal theory of quantitative queries for streaming data that can be evaluated efficiently. We consider the model of unambiguous Cost Register Automata (CRAs), which are machines that combine finite-state control (for identifying regular patterns) with a finite set of data registers (for computing numerical aggregates). The definition of CRAs is parameterized by the collection of numerical operations that can be applied to the registers. These machines give rise to the class of streamable regular transductions (SR), and to the class of streamable linear regular transductions (SLR) when the register updates are copyless, i.e. every register appears at most once in the right-hand-side expressions of the updates. We give a logical characterization of the class SR (resp., SLR) using MSO-definable transformations from strings to DAGs (resp., trees) without backward edges. Additionally, we establish that the two classes SR and SLR are closed under operations that are relevant for designing query languages. Finally, we study the relationship with weighted automata (WA), and show that CRAs over a suitably chosen set of operations correspond to WA, thus establishing that WA are a special case of CRAs.
[ { "created": "Tue, 10 Jul 2018 21:11:24 GMT", "version": "v1" }, { "created": "Sun, 3 Nov 2019 21:13:50 GMT", "version": "v2" } ]
2019-11-05
[ [ "Alur", "Rajeev", "" ], [ "Fisman", "Dana", "" ], [ "Mamouras", "Konstantinos", "" ], [ "Raghothaman", "Mukund", "" ], [ "Stanford", "Caleb", "" ] ]
Motivated by real-time monitoring and data processing applications, we develop a formal theory of quantitative queries for streaming data that can be evaluated efficiently. We consider the model of unambiguous Cost Register Automata (CRAs), which are machines that combine finite-state control (for identifying regular patterns) with a finite set of data registers (for computing numerical aggregates). The definition of CRAs is parameterized by the collection of numerical operations that can be applied to the registers. These machines give rise to the class of streamable regular transductions (SR), and to the class of streamable linear regular transductions (SLR) when the register updates are copyless, i.e. every register appears at most once in the right-hand-side expressions of the updates. We give a logical characterization of the class SR (resp., SLR) using MSO-definable transformations from strings to DAGs (resp., trees) without backward edges. Additionally, we establish that the two classes SR and SLR are closed under operations that are relevant for designing query languages. Finally, we study the relationship with weighted automata (WA), and show that CRAs over a suitably chosen set of operations correspond to WA, thus establishing that WA are a special case of CRAs.
1108.0333
Gabriele Oliva
Stefano Panzieri, Gabriele Oliva and Roberto Setola
Fuzzy Consensus and Synchronization: Theory and Application to Critical Infrastructure Protection Problems
Sidra Automatica.it 2011 Conference, 7-9 September 2011, Pisa, Italy. in Automatica
null
null
null
cs.SY cs.MA math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper the Distributed Consensus and Synchronization problems with fuzzy-valued initial conditions are introduced, in order to obtain a shared estimation of the state of a system based on partial and distributed observations, in the case where such a state is affected by ambiguity and/or vagueness. The Discrete-Time Fuzzy Systems (DFS) are introduced as an extension of scalar fuzzy difference equations and some conditions for their stability and representation are provided. The proposed framework is then applied in the field of Critical Infrastructures; the consensus framework is used to represent a scenario where human operators, each able to observe directly the state of a given infrastructure (or of a given area considering vast and geographically dispersed infrastructures), reach an agreement on the overall situation, whose severity is expressed in a linguistic, fuzzy way; conversely synchronization is used to provide a distributed interdependency estimation system, where an array of interdependency models is synchronized via partial observation.
[ { "created": "Mon, 1 Aug 2011 15:06:40 GMT", "version": "v1" }, { "created": "Sun, 14 Aug 2011 17:26:23 GMT", "version": "v2" }, { "created": "Tue, 27 Dec 2011 14:06:34 GMT", "version": "v3" } ]
2015-03-19
[ [ "Panzieri", "Stefano", "" ], [ "Oliva", "Gabriele", "" ], [ "Setola", "Roberto", "" ] ]
In this paper the Distributed Consensus and Synchronization problems with fuzzy-valued initial conditions are introduced, in order to obtain a shared estimation of the state of a system based on partial and distributed observations, in the case where such a state is affected by ambiguity and/or vagueness. The Discrete-Time Fuzzy Systems (DFS) are introduced as an extension of scalar fuzzy difference equations and some conditions for their stability and representation are provided. The proposed framework is then applied in the field of Critical Infrastructures; the consensus framework is used to represent a scenario where human operators, each able to observe directly the state of a given infrastructure (or of a given area considering vast and geographically dispersed infrastructures), reach an agreement on the overall situation, whose severity is expressed in a linguistic, fuzzy way; conversely synchronization is used to provide a distributed interdependency estimation system, where an array of interdependency models is synchronized via partial observation.
2302.10088
Sandy Manolios
Sandy Manolios, Catholijn M. Jonker, Cynthia C.S. Liem
Registered Report : Perception of Other's Musical Preferences Based on Their Personal Values
11 Pages, 3 Figures
null
null
null
cs.MM
http://creativecommons.org/licenses/by/4.0/
The present work is part of a research line seeking to uncover the mysteries of what lies behind people's musical preferences in order to provide better music recommendations. More specifically, it takes the angle of personal values. Personal values are what we as people strive for, and are a popular tool in marketing research to understand customer preferences for certain types of product. Therefore, it makes sense to explore their usefulness in the music domain. Based on a previous qualitative work using the Means-End theory, we designed a survey in an attempt to more quantitatively approach the relationship between personal values and musical preferences. We support our approach with a simulation study as a tool to improve the experimental procedure and decisions.
[ { "created": "Mon, 20 Feb 2023 16:49:27 GMT", "version": "v1" } ]
2023-02-21
[ [ "Manolios", "Sandy", "" ], [ "Jonker", "Catholijn M.", "" ], [ "Liem", "Cynthia C. S.", "" ] ]
The present work is part of a research line seeking to uncover the mysteries of what lies behind people's musical preferences in order to provide better music recommendations. More specifically, it takes the angle of personal values. Personal values are what we as people strive for, and are a popular tool in marketing research to understand customer preferences for certain types of product. Therefore, it makes sense to explore their usefulness in the music domain. Based on a previous qualitative work using the Means-End theory, we designed a survey in an attempt to more quantitatively approach the relationship between personal values and musical preferences. We support our approach with a simulation study as a tool to improve the experimental procedure and decisions.
2405.13018
Ahmed Attia
Ahmed Adel Attia, Dorottya Demszky, Tolulope Ogunremi, Jing Liu, Carol Espy-Wilson
Continued Pretraining for Domain Adaptation of Wav2vec2.0 in Automatic Speech Recognition for Elementary Math Classroom Settings
null
null
null
null
cs.CL cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Creating Automatic Speech Recognition (ASR) systems that are robust and resilient to classroom conditions is paramount to the development of AI tools to aid teachers and students. In this work, we study the efficacy of continued pretraining (CPT) in adapting Wav2vec2.0 to the classroom domain. We show that CPT is a powerful tool in that regard and reduces the Word Error Rate (WER) of Wav2vec2.0-based models by upwards of 10%. More specifically, CPT improves the model's robustness to different noises, microphones, classroom conditions as well as classroom demographics. Our CPT models show improved ability to generalize to different demographics unseen in the labeled finetuning data.
[ { "created": "Wed, 15 May 2024 06:59:33 GMT", "version": "v1" } ]
2024-05-24
[ [ "Attia", "Ahmed Adel", "" ], [ "Demszky", "Dorottya", "" ], [ "Ogunremi", "Tolulope", "" ], [ "Liu", "Jing", "" ], [ "Espy-Wilson", "Carol", "" ] ]
Creating Automatic Speech Recognition (ASR) systems that are robust and resilient to classroom conditions is paramount to the development of AI tools to aid teachers and students. In this work, we study the efficacy of continued pretraining (CPT) in adapting Wav2vec2.0 to the classroom domain. We show that CPT is a powerful tool in that regard and reduces the Word Error Rate (WER) of Wav2vec2.0-based models by upwards of 10%. More specifically, CPT improves the model's robustness to different noises, microphones, classroom conditions as well as classroom demographics. Our CPT models show improved ability to generalize to different demographics unseen in the labeled finetuning data.
1806.00525
Huda Alamri
Huda Alamri, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Jue Wang, Irfan Essa, Dhruv Batra, Devi Parikh, Anoop Cherian, Tim K. Marks, Chiori Hori
Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7
null
null
null
null
cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene-aware dialog systems will be able to have conversations with users about the objects and events around them. Progress on such systems can be made by integrating state-of-the-art technologies from multiple research areas including end-to-end dialog systems, visual dialog, and video description. We introduce the Audio Visual Scene Aware Dialog (AVSD) challenge and dataset. In this challenge, which is one track of the 7th Dialog System Technology Challenges (DSTC7) workshop, the task is to build a system that generates responses in a dialog about an input video.
[ { "created": "Fri, 1 Jun 2018 19:51:58 GMT", "version": "v1" } ]
2018-06-05
[ [ "Alamri", "Huda", "" ], [ "Cartillier", "Vincent", "" ], [ "Lopes", "Raphael Gontijo", "" ], [ "Das", "Abhishek", "" ], [ "Wang", "Jue", "" ], [ "Essa", "Irfan", "" ], [ "Batra", "Dhruv", "" ], [ "Parikh", "Devi", "" ], [ "Cherian", "Anoop", "" ], [ "Marks", "Tim K.", "" ], [ "Hori", "Chiori", "" ] ]
Scene-aware dialog systems will be able to have conversations with users about the objects and events around them. Progress on such systems can be made by integrating state-of-the-art technologies from multiple research areas including end-to-end dialog systems, visual dialog, and video description. We introduce the Audio Visual Scene Aware Dialog (AVSD) challenge and dataset. In this challenge, which is one track of the 7th Dialog System Technology Challenges (DSTC7) workshop, the task is to build a system that generates responses in a dialog about an input video.
2310.17075
Sudarshan Babu
Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka
HyperFields: Towards Zero-Shot Generation of NeRFs from Text
Accepted to ICML 2024, Project page: https://threedle.github.io/hyperfields/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass and (optionally) some fine-tuning. Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation training, which distills scenes encoded in individual NeRFs into one dynamic hypernetwork. These techniques enable a single network to fit over a hundred unique scenes. We further demonstrate that HyperFields learns a more general map between text and NeRFs, and consequently is capable of predicting novel in-distribution and out-of-distribution scenes -- either zero-shot or with a few finetuning steps. Finetuning HyperFields benefits from accelerated convergence thanks to the learned general map, and is capable of synthesizing novel scenes 5 to 10 times faster than existing neural optimization-based methods. Our ablation experiments show that both the dynamic architecture and NeRF distillation are critical to the expressivity of HyperFields.
[ { "created": "Thu, 26 Oct 2023 00:36:03 GMT", "version": "v1" }, { "created": "Fri, 27 Oct 2023 14:35:04 GMT", "version": "v2" }, { "created": "Thu, 13 Jun 2024 17:59:14 GMT", "version": "v3" } ]
2024-06-14
[ [ "Babu", "Sudarshan", "" ], [ "Liu", "Richard", "" ], [ "Zhou", "Avery", "" ], [ "Maire", "Michael", "" ], [ "Shakhnarovich", "Greg", "" ], [ "Hanocka", "Rana", "" ] ]
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass and (optionally) some fine-tuning. Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation training, which distills scenes encoded in individual NeRFs into one dynamic hypernetwork. These techniques enable a single network to fit over a hundred unique scenes. We further demonstrate that HyperFields learns a more general map between text and NeRFs, and consequently is capable of predicting novel in-distribution and out-of-distribution scenes -- either zero-shot or with a few finetuning steps. Finetuning HyperFields benefits from accelerated convergence thanks to the learned general map, and is capable of synthesizing novel scenes 5 to 10 times faster than existing neural optimization-based methods. Our ablation experiments show that both the dynamic architecture and NeRF distillation are critical to the expressivity of HyperFields.
1302.6570
Jianwei Xie
Jianwei Xie, Sennur Ulukus
Secure Degrees of Freedom of the Gaussian Wiretap Channel with Helpers and No Eavesdropper CSI: Blind Cooperative Jamming
To appear in the CISS 2013
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Gaussian wiretap channel with M helpers, where no eavesdropper channel state information (CSI) is available at the legitimate entities. The exact secure d.o.f. of the Gaussian wiretap channel with M helpers with perfect CSI at the transmitters was found in [1], [2] to be M/(M+1). One of the key ingredients of the optimal achievable scheme in [1], [2] is to align cooperative jamming signals with the information symbols at the eavesdropper to limit the information leakage rate. This required perfect eavesdropper CSI at the transmitters. Motivated by the recent result in [3], we propose a new achievable scheme in which cooperative jamming signals span the entire space of the eavesdropper, but are not exactly aligned with the information symbols. We show that this scheme achieves the same secure d.o.f. of M/(M+1) in [1], [2] but does not require any eavesdropper CSI; the transmitters blindly cooperatively jam the eavesdropper.
[ { "created": "Tue, 26 Feb 2013 20:35:12 GMT", "version": "v1" } ]
2013-02-27
[ [ "Xie", "Jianwei", "" ], [ "Ulukus", "Sennur", "" ] ]
We consider the Gaussian wiretap channel with M helpers, where no eavesdropper channel state information (CSI) is available at the legitimate entities. The exact secure d.o.f. of the Gaussian wiretap channel with M helpers with perfect CSI at the transmitters was found in [1], [2] to be M/(M+1). One of the key ingredients of the optimal achievable scheme in [1], [2] is to align cooperative jamming signals with the information symbols at the eavesdropper to limit the information leakage rate. This required perfect eavesdropper CSI at the transmitters. Motivated by the recent result in [3], we propose a new achievable scheme in which cooperative jamming signals span the entire space of the eavesdropper, but are not exactly aligned with the information symbols. We show that this scheme achieves the same secure d.o.f. of M/(M+1) in [1], [2] but does not require any eavesdropper CSI; the transmitters blindly cooperatively jam the eavesdropper.
2402.16886
Rishabh Goel
Rishabh Goel
Using text embedding models and vector databases as text classifiers with the example of medical data
11 pages, 8 figures, All robustness tests are in a linked pdf
null
null
null
cs.IR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The advent of Large Language Models (LLMs) is promising and has found application in numerous fields, but as it often is with the medical field, the bar is typically quite high [5]. In tandem with LLMs, vector embedding models and vector databases provide a robust way of expressing numerous modes of data that are easily digestible by typical machine learning models. Along with the ease of adding information, knowledge, and data to these vector databases, they provide a compelling reason to apply them in numerous fields where the task of retrieving information is typically done by humans. Researchers at Google have developed a clear alternative model, Med-PaLM [6], specifically designed to match a clinician's level of accuracy when it comes to medical knowledge. When training classifiers and developing models, it is imperative to maintain factuality and reduce bias [4]. Here, we explore the use of vector databases and embedding models as a means of encoding and classifying text, with the example and application in the field of medicine. We show the robustness of these tools depends heavily on the sparsity of the data presented, and even with low amounts of data in the vector database itself, the vector database does a good job at classifying data [9]. Using various LLMs to generate the medical data, we also understand the limitations of the medical knowledge of these models and encourage further expert medical review of our testing data. By using vector databases to classify a clinician's notes on a patient presented with a certain ailment, we understand the limitations of such methods, but also the promise of their prospective use and, with continued testing and experimentation, hope to explore a unique use case of vector databases and embedding models.
[ { "created": "Wed, 7 Feb 2024 22:15:15 GMT", "version": "v1" } ]
2024-02-28
[ [ "Goel", "Rishabh", "" ] ]
The advent of Large Language Models (LLMs) is promising and has found application in numerous fields, but as it often is with the medical field, the bar is typically quite high [5]. In tandem with LLMs, vector embedding models and vector databases provide a robust way of expressing numerous modes of data that are easily digestible by typical machine learning models. Along with the ease of adding information, knowledge, and data to these vector databases, they provide a compelling reason to apply them in numerous fields where the task of retrieving information is typically done by humans. Researchers at Google have developed a clear alternative model, Med-PaLM [6], specifically designed to match a clinician's level of accuracy when it comes to medical knowledge. When training classifiers and developing models, it is imperative to maintain factuality and reduce bias [4]. Here, we explore the use of vector databases and embedding models as a means of encoding and classifying text, with the example and application in the field of medicine. We show the robustness of these tools depends heavily on the sparsity of the data presented, and even with low amounts of data in the vector database itself, the vector database does a good job at classifying data [9]. Using various LLMs to generate the medical data, we also understand the limitations of the medical knowledge of these models and encourage further expert medical review of our testing data. By using vector databases to classify a clinician's notes on a patient presented with a certain ailment, we understand the limitations of such methods, but also the promise of their prospective use and, with continued testing and experimentation, hope to explore a unique use case of vector databases and embedding models.
2103.12011
Jonathan Herzig
Jonathan Herzig, Thomas M\"uller, Syrine Krichene, Julian Martin Eisenschlos
Open Domain Question Answering over Tables via Dense Retrieval
NAACL 2021 camera ready
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever.
[ { "created": "Mon, 22 Mar 2021 17:01:04 GMT", "version": "v1" }, { "created": "Wed, 9 Jun 2021 09:39:24 GMT", "version": "v2" } ]
2021-06-10
[ [ "Herzig", "Jonathan", "" ], [ "Müller", "Thomas", "" ], [ "Krichene", "Syrine", "" ], [ "Eisenschlos", "Julian Martin", "" ] ]
Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever.
2404.10575
Chung-Yiu Yau
Chung-Yiu Yau, Hoi-To Wai, Parameswaran Raman, Soumajyoti Sarkar, Mingyi Hong
EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence
20 pages
null
null
null
cs.LG cs.AI cs.CV math.OC
http://creativecommons.org/licenses/by/4.0/
A key challenge in contrastive learning is to generate negative samples from a large sample set to contrast with positive samples, for learning better encoding of the data. These negative samples often follow a softmax distribution which are dynamically updated during the training process. However, sampling from this distribution is non-trivial due to the high computational costs in computing the partition function. In this paper, we propose an Efficient Markov Chain Monte Carlo negative sampling method for Contrastive learning (EMC$^2$). We follow the global contrastive learning loss as introduced in SogCLR, and propose EMC$^2$ which utilizes an adaptive Metropolis-Hastings subroutine to generate hardness-aware negative samples in an online fashion during the optimization. We prove that EMC$^2$ finds an $\mathcal{O}(1/\sqrt{T})$-stationary point of the global contrastive loss in $T$ iterations. Compared to prior works, EMC$^2$ is the first algorithm that exhibits global convergence (to stationarity) regardless of the choice of batch size while exhibiting low computation and memory cost. Numerical experiments validate that EMC$^2$ is effective with small batch training and achieves comparable or better performance than baseline algorithms. We report the results for pre-training image encoders on STL-10 and Imagenet-100.
[ { "created": "Tue, 16 Apr 2024 13:53:58 GMT", "version": "v1" } ]
2024-04-17
[ [ "Yau", "Chung-Yiu", "" ], [ "Wai", "Hoi-To", "" ], [ "Raman", "Parameswaran", "" ], [ "Sarkar", "Soumajyoti", "" ], [ "Hong", "Mingyi", "" ] ]
A key challenge in contrastive learning is to generate negative samples from a large sample set to contrast with positive samples, for learning better encoding of the data. These negative samples often follow a softmax distribution which are dynamically updated during the training process. However, sampling from this distribution is non-trivial due to the high computational costs in computing the partition function. In this paper, we propose an Efficient Markov Chain Monte Carlo negative sampling method for Contrastive learning (EMC$^2$). We follow the global contrastive learning loss as introduced in SogCLR, and propose EMC$^2$ which utilizes an adaptive Metropolis-Hastings subroutine to generate hardness-aware negative samples in an online fashion during the optimization. We prove that EMC$^2$ finds an $\mathcal{O}(1/\sqrt{T})$-stationary point of the global contrastive loss in $T$ iterations. Compared to prior works, EMC$^2$ is the first algorithm that exhibits global convergence (to stationarity) regardless of the choice of batch size while exhibiting low computation and memory cost. Numerical experiments validate that EMC$^2$ is effective with small batch training and achieves comparable or better performance than baseline algorithms. We report the results for pre-training image encoders on STL-10 and Imagenet-100.
1312.4071
Srinjoy Ganguly Mr.
Srinjoy Ganguly, Arpita Chakraborty and Mrinal Kanti Naskar
A Trust-based Framework for Congestion-aware Energy Efficient Routing in Wireless Multimedia Sensor Networks
5 pages, 3 figures and 0 tables. Poster Paper at the Student Research Symposium of the International Conference on High Performance Computing (HiPC), 2013
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new era in wireless sensor network technology has been ushered in through the introduction of multimedia sensor networks, which has a major bottleneck in the form of network congestion. Congestion occurs when resources are in high demand during the active period while the data processing and transmission speeds lag behind the speed of the incoming traffic. This may disrupt normal network operations by buffer overflow, packet loss, increased latency, excessive energy consumption and even worse, a collapse of the entire operation. In this paper we propose a novel Trust Integrated Congestion-aware Energy Efficient Routing algorithm (TCEER) in which the potential of a node is computed using its trust value, congestion status, residual energy, distance from the current packet-forwarding node and the distance from the base station using a Fuzzy Logic Controller. The source node selects the node of highest potential in its one hop radio range for data transmission. Hop by hop data routing from source to base station is obtained which is light-weight as well as energy-efficient. Finally, the merits of the proposed scheme are discussed by comparing it with the existing protocols and the study shows promising improvements in network performance.
[ { "created": "Sat, 14 Dec 2013 17:49:43 GMT", "version": "v1" } ]
2013-12-17
[ [ "Ganguly", "Srinjoy", "" ], [ "Chakraborty", "Arpita", "" ], [ "Naskar", "Mrinal Kanti", "" ] ]
A new era in wireless sensor network technology has been ushered in through the introduction of multimedia sensor networks, which has a major bottleneck in the form of network congestion. Congestion occurs when resources are in high demand during the active period while the data processing and transmission speeds lag behind the speed of the incoming traffic. This may disrupt normal network operations by buffer overflow, packet loss, increased latency, excessive energy consumption and even worse, a collapse of the entire operation. In this paper we propose a novel Trust Integrated Congestion-aware Energy Efficient Routing algorithm (TCEER) in which the potential of a node is computed using its trust value, congestion status, residual energy, distance from the current packet-forwarding node and the distance from the base station using a Fuzzy Logic Controller. The source node selects the node of highest potential in its one hop radio range for data transmission. Hop by hop data routing from source to base station is obtained which is light-weight as well as energy-efficient. Finally, the merits of the proposed scheme are discussed by comparing it with the existing protocols and the study shows promising improvements in network performance.
1807.10726
Bowen Zhang
Bowen Zhang, Xifan Zhang, Fan Cheng, Deli Zhao
Few Shot Learning with Simplex
There is still room for model improvement
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only a small amount of data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used as a measure of class scatter. During testing, a new simplex is formed from the test sample together with the points in the class. The similarity between the test sample and the class can then be quantified by the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.
[ { "created": "Fri, 27 Jul 2018 16:52:57 GMT", "version": "v1" }, { "created": "Tue, 30 Oct 2018 00:59:36 GMT", "version": "v2" } ]
2018-10-31
[ [ "Zhang", "Bowen", "" ], [ "Zhang", "Xifan", "" ], [ "Cheng", "Fan", "" ], [ "Zhao", "Deli", "" ] ]
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only a small amount of data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used as a measure of class scatter. During testing, a new simplex is formed from the test sample together with the points in the class. The similarity between the test sample and the class can then be quantified by the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.
1211.4290
Wojciech Golab
Muntasir Raihan Rahman, Wojciech Golab, Alvin AuYoung, Kimberly Keeton, Jay J. Wylie
Toward a Principled Framework for Benchmarking Consistency
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which---to this point---has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the-art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy. In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.
[ { "created": "Mon, 19 Nov 2012 02:59:53 GMT", "version": "v1" }, { "created": "Tue, 20 Nov 2012 02:25:27 GMT", "version": "v2" } ]
2012-11-21
[ [ "Rahman", "Muntasir Raihan", "" ], [ "Golab", "Wojciech", "" ], [ "AuYoung", "Alvin", "" ], [ "Keeton", "Kimberly", "" ], [ "Wylie", "Jay J.", "" ] ]
Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which---to this point---has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the-art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy. In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.
1109.5420
Lei Zhou
Lei Zhou and Wei Yu
Incremental Relaying for the Gaussian Interference Channel with a Degraded Broadcasting Relay
To appear in IEEE Trans. on Inf. Theory
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/3.0/
This paper studies incremental relay strategies for a two-user Gaussian relay-interference channel with an in-band-reception and out-of-band-transmission relay, where the link between the relay and the two receivers is modelled as a degraded broadcast channel. It is shown that generalized hash-and-forward (GHF) can achieve the capacity region of this channel to within a constant number of bits in a certain weak relay regime, where the transmitter-to-relay link gains are not unboundedly stronger than the interference links between the transmitters and the receivers. The GHF relaying strategy is ideally suited for the broadcasting relay because it can be implemented in an incremental fashion, i.e., the relay message to one receiver is a degraded version of the message to the other receiver. A generalized-degree-of-freedom (GDoF) analysis in the high signal-to-noise ratio (SNR) regime reveals that in the symmetric channel setting, each common relay bit can improve the sum rate roughly by either one bit or two bits asymptotically depending on the operating regime, and the rate gain can be interpreted as coming solely from the improvement of the common message rates, or alternatively in the very weak interference regime as coming solely from the rate improvement of the private messages. Further, this paper studies an asymmetric case in which the relay has only a single link to one of the destinations. It is shown that with only one relay-destination link, the approximate capacity region can be established for a larger regime of channel parameters. Further, from a GDoF point of view, the sum-capacity gain due to the relay can now be thought of as coming from either signal relaying only, or interference forwarding only.
[ { "created": "Mon, 26 Sep 2011 00:21:47 GMT", "version": "v1" }, { "created": "Mon, 7 Nov 2011 23:30:36 GMT", "version": "v2" }, { "created": "Wed, 15 Aug 2012 04:34:32 GMT", "version": "v3" }, { "created": "Wed, 12 Dec 2012 07:33:56 GMT", "version": "v4" } ]
2012-12-13
[ [ "Zhou", "Lei", "" ], [ "Yu", "Wei", "" ] ]
This paper studies incremental relay strategies for a two-user Gaussian relay-interference channel with an in-band-reception and out-of-band-transmission relay, where the link between the relay and the two receivers is modelled as a degraded broadcast channel. It is shown that generalized hash-and-forward (GHF) can achieve the capacity region of this channel to within a constant number of bits in a certain weak relay regime, where the transmitter-to-relay link gains are not unboundedly stronger than the interference links between the transmitters and the receivers. The GHF relaying strategy is ideally suited for the broadcasting relay because it can be implemented in an incremental fashion, i.e., the relay message to one receiver is a degraded version of the message to the other receiver. A generalized-degree-of-freedom (GDoF) analysis in the high signal-to-noise ratio (SNR) regime reveals that in the symmetric channel setting, each common relay bit can improve the sum rate roughly by either one bit or two bits asymptotically depending on the operating regime, and the rate gain can be interpreted as coming solely from the improvement of the common message rates, or alternatively in the very weak interference regime as coming solely from the rate improvement of the private messages. Further, this paper studies an asymmetric case in which the relay has only a single link to one of the destinations. It is shown that with only one relay-destination link, the approximate capacity region can be established for a larger regime of channel parameters. Further, from a GDoF point of view, the sum-capacity gain due to the relay can now be thought of as coming from either signal relaying only, or interference forwarding only.
1907.05284
Mhafuzul Islam
Mhafuzul Islam, Mizanur Rahman, Mashrur Chowdhury, Gurcan Comert, Eshaa Deepak Sood, Amy Apon
Vision-based Pedestrian Alert Safety System (PASS) for Signalized Intersections
23 pages, 8 figures
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although Vehicle-to-Pedestrian (V2P) communication can significantly improve pedestrian safety at a signalized intersection, this safety benefit is hindered because pedestrians often do not carry hand-held devices (e.g., Dedicated Short-Range Communication (DSRC)- or 5G-enabled cell phones) to communicate with nearby connected vehicles. To overcome this limitation, in this study traffic cameras at a signalized intersection were used to accurately detect and locate pedestrians via a vision-based deep learning technique and to generate real-time safety alerts about possible conflicts between vehicles and pedestrians. The contribution of this paper lies in the development of a system using a vision-based deep learning model that is able to generate personal safety messages (PSMs) in real-time (every 100 milliseconds). We develop a pedestrian alert safety system (PASS) that uses the generated PSMs to issue a safety alert of an imminent pedestrian-vehicle crash and thereby improve pedestrian safety at a signalized intersection. Our approach estimates the location and velocity of a pedestrian more accurately than existing DSRC-enabled pedestrian hand-held devices. A connected vehicle application, the Pedestrian in Signalized Crosswalk Warning (PSCW), was developed to evaluate the vision-based PASS. Numerical analyses show that our vision-based PASS is able to satisfy the accuracy and latency requirements of pedestrian safety applications in a connected vehicle environment.
[ { "created": "Tue, 2 Jul 2019 02:17:55 GMT", "version": "v1" } ]
2019-07-12
[ [ "Islam", "Mhafuzul", "" ], [ "Rahman", "Mizanur", "" ], [ "Chowdhury", "Mashrur", "" ], [ "Comert", "Gurcan", "" ], [ "Sood", "Eshaa Deepak", "" ], [ "Apon", "Amy", "" ] ]
Although Vehicle-to-Pedestrian (V2P) communication can significantly improve pedestrian safety at a signalized intersection, this safety benefit is hindered because pedestrians often do not carry hand-held devices (e.g., Dedicated Short-Range Communication (DSRC)- or 5G-enabled cell phones) to communicate with nearby connected vehicles. To overcome this limitation, in this study traffic cameras at a signalized intersection were used to accurately detect and locate pedestrians via a vision-based deep learning technique and to generate real-time safety alerts about possible conflicts between vehicles and pedestrians. The contribution of this paper lies in the development of a system using a vision-based deep learning model that is able to generate personal safety messages (PSMs) in real-time (every 100 milliseconds). We develop a pedestrian alert safety system (PASS) that uses the generated PSMs to issue a safety alert of an imminent pedestrian-vehicle crash and thereby improve pedestrian safety at a signalized intersection. Our approach estimates the location and velocity of a pedestrian more accurately than existing DSRC-enabled pedestrian hand-held devices. A connected vehicle application, the Pedestrian in Signalized Crosswalk Warning (PSCW), was developed to evaluate the vision-based PASS. Numerical analyses show that our vision-based PASS is able to satisfy the accuracy and latency requirements of pedestrian safety applications in a connected vehicle environment.
1410.4477
Amir Rastegarnia
Azam Khalili, Wael M. Bazzi, Amir Rastegarnia
Analysis of incremental augmented affine projection algorithm for distributed estimation of complex signals
23 pages, 6 figures
null
null
null
cs.DC cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the problem of distributed estimation in an incremental network when the measurements taken by the nodes follow a widely linear model. The proposed algorithm, which we refer to as the incremental augmented affine projection algorithm (incAAPA), utilizes the full second-order statistical information in the complex domain. Moreover, it exploits spatio-temporal diversity to improve the estimation performance. We derive a steady-state performance metric of the incAAPA in terms of the mean-square deviation (MSD). We further derive sufficient conditions to ensure mean-square convergence. Our analysis illustrates that the proposed algorithm is able to process both second-order circular (proper) and noncircular (improper) signals. The validity of the theoretical results and the good performance of the proposed algorithm are demonstrated by several computer simulations.
[ { "created": "Thu, 16 Oct 2014 16:03:22 GMT", "version": "v1" }, { "created": "Thu, 18 Dec 2014 12:57:13 GMT", "version": "v2" } ]
2014-12-19
[ [ "Khalili", "Azam", "" ], [ "Bazzi", "Wael M.", "" ], [ "Rastegarnia", "Amir", "" ] ]
This paper considers the problem of distributed estimation in an incremental network when the measurements taken by the nodes follow a widely linear model. The proposed algorithm, which we refer to as the incremental augmented affine projection algorithm (incAAPA), utilizes the full second-order statistical information in the complex domain. Moreover, it exploits spatio-temporal diversity to improve the estimation performance. We derive a steady-state performance metric of the incAAPA in terms of the mean-square deviation (MSD). We further derive sufficient conditions to ensure mean-square convergence. Our analysis illustrates that the proposed algorithm is able to process both second-order circular (proper) and noncircular (improper) signals. The validity of the theoretical results and the good performance of the proposed algorithm are demonstrated by several computer simulations.
2010.06097
Feihu Huang
Feihu Huang, Shangqian Gao
Gradient Descent Ascent for Minimax Problems on Riemannian Manifolds
This paper was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
null
null
null
cs.LG cs.CV math.OC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we study a class of useful minimax problems on Riemannian manifolds and propose a class of effective Riemannian gradient-based methods to solve these minimax problems. Specifically, we propose an effective Riemannian gradient descent ascent (RGDA) algorithm for the deterministic minimax optimization. Moreover, we prove that our RGDA has a sample complexity of $O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary solution of the Geodesically-Nonconvex Strongly-Concave (GNSC) minimax problems, where $\kappa$ denotes the condition number. At the same time, we present an effective Riemannian stochastic gradient descent ascent (RSGDA) algorithm for the stochastic minimax optimization, which has a sample complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary solution. To further reduce the sample complexity, we propose an accelerated Riemannian stochastic gradient descent ascent (Acc-RSGDA) algorithm based on the momentum-based variance-reduced technique. We prove that our Acc-RSGDA algorithm achieves a lower sample complexity of $\tilde{O}(\kappa^{4}\epsilon^{-3})$ in searching for an $\epsilon$-stationary solution of the GNSC minimax problems. Extensive experimental results on the robust distributional optimization and robust Deep Neural Networks (DNNs) training over the Stiefel manifold demonstrate the efficiency of our algorithms.
[ { "created": "Tue, 13 Oct 2020 00:54:00 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2020 03:43:15 GMT", "version": "v2" }, { "created": "Wed, 17 Mar 2021 15:09:46 GMT", "version": "v3" }, { "created": "Mon, 25 Apr 2022 21:33:03 GMT", "version": "v4" }, { "created": "Mon, 2 Jan 2023 23:16:35 GMT", "version": "v5" } ]
2023-01-04
[ [ "Huang", "Feihu", "" ], [ "Gao", "Shangqian", "" ] ]
In this paper, we study a class of useful minimax problems on Riemannian manifolds and propose a class of effective Riemannian gradient-based methods to solve these minimax problems. Specifically, we propose an effective Riemannian gradient descent ascent (RGDA) algorithm for the deterministic minimax optimization. Moreover, we prove that our RGDA has a sample complexity of $O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary solution of the Geodesically-Nonconvex Strongly-Concave (GNSC) minimax problems, where $\kappa$ denotes the condition number. At the same time, we present an effective Riemannian stochastic gradient descent ascent (RSGDA) algorithm for the stochastic minimax optimization, which has a sample complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary solution. To further reduce the sample complexity, we propose an accelerated Riemannian stochastic gradient descent ascent (Acc-RSGDA) algorithm based on the momentum-based variance-reduced technique. We prove that our Acc-RSGDA algorithm achieves a lower sample complexity of $\tilde{O}(\kappa^{4}\epsilon^{-3})$ in searching for an $\epsilon$-stationary solution of the GNSC minimax problems. Extensive experimental results on the robust distributional optimization and robust Deep Neural Networks (DNNs) training over the Stiefel manifold demonstrate the efficiency of our algorithms.
2407.18827
Mutahar Safdar
Mutahar Safdar, Jiarui Xie, Andrei Mircea and Yaoyao Fiona Zhao
Human-artificial intelligence teaming for scientific information extraction from data-driven additive manufacturing research using large language models
11 pages, 5 Figures, 3 Tables. This paper has been accepted to be published in the proceedings of IDETC-CIE 2024
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Data-driven research in Additive Manufacturing (AM) has gained significant success in recent years. This has led to the emergence of a plethora of scientific literature. The knowledge in these works consists of AM and Artificial Intelligence (AI) contexts that have not been mined and formalized in an integrated way. Extracting scientific information from these works requires substantial effort and time. AM domain experts have contributed over two dozen review papers to summarize these works. However, information specific to AM and AI contexts still requires manual effort to extract. The recent success of foundation models such as BERT (Bidirectional Encoder Representations from Transformers) or GPT (Generative Pre-trained Transformers) on textual data has opened the possibility of expediting scientific information extraction. We propose a framework that enables collaboration between AM and AI experts to continuously extract scientific information from data-driven AM literature. A demonstration tool is implemented based on the proposed framework, and a case study is conducted to extract information relevant to the datasets, modeling, sensing, and AM system categories. We show the ability of Large Language Models (LLMs) to expedite the extraction of relevant information from data-driven AM literature. In the future, the framework can be used to extract information from the broader design and manufacturing literature in the engineering discipline.
[ { "created": "Fri, 26 Jul 2024 15:43:52 GMT", "version": "v1" } ]
2024-07-29
[ [ "Safdar", "Mutahar", "" ], [ "Xie", "Jiarui", "" ], [ "Mircea", "Andrei", "" ], [ "Zhao", "Yaoyao Fiona", "" ] ]
Data-driven research in Additive Manufacturing (AM) has gained significant success in recent years. This has led to the emergence of a plethora of scientific literature. The knowledge in these works consists of AM and Artificial Intelligence (AI) contexts that have not been mined and formalized in an integrated way. Extracting scientific information from these works requires substantial effort and time. AM domain experts have contributed over two dozen review papers to summarize these works. However, information specific to AM and AI contexts still requires manual effort to extract. The recent success of foundation models such as BERT (Bidirectional Encoder Representations from Transformers) or GPT (Generative Pre-trained Transformers) on textual data has opened the possibility of expediting scientific information extraction. We propose a framework that enables collaboration between AM and AI experts to continuously extract scientific information from data-driven AM literature. A demonstration tool is implemented based on the proposed framework, and a case study is conducted to extract information relevant to the datasets, modeling, sensing, and AM system categories. We show the ability of Large Language Models (LLMs) to expedite the extraction of relevant information from data-driven AM literature. In the future, the framework can be used to extract information from the broader design and manufacturing literature in the engineering discipline.
2311.05884
Huan Gui
Huan Gui, Ruoxi Wang, Ke Yin, Long Jin, Maciej Kula, Taibai Xu, Lichan Hong, Ed H. Chi
Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems
null
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Learning feature interactions is the critical backbone of building recommender systems. In web-scale applications, learning feature interactions is extremely challenging due to the sparse and large input feature space; meanwhile, manually crafting effective feature interactions is infeasible because of the exponential solution space. We propose to leverage a Transformer-based architecture with attention layers to automatically capture feature interactions. Transformer architectures have witnessed great success in many domains, such as natural language processing and computer vision. However, there has not been much adoption of the Transformer architecture for feature interaction modeling in industry. We aim at closing this gap. We identify two key challenges for applying the vanilla Transformer architecture to web-scale recommender systems: (1) the Transformer architecture fails to capture the heterogeneous feature interactions in the self-attention layer; (2) the serving latency of the Transformer architecture might be too high to be deployed in web-scale recommender systems. We first propose a heterogeneous self-attention layer, which is a simple yet effective modification to the self-attention layer in Transformer, to take into account the heterogeneity of feature interactions. We then introduce \textsc{Hiformer} (\textbf{H}eterogeneous \textbf{I}nteraction Trans\textbf{former}) to further improve the model expressiveness. With low-rank approximation and model pruning, \textsc{Hiformer} enjoys fast inference for online deployment. Extensive offline experiment results corroborate the effectiveness and efficiency of the \textsc{Hiformer} model. We have successfully deployed the \textsc{Hiformer} model to a real-world large-scale app ranking model at Google Play, with significant improvement in key engagement metrics (up to +2.66\%).
[ { "created": "Fri, 10 Nov 2023 05:57:57 GMT", "version": "v1" } ]
2023-11-13
[ [ "Gui", "Huan", "" ], [ "Wang", "Ruoxi", "" ], [ "Yin", "Ke", "" ], [ "Jin", "Long", "" ], [ "Kula", "Maciej", "" ], [ "Xu", "Taibai", "" ], [ "Hong", "Lichan", "" ], [ "Chi", "Ed H.", "" ] ]
Learning feature interactions is the critical backbone of building recommender systems. In web-scale applications, learning feature interactions is extremely challenging due to the sparse and large input feature space; meanwhile, manually crafting effective feature interactions is infeasible because of the exponential solution space. We propose to leverage a Transformer-based architecture with attention layers to automatically capture feature interactions. Transformer architectures have witnessed great success in many domains, such as natural language processing and computer vision. However, there has not been much adoption of the Transformer architecture for feature interaction modeling in industry. We aim at closing this gap. We identify two key challenges for applying the vanilla Transformer architecture to web-scale recommender systems: (1) the Transformer architecture fails to capture the heterogeneous feature interactions in the self-attention layer; (2) the serving latency of the Transformer architecture might be too high to be deployed in web-scale recommender systems. We first propose a heterogeneous self-attention layer, which is a simple yet effective modification to the self-attention layer in Transformer, to take into account the heterogeneity of feature interactions. We then introduce \textsc{Hiformer} (\textbf{H}eterogeneous \textbf{I}nteraction Trans\textbf{former}) to further improve the model expressiveness. With low-rank approximation and model pruning, \textsc{Hiformer} enjoys fast inference for online deployment. Extensive offline experiment results corroborate the effectiveness and efficiency of the \textsc{Hiformer} model. We have successfully deployed the \textsc{Hiformer} model to a real-world large-scale app ranking model at Google Play, with significant improvement in key engagement metrics (up to +2.66\%).
2308.06644
Junwei Huang
Junwei Huang, Zhiqing Sun, Yiming Yang
Accelerating Diffusion-based Combinatorial Optimization Solvers by Progressive Distillation
Published at ICML 2023, Sampling and Optimization in Discrete Space Workshop. The implementation is at https://github.com/jwrh/Accelerating-Diffusion-based-Combinatorial-Optimization-Solvers-by-Progressive-Distillation
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Graph-based diffusion models have shown promising results in generating high-quality solutions to NP-complete (NPC) combinatorial optimization (CO) problems. However, these models are often inefficient at inference, due to the iterative nature of the denoising diffusion process. This paper proposes to use progressive distillation to speed up inference by taking fewer steps (e.g., forecasting two steps ahead within a single step) during the denoising process. Our experimental results show that the progressively distilled model can perform inference 16 times faster with only 0.019% degradation in performance on the TSP-50 dataset.
[ { "created": "Sat, 12 Aug 2023 21:25:24 GMT", "version": "v1" }, { "created": "Tue, 22 Aug 2023 22:25:54 GMT", "version": "v2" } ]
2023-08-24
[ [ "Huang", "Junwei", "" ], [ "Sun", "Zhiqing", "" ], [ "Yang", "Yiming", "" ] ]
Graph-based diffusion models have shown promising results in generating high-quality solutions to NP-complete (NPC) combinatorial optimization (CO) problems. However, these models are often inefficient at inference, due to the iterative nature of the denoising diffusion process. This paper proposes to use progressive distillation to speed up inference by taking fewer steps (e.g., forecasting two steps ahead within a single step) during the denoising process. Our experimental results show that the progressively distilled model can perform inference 16 times faster with only 0.019% degradation in performance on the TSP-50 dataset.
2011.01793
Yijie Zhou
Yijie Zhou, Yan Zhang, Xusheng Luo, Michael M. Zavlanos
Human-in-the-Loop Robot Planning with Non-Contextual Bandit Feedback
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider a robot navigation problem in environments populated by humans. The goal is to determine collision-free and dynamically feasible trajectories that also maximize human satisfaction, for example, by driving the robot close to humans who need help with their work, or by keeping the robot away from humans when it would interfere with their sight or work. In practice, human satisfaction is subjective and hard to describe mathematically. As a result, the planning problem we consider in this paper may lack important contextual information. To address this challenge, we propose a semi-supervised Bayesian Optimization (BO) method to design globally optimal robot trajectories using non-contextual bandit human feedback in the form of complaints or satisfaction ratings that express how satisfactory a trajectory is, without revealing the reason. Since trajectory planning is typically a high-dimensional optimization problem in the space of waypoints that define a trajectory, BO may require prohibitively many queries for human feedback to return a good solution. To this end, we use an autoencoder to reduce the high-dimensional problem space to a low-dimensional latent space, which we update using human feedback. Moreover, we improve the exploration efficiency of BO by biasing the search for new trajectories towards dynamically feasible and collision-free trajectories obtained using off-the-shelf motion planners. We demonstrate the efficiency of our proposed trajectory planning method in a scenario with humans that have diverse and unknown demands.
[ { "created": "Tue, 3 Nov 2020 15:38:41 GMT", "version": "v1" } ]
2020-11-04
[ [ "Zhou", "Yijie", "" ], [ "Zhang", "Yan", "" ], [ "Luo", "Xusheng", "" ], [ "Zavlanos", "Michael M.", "" ] ]
In this paper, we consider a robot navigation problem in environments populated by humans. The goal is to determine collision-free and dynamically feasible trajectories that also maximize human satisfaction, for example, by driving the robot close to humans who need help with their work, or by keeping the robot away from humans when it would interfere with their sight or work. In practice, human satisfaction is subjective and hard to describe mathematically. As a result, the planning problem we consider in this paper may lack important contextual information. To address this challenge, we propose a semi-supervised Bayesian Optimization (BO) method to design globally optimal robot trajectories using non-contextual bandit human feedback in the form of complaints or satisfaction ratings that express how satisfactory a trajectory is, without revealing the reason. Since trajectory planning is typically a high-dimensional optimization problem in the space of waypoints that define a trajectory, BO may require prohibitively many queries for human feedback to return a good solution. To this end, we use an autoencoder to reduce the high-dimensional problem space to a low-dimensional latent space, which we update using human feedback. Moreover, we improve the exploration efficiency of BO by biasing the search for new trajectories towards dynamically feasible and collision-free trajectories obtained using off-the-shelf motion planners. We demonstrate the efficiency of our proposed trajectory planning method in a scenario with humans that have diverse and unknown demands.
2407.05605
Zhenchun Lei
Zhenchun Lei, Hui Yan, Changhong Liu, Minglei Ma, Yingen Yang
Two-Path GMM-ResNet and GMM-SENet for ASV Spoofing Detection
null
null
10.1109/ICASSP43922.2022.9746163
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The automatic speaker verification system is sometimes vulnerable to various spoofing attacks. The 2-class Gaussian Mixture Model classifier for genuine and spoofed speech is usually used as the baseline for spoofing detection. However, the GMM classifier does not separately consider the scores of feature frames on each Gaussian component. In addition, the GMM accumulates the scores on all frames independently, and does not consider their correlations. We propose the two-path GMM-ResNet and GMM-SENet models for spoofing detection, whose input is the Gaussian probability features based on two GMMs trained on genuine and spoofed speech respectively. The models consider not only the score distribution on GMM components, but also the relationship between adjacent frames. A two-step training scheme is applied to improve the system robustness. Experiments on the ASVspoof 2019 show that the LFCC+GMM-ResNet system can relatively reduce min-tDCF and EER by 76.1% and 76.3% on logical access scenario compared with the GMM, and the LFCC+GMM-SENet system by 94.4% and 95.4% on physical access scenario. After score fusion, the systems give the second-best results on both scenarios.
[ { "created": "Mon, 8 Jul 2024 04:42:36 GMT", "version": "v1" } ]
2024-07-09
[ [ "Lei", "Zhenchun", "" ], [ "Yan", "Hui", "" ], [ "Liu", "Changhong", "" ], [ "Ma", "Minglei", "" ], [ "Yang", "Yingen", "" ] ]
The automatic speaker verification system is sometimes vulnerable to various spoofing attacks. The 2-class Gaussian Mixture Model classifier for genuine and spoofed speech is usually used as the baseline for spoofing detection. However, the GMM classifier does not separately consider the scores of feature frames on each Gaussian component. In addition, the GMM accumulates the scores on all frames independently, and does not consider their correlations. We propose the two-path GMM-ResNet and GMM-SENet models for spoofing detection, whose input consists of Gaussian probability features based on two GMMs trained on genuine and spoofed speech, respectively. The models consider not only the score distribution on GMM components, but also the relationship between adjacent frames. A two-step training scheme is applied to improve the system robustness. Experiments on ASVspoof 2019 show that the LFCC+GMM-ResNet system relatively reduces min-tDCF and EER by 76.1% and 76.3% in the logical access scenario compared with the GMM, and the LFCC+GMM-SENet system by 94.4% and 95.4% in the physical access scenario. After score fusion, the systems give the second-best results in both scenarios.
2305.08293
Weizhi Zhong
Weizhi Zhong, Chaowei Fang, Yinqi Cai, Pengxu Wei, Gangming Zhao, Liang Lin, Guanbin Li
Identity-Preserving Talking Face Generation with Landmark and Appearance Priors
CVPR2023, Code: https://github.com/Weizhi-Zhong/IP_LAP
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating talking face videos from audio attracts lots of research interest. A few person-specific methods can generate vivid videos but require the target speaker's videos for training or fine-tuning. Existing person-generic methods have difficulty in generating realistic and lip-synced videos while preserving identity information. To tackle this problem, we propose a two-stage framework consisting of audio-to-landmark generation and landmark-to-video rendering procedures. First, we devise a novel Transformer-based landmark generator to infer lip and jaw landmarks from the audio. Prior landmark characteristics of the speaker's face are employed to make the generated landmarks coincide with the facial outline of the speaker. Then, a video rendering model is built to translate the generated landmarks into face images. During this stage, prior appearance information is extracted from the lower-half occluded target face and static reference images, which helps generate realistic and identity-preserving visual content. For effectively exploring the prior information of static reference images, we align static reference images with the target face's pose and expression based on motion fields. Moreover, auditory features are reused to guarantee that the generated face images are well synchronized with the audio. Extensive experiments demonstrate that our method can produce more realistic, lip-synced, and identity-preserving videos than existing person-generic talking face generation methods.
[ { "created": "Mon, 15 May 2023 01:31:32 GMT", "version": "v1" } ]
2023-05-16
[ [ "Zhong", "Weizhi", "" ], [ "Fang", "Chaowei", "" ], [ "Cai", "Yinqi", "" ], [ "Wei", "Pengxu", "" ], [ "Zhao", "Gangming", "" ], [ "Lin", "Liang", "" ], [ "Li", "Guanbin", "" ] ]
Generating talking face videos from audio attracts lots of research interest. A few person-specific methods can generate vivid videos but require the target speaker's videos for training or fine-tuning. Existing person-generic methods have difficulty in generating realistic and lip-synced videos while preserving identity information. To tackle this problem, we propose a two-stage framework consisting of audio-to-landmark generation and landmark-to-video rendering procedures. First, we devise a novel Transformer-based landmark generator to infer lip and jaw landmarks from the audio. Prior landmark characteristics of the speaker's face are employed to make the generated landmarks coincide with the facial outline of the speaker. Then, a video rendering model is built to translate the generated landmarks into face images. During this stage, prior appearance information is extracted from the lower-half occluded target face and static reference images, which helps generate realistic and identity-preserving visual content. For effectively exploring the prior information of static reference images, we align static reference images with the target face's pose and expression based on motion fields. Moreover, auditory features are reused to guarantee that the generated face images are well synchronized with the audio. Extensive experiments demonstrate that our method can produce more realistic, lip-synced, and identity-preserving videos than existing person-generic talking face generation methods.
2311.04108
Nils Japke
Nils Japke, Christoph Witzko, Martin Grambow, David Bermbach
The Early Microbenchmark Catches the Bug -- Studying Performance Issues Using Micro- and Application Benchmarks
Accepted for publication in 2023 IEEE/ACM 16th International Conference on Utility and Cloud Computing
null
10.1145/3603166.3632128
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An application's performance regressions can be detected by both application or microbenchmarks. While application benchmarks stress the system under test by sending synthetic but realistic requests which, e.g., simulate real user traffic, microbenchmarks evaluate the performance on a subroutine level by calling the function under test repeatedly. In this paper, we use a testbed microservice application which includes three performance issues to study the detection capabilities of both approaches. In extensive benchmarking experiments, we increase the severity of each performance issue stepwise, run both an application benchmark and the microbenchmark suite, and check at which point each benchmark detects the performance issue. Our results show that microbenchmarks detect all three issues earlier, some even at the lowest severity level. Application benchmarks, however, raised false positive alarms, wrongly detected performance improvements, and detected the performance issues later.
[ { "created": "Tue, 7 Nov 2023 16:30:38 GMT", "version": "v1" } ]
2023-11-08
[ [ "Japke", "Nils", "" ], [ "Witzko", "Christoph", "" ], [ "Grambow", "Martin", "" ], [ "Bermbach", "David", "" ] ]
An application's performance regressions can be detected by both application and microbenchmarks. While application benchmarks stress the system under test by sending synthetic but realistic requests which, e.g., simulate real user traffic, microbenchmarks evaluate the performance at the subroutine level by calling the function under test repeatedly. In this paper, we use a testbed microservice application which includes three performance issues to study the detection capabilities of both approaches. In extensive benchmarking experiments, we increase the severity of each performance issue stepwise, run both an application benchmark and the microbenchmark suite, and check at which point each benchmark detects the performance issue. Our results show that microbenchmarks detect all three issues earlier, some even at the lowest severity level. Application benchmarks, however, raised false positive alarms, wrongly detected performance improvements, and detected the performance issues later.
2103.06371
Himanshu Sahni
Himanshu Sahni and Charles Isbell
Hard Attention Control By Mutual Information Maximization
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Biological agents have adopted the principle of attention to limit the rate of incoming information from the environment. One question that arises is if an artificial agent has access to only a limited view of its surroundings, how can it control its attention to effectively solve tasks? We propose an approach for learning how to control a hard attention window by maximizing the mutual information between the environment state and the attention location at each step. The agent employs an internal world model to make predictions about its state and focuses attention towards where the predictions may be wrong. Attention is trained jointly with a dynamic memory architecture that stores partial observations and keeps track of the unobserved state. We demonstrate that our approach is effective in predicting the full state from a sequence of partial observations. We also show that the agent's internal representation of the surroundings, a live mental map, can be used for control in two partially observable reinforcement learning tasks. Videos of the trained agent can be found at https://sites.google.com/view/hard-attention-control.
[ { "created": "Wed, 10 Mar 2021 22:38:28 GMT", "version": "v1" } ]
2021-03-12
[ [ "Sahni", "Himanshu", "" ], [ "Isbell", "Charles", "" ] ]
Biological agents have adopted the principle of attention to limit the rate of incoming information from the environment. One question that arises is if an artificial agent has access to only a limited view of its surroundings, how can it control its attention to effectively solve tasks? We propose an approach for learning how to control a hard attention window by maximizing the mutual information between the environment state and the attention location at each step. The agent employs an internal world model to make predictions about its state and focuses attention towards where the predictions may be wrong. Attention is trained jointly with a dynamic memory architecture that stores partial observations and keeps track of the unobserved state. We demonstrate that our approach is effective in predicting the full state from a sequence of partial observations. We also show that the agent's internal representation of the surroundings, a live mental map, can be used for control in two partially observable reinforcement learning tasks. Videos of the trained agent can be found at https://sites.google.com/view/hard-attention-control.
0910.0819
Rdv Ijcsis
Md. Ashraful Islam, Riaz Uddin Mondal, Md. Zahid Hasan
Performance Evaluation of Wimax Physical Layer under Adaptive Modulation Techniques and Communication Channels
4 pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423, http://sites.google.com/site/ijcsis/
International Journal of Computer Science and Information Security, IJCSIS, Vol. 5, No. 1, pp. 111-114, September 2009, USA
null
ISSn 1947 5500
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wimax (Worldwide Interoperability for Microwave Access) is a promising technology which can offer high speed voice, video and data service up to the customer end. The aim of this paper is the performance evaluation of an Wimax system under different combinations of digital modulation (BPSK, QPSK, 4 QAM and 16 QAM) and different communication channels AWGN and fading channels (Rayleigh and Rician). And the Wimax system incorporates Reed Solomon (RS) encoder with Convolutional encoder with half and two third rated codes in FEC channel coding. The simulation results of estimated Bit Error Rate (BER) displays that the implementation of interleaved RS code (255, 239, 8) with two third rated Convolutional code under BPSK modulation technique is highly effective to combat in the Wimax communication system. To complete this performance analysis in Wimax based systems, a segment of audio signal is used for analysis. The transmitted audio message is found to have retrieved effectively under noisy situation.
[ { "created": "Mon, 5 Oct 2009 18:30:05 GMT", "version": "v1" } ]
2009-10-06
[ [ "Islam", "Md. Ashraful", "" ], [ "Mondal", "Riaz Uddin", "" ], [ "Hasan", "Md. Zahid", "" ] ]
WiMAX (Worldwide Interoperability for Microwave Access) is a promising technology which can offer high-speed voice, video and data services up to the customer end. The aim of this paper is the performance evaluation of a WiMAX system under different combinations of digital modulation (BPSK, QPSK, 4-QAM and 16-QAM) and different communication channels: AWGN and fading channels (Rayleigh and Rician). The WiMAX system incorporates a Reed-Solomon (RS) encoder concatenated with a convolutional encoder, using half- and two-third-rate codes, for FEC channel coding. The simulation results for the estimated bit error rate (BER) show that the implementation of the interleaved RS code (255, 239, 8) with the two-third-rate convolutional code under the BPSK modulation technique is highly effective for the WiMAX communication system. To complete this performance analysis of WiMAX-based systems, a segment of audio signal is used. The transmitted audio message is found to be retrieved effectively under noisy conditions.
1710.05298
Hyemin Ahn
Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh
Text2Action: Generative Adversarial Synthesis from Language to Action
8 pages, 10 figures
null
null
null
cs.LG cs.CL cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a generative model which learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network (GAN), which is based on the sequence to sequence (SEQ2SEQ) model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network (RNN) and an action decoder RNN. The proposed generative network is trained from 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text (MSR-VTT), a large-scale video dataset. We demonstrate that the network can generate human-like actions which can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.
[ { "created": "Sun, 15 Oct 2017 07:51:01 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2017 06:32:52 GMT", "version": "v2" } ]
2017-10-25
[ [ "Ahn", "Hyemin", "" ], [ "Ha", "Timothy", "" ], [ "Choi", "Yunho", "" ], [ "Yoo", "Hwiyeon", "" ], [ "Oh", "Songhwai", "" ] ]
In this paper, we propose a generative model which learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network (GAN), which is based on the sequence to sequence (SEQ2SEQ) model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network (RNN) and an action decoder RNN. The proposed generative network is trained from 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text (MSR-VTT), a large-scale video dataset. We demonstrate that the network can generate human-like actions which can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.
1209.0127
Alexandra Faynburd Mrs
Ran El-Yaniv, Alexandra Faynburd
Autoregressive short-term prediction of turning points using support vector regression
null
null
null
null
cs.LG cs.CE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work is concerned with autoregressive prediction of turning points in financial price sequences. Such turning points are critical local extrema points along a series, which mark the start of new swings. Predicting the future time of such turning points or even their early or late identification slightly before or after the fact has useful applications in economics and finance. Building on recently proposed neural network model for turning point prediction, we propose and study a new autoregressive model for predicting turning points of small swings. Our method relies on a known turning point indicator, a Fourier enriched representation of price histories, and support vector regression. We empirically examine the performance of the proposed method over a long history of the Dow Jones Industrial average. Our study shows that the proposed method is superior to the previous neural network model, in terms of trading performance of a simple trading application and also exhibits a quantifiable advantage over the buy-and-hold benchmark.
[ { "created": "Sat, 1 Sep 2012 19:53:23 GMT", "version": "v1" }, { "created": "Mon, 24 Sep 2012 19:28:24 GMT", "version": "v2" } ]
2012-09-25
[ [ "El-Yaniv", "Ran", "" ], [ "Faynburd", "Alexandra", "" ] ]
This work is concerned with autoregressive prediction of turning points in financial price sequences. Such turning points are critical local extrema along a series, which mark the start of new swings. Predicting the future time of such turning points, or even their early or late identification slightly before or after the fact, has useful applications in economics and finance. Building on a recently proposed neural network model for turning point prediction, we propose and study a new autoregressive model for predicting turning points of small swings. Our method relies on a known turning point indicator, a Fourier-enriched representation of price histories, and support vector regression. We empirically examine the performance of the proposed method over a long history of the Dow Jones Industrial Average. Our study shows that the proposed method is superior to the previous neural network model in terms of the trading performance of a simple trading application, and also exhibits a quantifiable advantage over the buy-and-hold benchmark.
2308.06271
Owen Melia
Owen Melia, Eric Jonas, and Rebecca Willett
Rotation-Invariant Random Features Provide a Strong Baseline for Machine Learning on 3D Point Clouds
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rotational invariance is a popular inductive bias used by many fields in machine learning, such as computer vision and machine learning for quantum chemistry. Rotation-invariant machine learning methods set the state of the art for many tasks, including molecular property prediction and 3D shape classification. These methods generally either rely on task-specific rotation-invariant features, or they use general-purpose deep neural networks which are complicated to design and train. However, it is unclear whether the success of these methods is primarily due to the rotation invariance or the deep neural networks. To address this question, we suggest a simple and general-purpose method for learning rotation-invariant functions of three-dimensional point cloud data using a random features approach. Specifically, we extend the random features method of Rahimi & Recht 2007 by deriving a version that is invariant to three-dimensional rotations and showing that it is fast to evaluate on point cloud data. We show through experiments that our method matches or outperforms the performance of general-purpose rotation-invariant neural networks on standard molecular property prediction benchmark datasets QM7 and QM9. We also show that our method is general-purpose and provides a rotation-invariant baseline on the ModelNet40 shape classification task. Finally, we show that our method has an order of magnitude smaller prediction latency than competing kernel methods.
[ { "created": "Thu, 27 Jul 2023 20:18:11 GMT", "version": "v1" } ]
2023-08-15
[ [ "Melia", "Owen", "" ], [ "Jonas", "Eric", "" ], [ "Willett", "Rebecca", "" ] ]
Rotational invariance is a popular inductive bias used by many fields in machine learning, such as computer vision and machine learning for quantum chemistry. Rotation-invariant machine learning methods set the state of the art for many tasks, including molecular property prediction and 3D shape classification. These methods generally either rely on task-specific rotation-invariant features, or they use general-purpose deep neural networks which are complicated to design and train. However, it is unclear whether the success of these methods is primarily due to the rotation invariance or the deep neural networks. To address this question, we suggest a simple and general-purpose method for learning rotation-invariant functions of three-dimensional point cloud data using a random features approach. Specifically, we extend the random features method of Rahimi & Recht 2007 by deriving a version that is invariant to three-dimensional rotations and showing that it is fast to evaluate on point cloud data. We show through experiments that our method matches or outperforms the performance of general-purpose rotation-invariant neural networks on standard molecular property prediction benchmark datasets QM7 and QM9. We also show that our method is general-purpose and provides a rotation-invariant baseline on the ModelNet40 shape classification task. Finally, we show that our method has an order of magnitude smaller prediction latency than competing kernel methods.
2012.12733
Cunhua Pan
Gui Zhou, Cunhua Pan, Hong Ren, Kezhi Wang, Zhangjie Peng
Secure Wireless Communication in RIS-Aided MISO Systems with Hardware Impairments
Accepted by IEEE Wireless Communications Letters. Keywords: reconfigurable intelligent surface (RIS), intelligent reflecting surface, hardware impairments, physical layer security
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In practice, residual transceiver hardware impairments inevitably lead to distortion noise which causes the performance loss. In this paper, we study the robust transmission design for a reconfigurable intelligent surface (RIS)-aided secure communication system in the presence of transceiver hardware impairments. We aim for maximizing the secrecy rate while ensuring the transmit power constraint on the active beamforming at the base station and the unit-modulus constraint on the passive beamforming at the RIS. To address this problem, we adopt the alternate optimization method to iteratively optimize one set of variables while keeping the other set fixed. Specifically, the successive convex approximation (SCA) method is used to solve the active beamforming optimization subproblem, while the passive beamforming is obtained by using the semidefinite program (SDP) method. Numerical results illustrate that the proposed transmission design scheme is more robust to the hardware impairments than the conventional non-robust scheme that ignores the impact of the hardware impairments.
[ { "created": "Wed, 23 Dec 2020 15:13:44 GMT", "version": "v1" }, { "created": "Thu, 24 Dec 2020 02:12:25 GMT", "version": "v2" }, { "created": "Mon, 8 Mar 2021 02:43:31 GMT", "version": "v3" } ]
2021-03-09
[ [ "Zhou", "Gui", "" ], [ "Pan", "Cunhua", "" ], [ "Ren", "Hong", "" ], [ "Wang", "Kezhi", "" ], [ "Peng", "Zhangjie", "" ] ]
In practice, residual transceiver hardware impairments inevitably lead to distortion noise which causes performance loss. In this paper, we study the robust transmission design for a reconfigurable intelligent surface (RIS)-aided secure communication system in the presence of transceiver hardware impairments. We aim to maximize the secrecy rate while ensuring the transmit power constraint on the active beamforming at the base station and the unit-modulus constraint on the passive beamforming at the RIS. To address this problem, we adopt the alternating optimization method to iteratively optimize one set of variables while keeping the other set fixed. Specifically, the successive convex approximation (SCA) method is used to solve the active beamforming optimization subproblem, while the passive beamforming is obtained by using the semidefinite program (SDP) method. Numerical results illustrate that the proposed transmission design scheme is more robust to the hardware impairments than the conventional non-robust scheme that ignores the impact of the hardware impairments.
1807.03750
Yehia Elkhatib PhD
Yehia Elkhatib
Navigating Diverse Data Science Learning: Critical Reflections Towards Future Practice
null
4th Workshop on Curricula and Teaching Methods in Cloud Computing, Big Data, and Data Science, 2017
10.1109/CloudCom.2017.58
null
cs.GL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data Science is currently a popular field of science attracting expertise from very diverse backgrounds. Current learning practices need to acknowledge this and adapt to it. This paper summarises some experiences relating to such learning approaches from teaching a postgraduate Data Science module, and draws some learned lessons that are of relevance to others teaching Data Science.
[ { "created": "Thu, 5 Jul 2018 21:32:18 GMT", "version": "v1" } ]
2018-07-11
[ [ "Elkhatib", "Yehia", "" ] ]
Data Science is currently a popular field of science attracting expertise from very diverse backgrounds. Current learning practices need to acknowledge this and adapt to it. This paper summarises some experiences relating to such learning approaches from teaching a postgraduate Data Science module, and draws some lessons learned that are of relevance to others teaching Data Science.
2310.16956
Ayah Ahmad
Ayah Ahmad (1), Christopher Graziul (2), Margaret Beale Spencer (2) ((1) University of California, Berkeley, (2) University of Chicago)
Datastore Design for Analysis of Police Broadcast Audio at Scale
null
null
null
null
cs.CY cs.DB
http://creativecommons.org/licenses/by/4.0/
With policing coming under greater scrutiny in recent years, researchers have begun to more thoroughly study the effects of contact between police and minority communities. Despite data archives of hundreds of thousands of recorded Broadcast Police Communications (BPC) being openly available to the public, a closer look at a large-scale analysis of the language of policing has remained largely unexplored. While this research is critical in understanding a "pre-reflective" notion of policing, the large quantity of data presents numerous challenges in its organization and analysis. In this paper, we describe preliminary work towards enabling Speech Emotion Recognition (SER) in an analysis of the Chicago Police Department's (CPD) BPC by demonstrating the pipelined creation of a datastore to enable a multimodal analysis of composed raw audio files.
[ { "created": "Wed, 25 Oct 2023 19:52:19 GMT", "version": "v1" } ]
2023-10-27
[ [ "Ahmad", "Ayah", "", "University of California, Berkeley" ], [ "Graziul", "Christopher", "", "University of Chicago" ], [ "Spencer", "Margaret Beale", "", "University of Chicago" ] ]
With policing coming under greater scrutiny in recent years, researchers have begun to more thoroughly study the effects of contact between police and minority communities. Despite data archives of hundreds of thousands of recorded Broadcast Police Communications (BPC) being openly available to the public, a closer look at a large-scale analysis of the language of policing has remained largely unexplored. While this research is critical in understanding a "pre-reflective" notion of policing, the large quantity of data presents numerous challenges in its organization and analysis. In this paper, we describe preliminary work towards enabling Speech Emotion Recognition (SER) in an analysis of the Chicago Police Department's (CPD) BPC by demonstrating the pipelined creation of a datastore to enable a multimodal analysis of composed raw audio files.
2304.08809
Yi Li
Yi Li, Kyle Min, Subarna Tripathi, Nuno Vasconcelos
SViTT: Temporal Learning of Sparse Video-Text Transformers
CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Do video-text transformers learn to model temporal relationships across frames? Despite their immense capacity and the abundance of multimodal training data, recent work has revealed the strong tendency of video-text models towards frame-based spatial representations, while temporal reasoning remains largely unsolved. In this work, we identify several key challenges in temporal learning of video-text transformers: the spatiotemporal trade-off from limited network size; the curse of dimensionality for multi-frame modeling; and the diminishing returns of semantic information by extending clip length. Guided by these findings, we propose SViTT, a sparse video-text architecture that performs multi-frame reasoning with significantly lower cost than naive transformers with dense attention. Analogous to graph-based networks, SViTT employs two forms of sparsity: edge sparsity that limits the query-key communications between tokens in self-attention, and node sparsity that discards uninformative visual tokens. Trained with a curriculum which increases model sparsity with the clip length, SViTT outperforms dense transformer baselines on multiple video-text retrieval and question answering benchmarks, with a fraction of computational cost. Project page: http://svcl.ucsd.edu/projects/svitt.
[ { "created": "Tue, 18 Apr 2023 08:17:58 GMT", "version": "v1" } ]
2023-04-19
[ [ "Li", "Yi", "" ], [ "Min", "Kyle", "" ], [ "Tripathi", "Subarna", "" ], [ "Vasconcelos", "Nuno", "" ] ]
Do video-text transformers learn to model temporal relationships across frames? Despite their immense capacity and the abundance of multimodal training data, recent work has revealed the strong tendency of video-text models towards frame-based spatial representations, while temporal reasoning remains largely unsolved. In this work, we identify several key challenges in temporal learning of video-text transformers: the spatiotemporal trade-off from limited network size; the curse of dimensionality for multi-frame modeling; and the diminishing returns of semantic information by extending clip length. Guided by these findings, we propose SViTT, a sparse video-text architecture that performs multi-frame reasoning with significantly lower cost than naive transformers with dense attention. Analogous to graph-based networks, SViTT employs two forms of sparsity: edge sparsity that limits the query-key communications between tokens in self-attention, and node sparsity that discards uninformative visual tokens. Trained with a curriculum which increases model sparsity with the clip length, SViTT outperforms dense transformer baselines on multiple video-text retrieval and question answering benchmarks, with a fraction of computational cost. Project page: http://svcl.ucsd.edu/projects/svitt.
1909.03691
Jan Krajicek
Jan Krajicek
The Cook-Reckhow definition
null
in: "Logic, Automata, and Computational Complexity: The Works of Stephen A. Cook", ed. Bruce M. Kapron, Association for Computing Machinery Books, New York, NY, USA, 43, pp. 83-94, May 2023
10.1145/3588287.3588
null
cs.CC math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Cook-Reckhow 1979 paper defined the area of research we now call Proof Complexity. There were earlier papers which contributed to the subject as we understand it today, the most significant being Tseitin's 1968 paper, but none of them introduced general notions that would allow to make an explicit and universal link between lengths-of-proofs problems and computational complexity theory. In this note we shall highlight three particular definitions from the paper: of proof systems, p-simulations and the pigeonhole principle formula, and discuss their role in defining the field. We will also mention some related developments and open problems.
[ { "created": "Mon, 9 Sep 2019 08:01:27 GMT", "version": "v1" } ]
2023-05-31
[ [ "Krajicek", "Jan", "" ] ]
The Cook-Reckhow 1979 paper defined the area of research we now call Proof Complexity. There were earlier papers which contributed to the subject as we understand it today, the most significant being Tseitin's 1968 paper, but none of them introduced general notions that would allow to make an explicit and universal link between lengths-of-proofs problems and computational complexity theory. In this note we shall highlight three particular definitions from the paper: of proof systems, p-simulations and the pigeonhole principle formula, and discuss their role in defining the field. We will also mention some related developments and open problems.
cs/0207068
Konstantin Korovin
Konstantin Korovin and Andrei Voronkov
Knuth-Bendix constraint solving is NP-complete
27 pages
null
null
null
cs.LO
null
We show the NP-completeness of the existential theory of term algebras with the Knuth-Bendix order by giving a nondeterministic polynomial-time algorithm for solving Knuth-Bendix ordering constraints.
[ { "created": "Wed, 17 Jul 2002 18:34:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Korovin", "Konstantin", "" ], [ "Voronkov", "Andrei", "" ] ]
We show the NP-completeness of the existential theory of term algebras with the Knuth-Bendix order by giving a nondeterministic polynomial-time algorithm for solving Knuth-Bendix ordering constraints.
2010.12577
Hansjoerg Albrecher
Hansjoerg Albrecher and Pierre-Olivier Goffard
On the profitability of selfish blockchain mining under consideration of ruin
null
null
null
null
cs.CR math.OC math.PR q-fin.RM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mining blocks on a blockchain equipped with a proof of work consensus protocol is well-known to be resource-consuming. A miner bears the operational cost, mainly electricity consumption and IT gear, of mining, and is compensated by a capital gain when a block is discovered. This paper aims at quantifying the profitability of mining when the possible event of ruin is also considered. This is done by formulating a tractable stochastic model and using tools from applied probability and analysis, including the explicit solution of a certain type of advanced functional differential equation. The expected profit at a future time point is determined for the situation when the miner follows the protocol as well as when he/she withholds blocks. The obtained explicit expressions allow to analyze the sensitivity with respect to the different model ingredients and to identify conditions under which selfish mining is a strategic advantage.
[ { "created": "Sat, 24 Oct 2020 15:21:45 GMT", "version": "v1" } ]
2020-10-27
[ [ "Albrecher", "Hansjoerg", "" ], [ "Goffard", "Pierre-Olivier", "" ] ]
Mining blocks on a blockchain equipped with a proof of work consensus protocol is well-known to be resource-consuming. A miner bears the operational cost, mainly electricity consumption and IT gear, of mining, and is compensated by a capital gain when a block is discovered. This paper aims at quantifying the profitability of mining when the possible event of ruin is also considered. This is done by formulating a tractable stochastic model and using tools from applied probability and analysis, including the explicit solution of a certain type of advanced functional differential equation. The expected profit at a future time point is determined for the situation when the miner follows the protocol as well as when he/she withholds blocks. The obtained explicit expressions allow to analyze the sensitivity with respect to the different model ingredients and to identify conditions under which selfish mining is a strategic advantage.
2004.14004
Chenglei Si
Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, Shijin Wang
Benchmarking Robustness of Machine Reading Comprehension Models
ACL 2021 (Findings)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Machine Reading Comprehension (MRC) is an important testbed for evaluating models' natural language understanding (NLU) ability. There has been rapid progress in this area, with new models achieving impressive performance on various benchmarks. However, existing benchmarks only evaluate models on in-domain test sets without considering their robustness under test-time perturbations or adversarial attacks. To fill this important gap, we construct AdvRACE (Adversarial RACE), a new model-agnostic benchmark for evaluating the robustness of MRC models under four different types of adversarial attacks, including our novel distractor extraction and generation attacks. We show that state-of-the-art (SOTA) models are vulnerable to all of these attacks. We conclude that there is substantial room for building more robust MRC models and our benchmark can help motivate and measure progress in this area. We release our data and code at https://github.com/NoviScl/AdvRACE .
[ { "created": "Wed, 29 Apr 2020 08:05:32 GMT", "version": "v1" }, { "created": "Wed, 26 May 2021 06:16:19 GMT", "version": "v2" } ]
2021-05-27
[ [ "Si", "Chenglei", "" ], [ "Yang", "Ziqing", "" ], [ "Cui", "Yiming", "" ], [ "Ma", "Wentao", "" ], [ "Liu", "Ting", "" ], [ "Wang", "Shijin", "" ] ]
Machine Reading Comprehension (MRC) is an important testbed for evaluating models' natural language understanding (NLU) ability. There has been rapid progress in this area, with new models achieving impressive performance on various benchmarks. However, existing benchmarks only evaluate models on in-domain test sets without considering their robustness under test-time perturbations or adversarial attacks. To fill this important gap, we construct AdvRACE (Adversarial RACE), a new model-agnostic benchmark for evaluating the robustness of MRC models under four different types of adversarial attacks, including our novel distractor extraction and generation attacks. We show that state-of-the-art (SOTA) models are vulnerable to all of these attacks. We conclude that there is substantial room for building more robust MRC models and our benchmark can help motivate and measure progress in this area. We release our data and code at https://github.com/NoviScl/AdvRACE .
1811.10146
Zhiqin Xu
Zhi-Qin John Xu
Frequency Principle in Deep Learning with General Loss Functions and Its Potential Application
8 pages, 4 figures
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies have shown that deep neural networks (DNNs) with common settings often capture target functions from low to high frequency, which is called Frequency Principle (F-Principle). It has also been shown that F-Principle can provide an understanding to the often observed good generalization ability of DNNs. However, previous studies focused on the loss function of mean square error, while various loss functions are used in practice. In this work, we show that the F-Principle holds for a general loss function (e.g., mean square error, cross entropy, etc.). In addition, DNN's F-Principle may be applied to develop numerical schemes for solving various problems which would benefit from a fast converging of low frequency. As an example of the potential usage of F-Principle, we apply DNN in solving differential equations, in which conventional methods (e.g., Jacobi method) is usually slow in solving problems due to the convergence from high to low frequency.
[ { "created": "Mon, 26 Nov 2018 02:27:44 GMT", "version": "v1" } ]
2018-11-27
[ [ "Xu", "Zhi-Qin John", "" ] ]
Previous studies have shown that deep neural networks (DNNs) with common settings often capture target functions from low to high frequency, which is called Frequency Principle (F-Principle). It has also been shown that F-Principle can provide an understanding to the often observed good generalization ability of DNNs. However, previous studies focused on the loss function of mean square error, while various loss functions are used in practice. In this work, we show that the F-Principle holds for a general loss function (e.g., mean square error, cross entropy, etc.). In addition, DNN's F-Principle may be applied to develop numerical schemes for solving various problems which would benefit from a fast converging of low frequency. As an example of the potential usage of F-Principle, we apply DNN in solving differential equations, in which conventional methods (e.g., Jacobi method) is usually slow in solving problems due to the convergence from high to low frequency.
1401.7444
Yossi Gilad
Yossi Gilad, Amir Herzberg, Ari Trachtenberg
Securing Smartphones: A Micro-TCB Approach
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As mobile phones have evolved into `smartphones', with complex operating systems running third- party software, they have become increasingly vulnerable to malicious applications (malware). We introduce a new design for mitigating malware attacks against smartphone users, based on a small trusted computing base module, denoted uTCB. The uTCB manages sensitive data and sensors, and provides core services to applications, independently of the operating system. The user invokes uTCB using a simple secure attention key, which is pressed in order to validate physical possession of the device and authorize a sensitive action; this protects private information even if the device is infected with malware. We present a proof-of-concept implementation of uTCB based on ARM's TrustZone, a secure execution environment increasingly found in smartphones, and evaluate our implementation using simulations.
[ { "created": "Wed, 29 Jan 2014 09:20:53 GMT", "version": "v1" } ]
2014-01-30
[ [ "Gilad", "Yossi", "" ], [ "Herzberg", "Amir", "" ], [ "Trachtenberg", "Ari", "" ] ]
As mobile phones have evolved into `smartphones', with complex operating systems running third- party software, they have become increasingly vulnerable to malicious applications (malware). We introduce a new design for mitigating malware attacks against smartphone users, based on a small trusted computing base module, denoted uTCB. The uTCB manages sensitive data and sensors, and provides core services to applications, independently of the operating system. The user invokes uTCB using a simple secure attention key, which is pressed in order to validate physical possession of the device and authorize a sensitive action; this protects private information even if the device is infected with malware. We present a proof-of-concept implementation of uTCB based on ARM's TrustZone, a secure execution environment increasingly found in smartphones, and evaluate our implementation using simulations.
cs/0611158
Francoise Detienne
Michael Baker, Fran\c{c}oise D\'etienne (INRIA), Kristine Lundt, Arnauld S\'ejourn\'e
Articulation entre \'{e}laboration de solutions et argumentation polyphonique
null
Dans EPIQUE'2003 (2003)
null
null
cs.HC
null
In this paper, we propose an analytical framework that aims to bring out the nature of participants' contributions to co-design meetings, in a way that synthesises content and function dimensions, together with the dimension of dialogicality. We term the resulting global vision of contribution, the "interactive profile".
[ { "created": "Thu, 30 Nov 2006 13:28:29 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2007 08:10:20 GMT", "version": "v2" } ]
2016-08-16
[ [ "Baker", "Michael", "", "INRIA" ], [ "Détienne", "Françoise", "", "INRIA" ], [ "Lundt", "Kristine", "" ], [ "Séjourné", "Arnauld", "" ] ]
In this paper, we propose an analytical framework that aims to bring out the nature of participants' contributions to co-design meetings, in a way that synthesises content and function dimensions, together with the dimension of dialogicality. We term the resulting global vision of contribution, the "interactive profile".
2404.01110
Tong Hui
Tong Hui, Stefan Rucareanu, Esteban Zamora, Simone D'Angelo, Haotian Liu, Matteo Fumagalli
A Novel Center-of-Mass Displacing Aerial Manipulation Platform: Design, Modeling, and Control
This paper is under submission to IEEE Robotics and Automation Letters
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Aerial manipulators can serve contact-based industrial applications, where fundamental tasks like drilling and grinding often necessitate aerial platforms to handle heavy tools and high loads (i.e., forces and torques). These tasks are frequently performed on non-horizontal surfaces. Current multirotor platforms, which have a fixed CoM (Center of Mass) within the rotor-defined area, typically exhibit a large moment arm between the EE (End-Effector) tip and the system's CoM. This configuration can result in instability and potential damage during physical interactions. In this letter, we present the system design, modeling, and control of a novel aerial vehicle tailored to tool manipulation on non-horizontal surfaces. This platform adapts the form of an H-shaped coaxial octocopter with tiltable back rotors; it can carry heavy components (e.g., the manipulator and battery) on a movable plate within the rotor-defined area during free flight. When interacting with surfaces, the platform actively shifts the plate toward the work surface while preserving the system orientation thanks to the tiltable back rotors. This leads to a displaced CoM location and a reduced moment arm. We use simulations that closely capture the built physical prototype to validate our proposed concepts for complex and risky working scenarios. Moreover, early-stage physical experiments were conducted to evaluate the developed system for free flights and a pushing task.
[ { "created": "Mon, 1 Apr 2024 13:33:53 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 11:20:06 GMT", "version": "v2" } ]
2024-07-18
[ [ "Hui", "Tong", "" ], [ "Rucareanu", "Stefan", "" ], [ "Zamora", "Esteban", "" ], [ "D'Angelo", "Simone", "" ], [ "Liu", "Haotian", "" ], [ "Fumagalli", "Matteo", "" ] ]
Aerial manipulators can serve contact-based industrial applications, where fundamental tasks like drilling and grinding often necessitate aerial platforms to handle heavy tools and high loads (i.e., forces and torques). These tasks are frequently performed on non-horizontal surfaces. Current multirotor platforms, which have a fixed CoM (Center of Mass) within the rotor-defined area, typically exhibit a large moment arm between the EE (End-Effector) tip and the system's CoM. This configuration can result in instability and potential damage during physical interactions. In this letter, we present the system design, modeling, and control of a novel aerial vehicle tailored to tool manipulation on non-horizontal surfaces. This platform adapts the form of an H-shaped coaxial octocopter with tiltable back rotors; it can carry heavy components (e.g., the manipulator and battery) on a movable plate within the rotor-defined area during free flight. When interacting with surfaces, the platform actively shifts the plate toward the work surface while preserving the system orientation thanks to the tiltable back rotors. This leads to a displaced CoM location and a reduced moment arm. We use simulations that closely capture the built physical prototype to validate our proposed concepts for complex and risky working scenarios. Moreover, early-stage physical experiments were conducted to evaluate the developed system for free flights and a pushing task.
1807.02257
Edgar Margffoy-Tuay
Edgar Margffoy-Tuay, Juan C. P\'erez, Emilio Botero, Pablo Arbel\'aez
Dynamic Multimodal Instance Segmentation guided by natural language queries
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We address the problem of segmenting an object given a natural language expression that describes it. Current techniques tackle this task by either (\textit{i}) directly or recursively merging linguistic and visual information in the channel dimension and then performing convolutions; or by (\textit{ii}) mapping the expression to a space in which it can be thought of as a filter, whose response is directly related to the presence of the object at a given spatial coordinate in the image, so that a convolution can be applied to look for the object. We propose a novel method that integrates these two insights in order to fully exploit the recursive nature of language. Additionally, during the upsampling process, we take advantage of the intermediate information generated when downsampling the image, so that detailed segmentations can be obtained. We compare our method against the state-of-the-art approaches in four standard datasets, in which it surpasses all previous methods in six of eight of the splits for this task.
[ { "created": "Fri, 6 Jul 2018 05:21:06 GMT", "version": "v1" }, { "created": "Sun, 22 Jul 2018 22:31:18 GMT", "version": "v2" } ]
2018-07-24
[ [ "Margffoy-Tuay", "Edgar", "" ], [ "Pérez", "Juan C.", "" ], [ "Botero", "Emilio", "" ], [ "Arbeláez", "Pablo", "" ] ]
We address the problem of segmenting an object given a natural language expression that describes it. Current techniques tackle this task by either (\textit{i}) directly or recursively merging linguistic and visual information in the channel dimension and then performing convolutions; or by (\textit{ii}) mapping the expression to a space in which it can be thought of as a filter, whose response is directly related to the presence of the object at a given spatial coordinate in the image, so that a convolution can be applied to look for the object. We propose a novel method that integrates these two insights in order to fully exploit the recursive nature of language. Additionally, during the upsampling process, we take advantage of the intermediate information generated when downsampling the image, so that detailed segmentations can be obtained. We compare our method against the state-of-the-art approaches in four standard datasets, in which it surpasses all previous methods in six of eight of the splits for this task.
1611.02174
Yiyi Liao
Yiyi Liao, Lichao Huang, Yue Wang, Sarath Kodagoda, Yinan Yu, Yong Liu
Parse Geometry from a Line: Monocular Depth Estimation with Partial Laser Observation
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many standard robotic platforms are equipped with at least a fixed 2D laser range finder and a monocular camera. Although those platforms do not have sensors for 3D depth sensing capability, knowledge of depth is an essential part in many robotics activities. Therefore, recently, there is an increasing interest in depth estimation using monocular images. As this task is inherently ambiguous, the data-driven estimated depth might be unreliable in robotics applications. In this paper, we have attempted to improve the precision of monocular depth estimation by introducing 2D planar observation from the remaining laser range finder without extra cost. Specifically, we construct a dense reference map from the sparse laser range data, redefining the depth estimation task as estimating the distance between the real and the reference depth. To solve the problem, we construct a novel residual of residual neural network, and tightly combine the classification and regression losses for continuous depth estimation. Experimental results suggest that our method achieves considerable promotion compared to the state-of-the-art methods on both NYUD2 and KITTI, validating the effectiveness of our method on leveraging the additional sensory information. We further demonstrate the potential usage of our method in obstacle avoidance where our methodology provides comprehensive depth information compared to the solution using monocular camera or 2D laser range finder alone.
[ { "created": "Mon, 17 Oct 2016 21:12:07 GMT", "version": "v1" } ]
2016-11-08
[ [ "Liao", "Yiyi", "" ], [ "Huang", "Lichao", "" ], [ "Wang", "Yue", "" ], [ "Kodagoda", "Sarath", "" ], [ "Yu", "Yinan", "" ], [ "Liu", "Yong", "" ] ]
Many standard robotic platforms are equipped with at least a fixed 2D laser range finder and a monocular camera. Although those platforms do not have sensors for 3D depth sensing capability, knowledge of depth is an essential part in many robotics activities. Therefore, recently, there is an increasing interest in depth estimation using monocular images. As this task is inherently ambiguous, the data-driven estimated depth might be unreliable in robotics applications. In this paper, we have attempted to improve the precision of monocular depth estimation by introducing 2D planar observation from the remaining laser range finder without extra cost. Specifically, we construct a dense reference map from the sparse laser range data, redefining the depth estimation task as estimating the distance between the real and the reference depth. To solve the problem, we construct a novel residual of residual neural network, and tightly combine the classification and regression losses for continuous depth estimation. Experimental results suggest that our method achieves considerable promotion compared to the state-of-the-art methods on both NYUD2 and KITTI, validating the effectiveness of our method on leveraging the additional sensory information. We further demonstrate the potential usage of our method in obstacle avoidance where our methodology provides comprehensive depth information compared to the solution using monocular camera or 2D laser range finder alone.
2006.02585
Victor Sanches Portella
Huang Fang, Nicholas J. A. Harvey, Victor S. Portella, Michael P. Friedlander
Online mirror descent and dual averaging: keeping pace in the dynamic case
27 pages main text, 37 pages in total, 1 figure. Version 2: Revision for camera-ready version of ICML 2020, with a new abstract, new discussion and acknowledgements sections, and some other minor modifications. Version 3: Technical report version of JMLR submission, with minor revisions, full proofs, and more details on the setting with composite functions
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online mirror descent (OMD) and dual averaging (DA) -- two fundamental algorithms for online convex optimization -- are known to have very similar (and sometimes identical) performance guarantees when used with a fixed learning rate. Under dynamic learning rates, however, OMD is provably inferior to DA and suffers a linear regret, even in common settings such as prediction with expert advice. We modify the OMD algorithm through a simple technique that we call stabilization. We give essentially the same abstract regret bound for OMD with stabilization and for DA by modifying the classical OMD convergence analysis in a careful and modular way that allows for straightforward and flexible proofs. Simple corollaries of these bounds show that OMD with stabilization and DA enjoy the same performance guarantees in many applications -- even under dynamic learning rates. We also shed light on the similarities between OMD and DA and show simple conditions under which stabilized-OMD and DA generate the same iterates.
[ { "created": "Wed, 3 Jun 2020 23:41:40 GMT", "version": "v1" }, { "created": "Sat, 4 Jul 2020 03:49:36 GMT", "version": "v2" }, { "created": "Fri, 3 Sep 2021 23:23:53 GMT", "version": "v3" } ]
2021-09-07
[ [ "Fang", "Huang", "" ], [ "Harvey", "Nicholas J. A.", "" ], [ "Portella", "Victor S.", "" ], [ "Friedlander", "Michael P.", "" ] ]
Online mirror descent (OMD) and dual averaging (DA) -- two fundamental algorithms for online convex optimization -- are known to have very similar (and sometimes identical) performance guarantees when used with a fixed learning rate. Under dynamic learning rates, however, OMD is provably inferior to DA and suffers a linear regret, even in common settings such as prediction with expert advice. We modify the OMD algorithm through a simple technique that we call stabilization. We give essentially the same abstract regret bound for OMD with stabilization and for DA by modifying the classical OMD convergence analysis in a careful and modular way that allows for straightforward and flexible proofs. Simple corollaries of these bounds show that OMD with stabilization and DA enjoy the same performance guarantees in many applications -- even under dynamic learning rates. We also shed light on the similarities between OMD and DA and show simple conditions under which stabilized-OMD and DA generate the same iterates.
1905.06650
Rui-Xiao Zhang
Rui-Xiao Zhang, Tianchi Huang, Chenglei Wu, Lifeng Sun
Reactive Video Caching via long-short-term fusion approach
6 pages, 5 figures
null
null
null
cs.MM cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video caching has been a basic network functionality in today's network architectures. Although the abundance of caching replacement algorithms has been proposed recently, these methods all suffer from a key limitation: due to their immature rules, inaccurate feature engineering or unresponsive model update, they cannot strike a balance between the long-term history and short-term sudden events. To address this concern, we propose LA-E2, a long-short-term fusion caching replacement approach, which is based on a learning-aided exploration-exploitation process. Specifically, by effectively combining the deep neural network (DNN) based prediction with the online exploitation-exploration process through a \emph{top-k} method, LA-E2 can both make use of the historical information and adapt to the constantly changing popularity responsively. Through the extensive experiments in two real-world datasets, we show that LA-E2 can achieve state-of-the-art performance and generalize well. Especially when the cache size is small, our approach can outperform the baselines by 17.5\%-68.7\% higher in total hit rate.
[ { "created": "Thu, 16 May 2019 10:45:06 GMT", "version": "v1" } ]
2019-05-17
[ [ "Zhang", "Rui-Xiao", "" ], [ "Huang", "Tianchi", "" ], [ "Wu", "Chenglei", "" ], [ "Sun", "Lifeng", "" ] ]
Video caching has been a basic network functionality in today's network architectures. Although the abundance of caching replacement algorithms has been proposed recently, these methods all suffer from a key limitation: due to their immature rules, inaccurate feature engineering or unresponsive model update, they cannot strike a balance between the long-term history and short-term sudden events. To address this concern, we propose LA-E2, a long-short-term fusion caching replacement approach, which is based on a learning-aided exploration-exploitation process. Specifically, by effectively combining the deep neural network (DNN) based prediction with the online exploitation-exploration process through a \emph{top-k} method, LA-E2 can both make use of the historical information and adapt to the constantly changing popularity responsively. Through the extensive experiments in two real-world datasets, we show that LA-E2 can achieve state-of-the-art performance and generalize well. Especially when the cache size is small, our approach can outperform the baselines by 17.5\%-68.7\% higher in total hit rate.
1808.08376
Antoine Genitrini
Olivier Bodini and Antoine Genitrini and Mehdi Naima
Ranked Schr\"oder Trees
null
null
null
null
cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biology, a phylogenetic tree is a tool to represent the evolutionary relationship between species. Unfortunately, the classical Schr\"oder tree model is not adapted to take into account the chronology between the branching nodes. In particular, it does not answer the question: how many different phylogenetic stories lead to the creation of n species and what is the average time to get there? In this paper, we enrich this model in two distinct ways in order to obtain two ranked tree models for phylogenetics, i.e. models coding chronology. For that purpose, we first develop a model of (strongly) increasing Schr\"oder trees, symbolically described in the classical context of increasing labeling. Then we introduce a generalization for the labeling with some unusual order constraint in Analytic Combinatorics (namely the weakly increasing trees). Although these models are direct extensions of the Schr\"oder tree model, it appears that they are also in one-to-one correspondence with several classical combinatorial objects. Through the paper, we present these links, exhibit some parameters in typical large trees and conclude the studies with efficient uniform samplers.
[ { "created": "Sat, 25 Aug 2018 08:31:51 GMT", "version": "v1" }, { "created": "Sun, 23 Dec 2018 12:07:22 GMT", "version": "v2" }, { "created": "Mon, 31 Dec 2018 20:58:16 GMT", "version": "v3" }, { "created": "Sat, 5 Jan 2019 22:02:11 GMT", "version": "v4" }, { "created": "Mon, 14 Jan 2019 11:23:57 GMT", "version": "v5" } ]
2019-01-15
[ [ "Bodini", "Olivier", "" ], [ "Genitrini", "Antoine", "" ], [ "Naima", "Mehdi", "" ] ]
In biology, a phylogenetic tree is a tool to represent the evolutionary relationship between species. Unfortunately, the classical Schr\"oder tree model is not adapted to take into account the chronology between the branching nodes. In particular, it does not answer the question: how many different phylogenetic stories lead to the creation of n species and what is the average time to get there? In this paper, we enrich this model in two distinct ways in order to obtain two ranked tree models for phylogenetics, i.e. models coding chronology. For that purpose, we first develop a model of (strongly) increasing Schr\"oder trees, symbolically described in the classical context of increasing labeling. Then we introduce a generalization for the labeling with some unusual order constraint in Analytic Combinatorics (namely the weakly increasing trees). Although these models are direct extensions of the Schr\"oder tree model, it appears that they are also in one-to-one correspondence with several classical combinatorial objects. Through the paper, we present these links, exhibit some parameters in typical large trees and conclude the studies with efficient uniform samplers.
2306.15562
Josep Lluis Berral
Gonzalo Gomez-Sanchez, Aaron Call, Xavier Teruel, Lorena Alonso, Ignasi Moran, Miguel Angel Perez, David Torrents, Josep Ll. Berral
Challenges and Opportunities for RISC-V Architectures towards Genomics-based Workloads
null
Presented at the ISC High-Performance Computing 2023
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The use of large-scale supercomputing architectures is a hard requirement for scientific computing Big-Data applications. An example is genomics analytics, where millions of data transformations and tests per patient need to be done to find relevant clinical indicators. Therefore, to ensure open and broad access to high-performance technologies, governments, and academia are pushing toward the introduction of novel computing architectures in large-scale scientific environments. This is the case of RISC-V, an open-source and royalty-free instruction-set architecture. To evaluate such technologies, here we present the Variant-Interaction Analytics use case benchmarking suite and datasets. Through this use case, we search for possible genetic interactions using computational and statistical methods, providing a representative case for heavy ETL (Extract, Transform, Load) data processing. Current implementations are implemented in x86-based supercomputers (e.g. MareNostrum-IV at the Barcelona Supercomputing Center (BSC)), and future steps propose RISC-V as part of the next MareNostrum generations. Here we describe the Variant Interaction Use Case, highlighting the characteristics leveraging high-performance computing, indicating the caveats and challenges towards the next RISC-V developments and designs to come from a first comparison between x86 and RISC-V architectures on real Variant Interaction executions over real hardware implementations.
[ { "created": "Tue, 27 Jun 2023 15:37:36 GMT", "version": "v1" } ]
2023-06-28
[ [ "Gomez-Sanchez", "Gonzalo", "" ], [ "Call", "Aaron", "" ], [ "Teruel", "Xavier", "" ], [ "Alonso", "Lorena", "" ], [ "Moran", "Ignasi", "" ], [ "Perez", "Miguel Angel", "" ], [ "Torrents", "David", "" ], [ "Berral", "Josep Ll.", "" ] ]
The use of large-scale supercomputing architectures is a hard requirement for scientific computing Big-Data applications. An example is genomics analytics, where millions of data transformations and tests per patient need to be done to find relevant clinical indicators. Therefore, to ensure open and broad access to high-performance technologies, governments, and academia are pushing toward the introduction of novel computing architectures in large-scale scientific environments. This is the case of RISC-V, an open-source and royalty-free instruction-set architecture. To evaluate such technologies, here we present the Variant-Interaction Analytics use case benchmarking suite and datasets. Through this use case, we search for possible genetic interactions using computational and statistical methods, providing a representative case for heavy ETL (Extract, Transform, Load) data processing. Current implementations run on x86-based supercomputers (e.g. MareNostrum-IV at the Barcelona Supercomputing Center (BSC)), and future steps propose RISC-V as part of the next MareNostrum generations. Here we describe the Variant Interaction Use Case, highlighting the characteristics leveraging high-performance computing, indicating the caveats and challenges towards the next RISC-V developments and designs to come from a first comparison between x86 and RISC-V architectures on real Variant Interaction executions over real hardware implementations.
2311.00706
Jian Chen
Ross Geuy and Nate Rising and Tiancheng Shi, Meng Ling, Jian Chen
Can AI Mitigate Human Perceptual Biases? A Pilot Study
This paper was accepted IEEE VIS 2023 VISxVISION Workshop
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
We present results from a pilot experiment to measure if machine recommendations can debias human perceptual biases in visualization tasks. We specifically studied the ``pull-down'' effect, i.e., people underestimate the average position of lines, for the task of estimating the ensemble average of data points in line charts. These line charts can show for example temperature or precipitation in 12 months. Six participants estimated ensemble averages with or without an AI assistant. The assistant, when available, responded at three different speeds to resemble the conditions of a human collaborator who may delay his or her responses. Our pilot study showed that participants were faster with AI assistance in ensemble tasks, compared to the baseline without AI assistance. Although ``pull-down'' biases were reduced, the effect of AI assistance was not statistically significant. Also, delaying AI responses had no significant impact on human decision accuracy. We discuss the implications of these preliminary results for subsequent studies.
[ { "created": "Tue, 10 Oct 2023 21:35:15 GMT", "version": "v1" } ]
2023-11-03
[ [ "Geuy", "Ross", "" ], [ "Rising", "Nate", "" ], [ "Shi", "Tiancheng", "" ], [ "Ling", "Meng", "" ], [ "Chen", "Jian", "" ] ]
We present results from a pilot experiment to measure if machine recommendations can debias human perceptual biases in visualization tasks. We specifically studied the ``pull-down'' effect, i.e., people underestimate the average position of lines, for the task of estimating the ensemble average of data points in line charts. These line charts can show for example temperature or precipitation in 12 months. Six participants estimated ensemble averages with or without an AI assistant. The assistant, when available, responded at three different speeds to resemble the conditions of a human collaborator who may delay his or her responses. Our pilot study showed that participants were faster with AI assistance in ensemble tasks, compared to the baseline without AI assistance. Although ``pull-down'' biases were reduced, the effect of AI assistance was not statistically significant. Also, delaying AI responses had no significant impact on human decision accuracy. We discuss the implications of these preliminary results for subsequent studies.
0809.4821
Jean-Marie Vanherpe
Jean-Luc Fouquet (LIFO), Jean-Marie Vanherpe (LIFO)
On Fan Raspaud Conjecture
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A conjecture of Fan and Raspaud [3] asserts that every bridgeless cubic graph contains three perfect matchings with empty intersection. Kaiser and Raspaud [6] suggested a possible approach to this problem based on the concept of a balanced join in an embedded graph. We give here some new results concerning this conjecture and prove that a minimum counterexample must have at least 32 vertices.
[ { "created": "Sun, 28 Sep 2008 05:56:06 GMT", "version": "v1" } ]
2008-09-30
[ [ "Fouquet", "Jean-Luc", "", "LIFO" ], [ "Vanherpe", "Jean-Marie", "", "LIFO" ] ]
A conjecture of Fan and Raspaud [3] asserts that every bridgeless cubic graph contains three perfect matchings with empty intersection. Kaiser and Raspaud [6] suggested a possible approach to this problem based on the concept of a balanced join in an embedded graph. We give here some new results concerning this conjecture and prove that a minimum counterexample must have at least 32 vertices.
2007.09222
Haoran Wang
Haoran Wang, Tong Shen, Wei Zhang, Lingyu Duan, Tao Mei
Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation
Accepted to ECCV 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite great progress in supervised semantic segmentation, a large performance drop is usually observed when deploying the model in the wild. Domain adaptation methods tackle the issue by aligning the source domain and the target domain. However, most existing methods attempt to perform the alignment from a holistic view, ignoring the underlying class-level data structure in the target domain. To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains. We adopt a fine-grained domain discriminator that not only plays as a domain distinguisher, but also differentiates domains at class level. The traditional binary domain labels are also generalized to domain encodings as the supervision signal to guide the fine-grained feature alignment. An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment compared to other state-of-the-art methods. Our method is easy to implement and its effectiveness is evaluated on three classical domain adaptation tasks, i.e., GTA5 to Cityscapes, SYNTHIA to Cityscapes and Cityscapes to Cross-City. Large performance gains show that our method outperforms other global feature alignment based and class-wise alignment based counterparts. The code is publicly available at https://github.com/JDAI-CV/FADA.
[ { "created": "Fri, 17 Jul 2020 20:50:59 GMT", "version": "v1" } ]
2020-07-21
[ [ "Wang", "Haoran", "" ], [ "Shen", "Tong", "" ], [ "Zhang", "Wei", "" ], [ "Duan", "Lingyu", "" ], [ "Mei", "Tao", "" ] ]
Despite great progress in supervised semantic segmentation, a large performance drop is usually observed when deploying the model in the wild. Domain adaptation methods tackle the issue by aligning the source domain and the target domain. However, most existing methods attempt to perform the alignment from a holistic view, ignoring the underlying class-level data structure in the target domain. To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains. We adopt a fine-grained domain discriminator that not only plays as a domain distinguisher, but also differentiates domains at class level. The traditional binary domain labels are also generalized to domain encodings as the supervision signal to guide the fine-grained feature alignment. An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment compared to other state-of-the-art methods. Our method is easy to implement and its effectiveness is evaluated on three classical domain adaptation tasks, i.e., GTA5 to Cityscapes, SYNTHIA to Cityscapes and Cityscapes to Cross-City. Large performance gains show that our method outperforms other global feature alignment based and class-wise alignment based counterparts. The code is publicly available at https://github.com/JDAI-CV/FADA.
2011.12827
Rafal Kucharski
Rafa{\l} Kucharski, Oded Cats
MaaSSim -- agent-based two-sided mobility platform simulator
null
null
null
null
cs.MA physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Two-sided mobility platforms, such as Uber and Lyft, widely emerged in the urban mobility landscape, bringing disruptive changes to transportation systems worldwide. This calls for a simulation framework where researchers from various and across disciplines may introduce models aimed at representing the dynamics of platform-driven urban mobility systems. In this work, we present MaaSSim, an agent-based simulator reproducing the transport system used by two kinds of agents: (i) travellers, requesting to travel from their origin to destination at a given time, and (ii) drivers supplying their travel needs by offering them rides. An intermediate agent, the platform, allows demand to be matched with supply. Agents are decision makers, specifically, travellers may decide which mode they use or reject an incoming offer. Similarly, drivers may opt-out from the system or reject incoming requests. All of the above behaviours are modelled through user-defined modules, representing agents' taste variations (heterogeneity), their previous experiences (learning) and available information (system control). MaaSSim is an open-source library available at a public repository github.com/RafalKucharskiPK/MaaSSim, along with a set of tutorials and reproducible use-case scenarios.
[ { "created": "Wed, 25 Nov 2020 15:32:13 GMT", "version": "v1" } ]
2020-11-26
[ [ "Kucharski", "Rafał", "" ], [ "Cats", "Oded", "" ] ]
Two-sided mobility platforms, such as Uber and Lyft, widely emerged in the urban mobility landscape, bringing disruptive changes to transportation systems worldwide. This calls for a simulation framework where researchers from various and across disciplines may introduce models aimed at representing the dynamics of platform-driven urban mobility systems. In this work, we present MaaSSim, an agent-based simulator reproducing the transport system used by two kinds of agents: (i) travellers, requesting to travel from their origin to destination at a given time, and (ii) drivers supplying their travel needs by offering them rides. An intermediate agent, the platform, allows demand to be matched with supply. Agents are decision makers, specifically, travellers may decide which mode they use or reject an incoming offer. Similarly, drivers may opt-out from the system or reject incoming requests. All of the above behaviours are modelled through user-defined modules, representing agents' taste variations (heterogeneity), their previous experiences (learning) and available information (system control). MaaSSim is an open-source library available at a public repository github.com/RafalKucharskiPK/MaaSSim, along with a set of tutorials and reproducible use-case scenarios.
1711.10125
Yen-Chang Hsu
Yen-Chang Hsu, Zhaoyang Lv, Zsolt Kira
Learning to cluster in order to transfer across domains and tasks
ICLR 2018
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.
[ { "created": "Tue, 28 Nov 2017 04:59:58 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2018 15:54:59 GMT", "version": "v2" }, { "created": "Sat, 17 Mar 2018 04:42:49 GMT", "version": "v3" } ]
2018-03-20
[ [ "Hsu", "Yen-Chang", "" ], [ "Lv", "Zhaoyang", "" ], [ "Kira", "Zsolt", "" ] ]
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.
2402.13547
Zhipeng Xu
Zhipeng Xu, Zhenghao Liu, Yibin Liu, Chenyan Xiong, Yukun Yan, Shuo Wang, Shi Yu, Zhiyuan Liu, Ge Yu
ActiveRAG: Revealing the Treasures of Knowledge via Active Learning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks. However, current RAG models position LLMs as passive knowledge receptors, thereby restricting their capacity for learning and comprehending external knowledge. In this paper, we present ActiveRAG, an innovative RAG framework that shifts from passive knowledge acquisition to an active learning mechanism. This approach utilizes the Knowledge Construction mechanism to develop a deeper understanding of external knowledge by associating it with previously acquired or memorized knowledge. Subsequently, it designs the Cognitive Nexus mechanism to incorporate the outcomes from both chains of thought and knowledge construction, thereby calibrating the intrinsic cognition of LLMs. Our experimental results demonstrate that ActiveRAG surpasses previous RAG models, achieving a 5% improvement on question-answering datasets. All data and codes are available at https://github.com/OpenMatch/ActiveRAG.
[ { "created": "Wed, 21 Feb 2024 06:04:53 GMT", "version": "v1" } ]
2024-02-22
[ [ "Xu", "Zhipeng", "" ], [ "Liu", "Zhenghao", "" ], [ "Liu", "Yibin", "" ], [ "Xiong", "Chenyan", "" ], [ "Yan", "Yukun", "" ], [ "Wang", "Shuo", "" ], [ "Yu", "Shi", "" ], [ "Liu", "Zhiyuan", "" ], [ "Yu", "Ge", "" ] ]
Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks. However, current RAG models position LLMs as passive knowledge receptors, thereby restricting their capacity for learning and comprehending external knowledge. In this paper, we present ActiveRAG, an innovative RAG framework that shifts from passive knowledge acquisition to an active learning mechanism. This approach utilizes the Knowledge Construction mechanism to develop a deeper understanding of external knowledge by associating it with previously acquired or memorized knowledge. Subsequently, it designs the Cognitive Nexus mechanism to incorporate the outcomes from both chains of thought and knowledge construction, thereby calibrating the intrinsic cognition of LLMs. Our experimental results demonstrate that ActiveRAG surpasses previous RAG models, achieving a 5% improvement on question-answering datasets. All data and codes are available at https://github.com/OpenMatch/ActiveRAG.
2009.09447
Lu Yang
Lu Yang, Qing Song, Zhihui Wang, Mengjie Hu, Chun Liu, Xueshi Xin, Wenhe Jia, Songcen Xu
Renovating Parsing R-CNN for Accurate Multiple Human Parsing
Accepted by ECCV 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple human parsing aims to segment various human parts and associate each part with the corresponding instance simultaneously. This is a very challenging task due to the diverse human appearance, semantic ambiguity of different body parts, and complex background. Through analysis of the multiple human parsing task, we observe that human-centric global perception and accurate instance-level parsing scoring are crucial for obtaining high-quality results. But most state-of-the-art methods have not paid enough attention to these issues. To reverse this phenomenon, we present Renovating Parsing R-CNN (RP R-CNN), which introduces a global semantic enhanced feature pyramid network and a parsing re-scoring network into the existing high-performance pipeline. The proposed RP R-CNN adopts global semantic representation to enhance multi-scale features for generating human parsing maps, and regresses a confidence score to represent its quality. Extensive experiments show that RP R-CNN performs favorably against state-of-the-art methods on CIHP and MHP-v2 datasets. Code and models are available at https://github.com/soeaver/RP-R-CNN.
[ { "created": "Sun, 20 Sep 2020 14:55:35 GMT", "version": "v1" } ]
2020-09-22
[ [ "Yang", "Lu", "" ], [ "Song", "Qing", "" ], [ "Wang", "Zhihui", "" ], [ "Hu", "Mengjie", "" ], [ "Liu", "Chun", "" ], [ "Xin", "Xueshi", "" ], [ "Jia", "Wenhe", "" ], [ "Xu", "Songcen", "" ] ]
Multiple human parsing aims to segment various human parts and associate each part with the corresponding instance simultaneously. This is a very challenging task due to the diverse human appearance, semantic ambiguity of different body parts, and complex background. Through analysis of the multiple human parsing task, we observe that human-centric global perception and accurate instance-level parsing scoring are crucial for obtaining high-quality results. But most state-of-the-art methods have not paid enough attention to these issues. To reverse this phenomenon, we present Renovating Parsing R-CNN (RP R-CNN), which introduces a global semantic enhanced feature pyramid network and a parsing re-scoring network into the existing high-performance pipeline. The proposed RP R-CNN adopts global semantic representation to enhance multi-scale features for generating human parsing maps, and regresses a confidence score to represent its quality. Extensive experiments show that RP R-CNN performs favorably against state-of-the-art methods on CIHP and MHP-v2 datasets. Code and models are available at https://github.com/soeaver/RP-R-CNN.
2111.12685
Tao Hu
Tao Hu, Kripasindhu Sarkar, Lingjie Liu, Matthias Zwicker, Christian Theobalt
EgoRenderer: Rendering Human Avatars from Egocentric Camera Images
ICCV 2021. https://vcai.mpi-inf.mpg.de/projects/EgoRenderer/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera that is mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, and to extract textures from egocentric inputs. In addition, to encode dynamic appearances, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the textures into a combined feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
[ { "created": "Wed, 24 Nov 2021 18:33:02 GMT", "version": "v1" } ]
2021-11-25
[ [ "Hu", "Tao", "" ], [ "Sarkar", "Kripasindhu", "" ], [ "Liu", "Lingjie", "" ], [ "Zwicker", "Matthias", "" ], [ "Theobalt", "Christian", "" ] ]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera that is mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, and to extract textures from egocentric inputs. In addition, to encode dynamic appearances, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the textures into a combined feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
2106.15083
Peter Kulits
Peter Kulits and Jake Wall and Anka Bedetti and Michelle Henley and Sara Beery
ElephantBook: A Semi-Automated Human-in-the-Loop System for Elephant Re-Identification
null
null
10.1145/3460112.3471947
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
African elephants are vital to their ecosystems, but their populations are threatened by a rise in human-elephant conflict and poaching. Monitoring population dynamics is essential in conservation efforts; however, tracking elephants is a difficult task, usually relying on the invasive and sometimes dangerous placement of GPS collars. Although there have been many recent successes in the use of computer vision techniques for automated identification of other species, identification of elephants is extremely difficult and typically requires expertise as well as familiarity with elephants in the population. We have built and deployed a web-based platform and database for human-in-the-loop re-identification of elephants combining manual attribute labeling and state-of-the-art computer vision algorithms, known as ElephantBook. Our system is currently in use at the Mara Elephant Project, helping monitor the protected and at-risk population of elephants in the Greater Maasai Mara ecosystem. ElephantBook makes elephant re-identification usable by non-experts and scalable for use by multiple conservation NGOs.
[ { "created": "Tue, 29 Jun 2021 04:18:22 GMT", "version": "v1" }, { "created": "Wed, 30 Jun 2021 01:46:20 GMT", "version": "v2" } ]
2021-07-01
[ [ "Kulits", "Peter", "" ], [ "Wall", "Jake", "" ], [ "Bedetti", "Anka", "" ], [ "Henley", "Michelle", "" ], [ "Beery", "Sara", "" ] ]
African elephants are vital to their ecosystems, but their populations are threatened by a rise in human-elephant conflict and poaching. Monitoring population dynamics is essential in conservation efforts; however, tracking elephants is a difficult task, usually relying on the invasive and sometimes dangerous placement of GPS collars. Although there have been many recent successes in the use of computer vision techniques for automated identification of other species, identification of elephants is extremely difficult and typically requires expertise as well as familiarity with elephants in the population. We have built and deployed a web-based platform and database for human-in-the-loop re-identification of elephants combining manual attribute labeling and state-of-the-art computer vision algorithms, known as ElephantBook. Our system is currently in use at the Mara Elephant Project, helping monitor the protected and at-risk population of elephants in the Greater Maasai Mara ecosystem. ElephantBook makes elephant re-identification usable by non-experts and scalable for use by multiple conservation NGOs.
1901.00172
Shaobo Han
Shaobo Han and David B. Dunson
Supervised Multiscale Dimension Reduction for Spatial Interaction Networks
30 pages, 12 figures, revised for clarity and conciseness
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a multiscale supervised dimension reduction method for SPatial Interaction Network (SPIN) data, which consist of a collection of spatially coordinated interactions. This type of predictor arises when the sampling unit of data is composed of a collection of primitive variables, each of them being essentially unique, so that it becomes necessary to group the variables in order to simplify the representation and enhance interpretability. In this paper, we introduce an empirical Bayes approach called spinlets, which first constructs a partitioning tree to guide the reduction over multiple spatial granularities, and then refines the representation of predictors according to the relevance to the response. We consider an inverse Poisson regression model and propose a new multiscale generalized double Pareto prior, which is induced via a tree-structured parameter expansion scheme. Our approach is motivated by an application in soccer analytics, in which we obtain compact vectorial representations and readily interpretable visualizations of the complex network objects, supervised by the response of interest.
[ { "created": "Tue, 1 Jan 2019 16:01:36 GMT", "version": "v1" }, { "created": "Mon, 28 Jan 2019 14:46:17 GMT", "version": "v2" }, { "created": "Sat, 8 Jun 2019 20:28:55 GMT", "version": "v3" } ]
2019-06-11
[ [ "Han", "Shaobo", "" ], [ "Dunson", "David B.", "" ] ]
We introduce a multiscale supervised dimension reduction method for SPatial Interaction Network (SPIN) data, which consist of a collection of spatially coordinated interactions. This type of predictor arises when the sampling unit of data is composed of a collection of primitive variables, each of them being essentially unique, so that it becomes necessary to group the variables in order to simplify the representation and enhance interpretability. In this paper, we introduce an empirical Bayes approach called spinlets, which first constructs a partitioning tree to guide the reduction over multiple spatial granularities, and then refines the representation of predictors according to the relevance to the response. We consider an inverse Poisson regression model and propose a new multiscale generalized double Pareto prior, which is induced via a tree-structured parameter expansion scheme. Our approach is motivated by an application in soccer analytics, in which we obtain compact vectorial representations and readily interpretable visualizations of the complex network objects, supervised by the response of interest.
2308.04404
Sajjad Emdadi Mahdimahalleh
Sajjad Emdadi Mahdimahalleh
Revolutionizing Wireless Networks with Federated Learning: A Comprehensive Review
null
null
null
null
cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
These days with the rising computational capabilities of wireless user equipment such as smart phones, tablets, and vehicles, along with growing concerns about sharing private data, a novel machine learning model called federated learning (FL) has emerged. FL enables the separation of data acquisition and computation at the central unit, which is different from centralized learning that occurs in a data center. FL is typically used in a wireless edge network where communication resources are limited and unreliable. Bandwidth constraints necessitate scheduling only a subset of UEs for updates in each iteration, and because the wireless medium is shared, transmissions are susceptible to interference and are not assured. The article discusses the significance of Machine Learning in wireless communication and highlights Federated Learning (FL) as a novel approach that could play a vital role in future mobile networks, particularly 6G and beyond.
[ { "created": "Tue, 1 Aug 2023 22:32:10 GMT", "version": "v1" } ]
2023-08-09
[ [ "Mahdimahalleh", "Sajjad Emdadi", "" ] ]
With the rising computational capabilities of wireless user equipment (UEs) such as smartphones, tablets, and vehicles, along with growing concerns about sharing private data, a novel machine learning model called federated learning (FL) has emerged. FL enables the separation of data acquisition and computation at the central unit, in contrast to centralized learning, which occurs in a data center. FL is typically used in a wireless edge network where communication resources are limited and unreliable. Bandwidth constraints necessitate scheduling only a subset of UEs for updates in each iteration, and because the wireless medium is shared, transmissions are susceptible to interference and are not assured. The article discusses the significance of machine learning in wireless communication and highlights FL as a novel approach that could play a vital role in future mobile networks, particularly 6G and beyond.
2211.13449
Hongkai Zheng
Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, Anima Anandkumar
Fast Sampling of Diffusion Models via Operator Learning
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models have found widespread adoption in various areas. However, their sampling process is slow because it requires hundreds to thousands of network evaluations to emulate a continuous process defined by differential equations. In this work, we use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models. Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method that generates images with only one model forward pass. We propose diffusion model sampling with neural operator (DSNO) that maps the initial condition, i.e., Gaussian distribution, to the continuous-time solution trajectory of the reverse diffusion process. To model the temporal correlations along the trajectory, we introduce temporal convolution layers that are parameterized in the Fourier space into the given diffusion model backbone. We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
[ { "created": "Thu, 24 Nov 2022 07:30:27 GMT", "version": "v1" }, { "created": "Tue, 31 Jan 2023 22:45:41 GMT", "version": "v2" }, { "created": "Sat, 22 Jul 2023 08:47:10 GMT", "version": "v3" } ]
2023-07-25
[ [ "Zheng", "Hongkai", "" ], [ "Nie", "Weili", "" ], [ "Vahdat", "Arash", "" ], [ "Azizzadenesheli", "Kamyar", "" ], [ "Anandkumar", "Anima", "" ] ]
Diffusion models have found widespread adoption in various areas. However, their sampling process is slow because it requires hundreds to thousands of network evaluations to emulate a continuous process defined by differential equations. In this work, we use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models. Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method that generates images with only one model forward pass. We propose diffusion model sampling with neural operator (DSNO) that maps the initial condition, i.e., Gaussian distribution, to the continuous-time solution trajectory of the reverse diffusion process. To model the temporal correlations along the trajectory, we introduce temporal convolution layers that are parameterized in the Fourier space into the given diffusion model backbone. We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
2109.13910
Shiyang Lu
Shiyang Lu, Rui Wang, Yinglong Miao, Chaitanya Mitash, Kostas Bekris
Online Object Model Reconstruction and Reuse for Lifelong Improvement of Robot Manipulation
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This work proposes a robotic pipeline for picking and constrained placement of objects without geometric shape priors. Compared to recent efforts developed for similar tasks, where every object was assumed to be novel, the proposed system recognizes previously manipulated objects and performs online model reconstruction and reuse. Over a lifelong manipulation process, the system keeps learning features of objects it has interacted with and updates their reconstructed models. Whenever an instance of a previously manipulated object reappears, the system aims to first recognize it and then register its previously reconstructed model given the current observation. This step greatly reduces object shape uncertainty allowing the system to even reason for parts of objects, which are currently not observable. This also results in better manipulation efficiency as it reduces the need for active perception of the target object during manipulation. To get a reusable reconstructed model, the proposed pipeline adopts: i) TSDF for object representation, and ii) a variant of the standard particle filter algorithm for pose estimation and tracking of the partial object model. Furthermore, an effective way to construct and maintain a dataset of manipulated objects is presented. A sequence of real-world manipulation experiments is performed. They show how future manipulation tasks become more effective and efficient by reusing reconstructed models of previously manipulated objects, which were generated during their prior manipulation, instead of treating objects as novel every time.
[ { "created": "Tue, 28 Sep 2021 17:56:01 GMT", "version": "v1" }, { "created": "Mon, 23 May 2022 02:53:18 GMT", "version": "v2" } ]
2022-05-24
[ [ "Lu", "Shiyang", "" ], [ "Wang", "Rui", "" ], [ "Miao", "Yinglong", "" ], [ "Mitash", "Chaitanya", "" ], [ "Bekris", "Kostas", "" ] ]
This work proposes a robotic pipeline for picking and constrained placement of objects without geometric shape priors. Compared to recent efforts developed for similar tasks, where every object was assumed to be novel, the proposed system recognizes previously manipulated objects and performs online model reconstruction and reuse. Over a lifelong manipulation process, the system keeps learning features of objects it has interacted with and updates their reconstructed models. Whenever an instance of a previously manipulated object reappears, the system aims to first recognize it and then register its previously reconstructed model given the current observation. This step greatly reduces object shape uncertainty allowing the system to even reason for parts of objects, which are currently not observable. This also results in better manipulation efficiency as it reduces the need for active perception of the target object during manipulation. To get a reusable reconstructed model, the proposed pipeline adopts: i) TSDF for object representation, and ii) a variant of the standard particle filter algorithm for pose estimation and tracking of the partial object model. Furthermore, an effective way to construct and maintain a dataset of manipulated objects is presented. A sequence of real-world manipulation experiments is performed. They show how future manipulation tasks become more effective and efficient by reusing reconstructed models of previously manipulated objects, which were generated during their prior manipulation, instead of treating objects as novel every time.
1611.07420
Hongyang Qu
Hongyang Qu, Michalis Smyrnakis and Sandor M. Veres
SMCL - Stochastic Model Checker for Learning in Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A stochastic model checker is presented for analysing the performance of game-theoretic learning algorithms. The method enables the comparison of short-term behaviour of learning algorithms intended for practical use. The procedure of comparison is automated and it can be tuned for accuracy and speed. Users can choose from among various learning algorithms to select a suitable one for a given practical problem. The powerful performance of the method is enabled by a novel behaviour-similarity-relation, which compacts large state spaces into small ones. The stochastic model checking tool is tested on a set of examples classified into four categories to demonstrate the effectiveness of selecting suitable algorithms for distributed decision making.
[ { "created": "Thu, 10 Nov 2016 15:48:08 GMT", "version": "v1" } ]
2016-11-23
[ [ "Qu", "Hongyang", "" ], [ "Smyrnakis", "Michalis", "" ], [ "Veres", "Sandor M.", "" ] ]
A stochastic model checker is presented for analysing the performance of game-theoretic learning algorithms. The method enables the comparison of short-term behaviour of learning algorithms intended for practical use. The procedure of comparison is automated and it can be tuned for accuracy and speed. Users can choose from among various learning algorithms to select a suitable one for a given practical problem. The powerful performance of the method is enabled by a novel behaviour-similarity-relation, which compacts large state spaces into small ones. The stochastic model checking tool is tested on a set of examples classified into four categories to demonstrate the effectiveness of selecting suitable algorithms for distributed decision making.
2108.06165
Berkan Demirel
Berkan Demirel and Ramazan Gokberk Cinbis
Caption Generation on Scenes with Seen and Unseen Object Categories
Accepted for Publication at Image and Vision Computing (IMAVIS)
null
10.1016/j.imavis.2022.104515
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Image caption generation is one of the most challenging problems at the intersection of vision and language domains. In this work, we propose a realistic captioning task where the input scenes may incorporate visual objects with no corresponding visual or textual training examples. For this problem, we propose a detection-driven approach that consists of a single-stage generalized zero-shot detection model to recognize and localize instances of both seen and unseen classes, and a template-based captioning model that transforms detections into sentences. To improve the generalized zero-shot detection model, which provides essential information for captioning, we define effective class representations in terms of class-to-class semantic similarities, and leverage their special structure to construct an effective unseen/seen class confidence score calibration mechanism. We also propose a novel evaluation metric that provides additional insights for the captioning outputs by separately measuring the visual and non-visual contents of generated sentences. Our experiments highlight the importance of studying captioning in the proposed zero-shot setting, and verify the effectiveness of the proposed detection-driven zero-shot captioning approach.
[ { "created": "Fri, 13 Aug 2021 10:43:20 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2022 11:47:46 GMT", "version": "v2" } ]
2022-07-04
[ [ "Demirel", "Berkan", "" ], [ "Cinbis", "Ramazan Gokberk", "" ] ]
Image caption generation is one of the most challenging problems at the intersection of vision and language domains. In this work, we propose a realistic captioning task where the input scenes may incorporate visual objects with no corresponding visual or textual training examples. For this problem, we propose a detection-driven approach that consists of a single-stage generalized zero-shot detection model to recognize and localize instances of both seen and unseen classes, and a template-based captioning model that transforms detections into sentences. To improve the generalized zero-shot detection model, which provides essential information for captioning, we define effective class representations in terms of class-to-class semantic similarities, and leverage their special structure to construct an effective unseen/seen class confidence score calibration mechanism. We also propose a novel evaluation metric that provides additional insights for the captioning outputs by separately measuring the visual and non-visual contents of generated sentences. Our experiments highlight the importance of studying captioning in the proposed zero-shot setting, and verify the effectiveness of the proposed detection-driven zero-shot captioning approach.
2107.04000
Siddharth Ancha
Siddharth Ancha, Gaurav Pathak, Srinivasa G. Narasimhan, David Held
Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
18 pages, Published at Robotics: Science and Systems (RSS) 2021
null
10.15607/rss.2021.xvii.045
null
cs.LG cs.AI cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that a user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine learning based model that forecasts and tracks the motion of the safety envelope efficiently. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians using a light curtain device and show that we can estimate safety envelopes efficiently and effectively. Project website: https://siddancha.github.io/projects/active-safety-envelopes-with-guarantees
[ { "created": "Thu, 8 Jul 2021 17:46:05 GMT", "version": "v1" } ]
2021-07-09
[ [ "Ancha", "Siddharth", "" ], [ "Pathak", "Gaurav", "" ], [ "Narasimhan", "Srinivasa G.", "" ], [ "Held", "David", "" ] ]
To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that a user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine learning based model that forecasts and tracks the motion of the safety envelope efficiently. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians using a light curtain device and show that we can estimate safety envelopes efficiently and effectively. Project website: https://siddancha.github.io/projects/active-safety-envelopes-with-guarantees
2310.06599
Daniel Russo
Christiaan Verwijs, Daniel Russo
Do Agile Scaling Approaches Make A Difference? An Empirical Comparison of Team Effectiveness Across Popular Scaling Approaches
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of Agile methodologies, organizations are exploring strategies to scale development across teams. Various scaling strategies have emerged, from "SAFe" to "LeSS", with some organizations creating their own methods. Despite numerous studies on organizational challenges with these approaches, none have empirically compared their impact on Agile team effectiveness. This study aims to evaluate the effectiveness of Agile teams using different scaling methods, focusing on factors like responsiveness, stakeholder satisfaction, and management approach. We surveyed 15,078 Agile team members and 1,841 stakeholders, followed by statistical analyses. The results showed minor differences in effectiveness across scaling strategies. In essence, the choice of scaling strategy does not significantly impact team effectiveness, and organizations should select based on their culture and management style.
[ { "created": "Tue, 10 Oct 2023 13:06:38 GMT", "version": "v1" } ]
2023-10-11
[ [ "Verwijs", "Christiaan", "" ], [ "Russo", "Daniel", "" ] ]
In the era of Agile methodologies, organizations are exploring strategies to scale development across teams. Various scaling strategies have emerged, from "SAFe" to "LeSS", with some organizations creating their own methods. Despite numerous studies on organizational challenges with these approaches, none have empirically compared their impact on Agile team effectiveness. This study aims to evaluate the effectiveness of Agile teams using different scaling methods, focusing on factors like responsiveness, stakeholder satisfaction, and management approach. We surveyed 15,078 Agile team members and 1,841 stakeholders, followed by statistical analyses. The results showed minor differences in effectiveness across scaling strategies. In essence, the choice of scaling strategy does not significantly impact team effectiveness, and organizations should select based on their culture and management style.