id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
1705.06401
Kostas Alexis PhD
Frank Mascarich, Taylor Wilson, Tung Dang, Shehryar Khattak, Christos Papachristos, Kostas Alexis
Towards Robotically Supported Decommissioning of Nuclear Sites
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper overviews certain radiation detection, perception, and planning challenges for nuclearized robotics that aim to support the waste management and decommissioning mission. To enable the autonomous monitoring, inspection and multi-modal characterization of nuclear sites, we discuss important problems relevant to the tasks of navigation in degraded visual environments, localizability-aware exploration and mapping without any prior knowledge of the environment, as well as robotic radiation detection. Future contributions will focus on each of the relevant problems, will aim to deliver a comprehensive multi-modal mapping result, and will emphasize on extensive field evaluation and system verification.
[ { "created": "Thu, 18 May 2017 02:58:23 GMT", "version": "v1" } ]
2017-05-19
[ [ "Mascarich", "Frank", "" ], [ "Wilson", "Taylor", "" ], [ "Dang", "Tung", "" ], [ "Khattak", "Shehryar", "" ], [ "Papachristos", "Christos", "" ], [ "Alexis", "Kostas", "" ] ]
This paper overviews certain radiation detection, perception, and planning challenges for nuclearized robotics that aim to support the waste management and decommissioning mission. To enable the autonomous monitoring, inspection and multi-modal characterization of nuclear sites, we discuss important problems relevant to the tasks of navigation in degraded visual environments, localizability-aware exploration and mapping without any prior knowledge of the environment, as well as robotic radiation detection. Future contributions will focus on each of the relevant problems, will aim to deliver a comprehensive multi-modal mapping result, and will emphasize on extensive field evaluation and system verification.
1304.2946
Baofeng Wu
Jia Zheng, Baofeng Wu, Yufu Chen, Zhuojun Liu
Constructing $2m$-variable Boolean functions with optimal algebraic immunity based on polar decomposition of $\mathbb{F}_{2^{2m}}^*$
20 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constructing $2m$-variable Boolean functions with optimal algebraic immunity based on decomposition of additive group of the finite field $\mathbb{F}_{2^{2m}}$ seems to be a promising approach since Tu and Deng's work. In this paper, we consider the same problem in a new way. Based on polar decomposition of the multiplicative group of $\mathbb{F}_{2^{2m}}$, we propose a new construction of Boolean functions with optimal algebraic immunity. By a slight modification of it, we obtain a class of balanced Boolean functions achieving optimal algebraic immunity, which also have optimal algebraic degree and high nonlinearity. Computer investigations imply that this class of functions also behave well against fast algebraic attacks.
[ { "created": "Wed, 10 Apr 2013 13:18:05 GMT", "version": "v1" } ]
2013-04-11
[ [ "Zheng", "Jia", "" ], [ "Wu", "Baofeng", "" ], [ "Chen", "Yufu", "" ], [ "Liu", "Zhuojun", "" ] ]
Constructing $2m$-variable Boolean functions with optimal algebraic immunity based on decomposition of additive group of the finite field $\mathbb{F}_{2^{2m}}$ seems to be a promising approach since Tu and Deng's work. In this paper, we consider the same problem in a new way. Based on polar decomposition of the multiplicative group of $\mathbb{F}_{2^{2m}}$, we propose a new construction of Boolean functions with optimal algebraic immunity. By a slight modification of it, we obtain a class of balanced Boolean functions achieving optimal algebraic immunity, which also have optimal algebraic degree and high nonlinearity. Computer investigations imply that this class of functions also behave well against fast algebraic attacks.
2201.09178
Serdar Kadioglu
Xin Wang, Serdar Kadioglu
Dichotomic Pattern Mining with Applications to Intent Prediction from Semi-Structured Clickstream Datasets
The AAAI-22 Workshop on Knowledge Discovery from Unstructured Data in Financial Services (KDF@AAAI'22)
null
null
null
cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce a pattern mining framework that operates on semi-structured datasets and exploits the dichotomy between outcomes. Our approach takes advantage of constraint reasoning to find sequential patterns that occur frequently and exhibit desired properties. This allows the creation of novel pattern embeddings that are useful for knowledge extraction and predictive modeling. Finally, we present an application on customer intent prediction from digital clickstream data. Overall, we show that pattern embeddings play an integrator role between semi-structured data and machine learning models, improve the performance of the downstream task and retain interpretability.
[ { "created": "Sun, 23 Jan 2022 05:00:50 GMT", "version": "v1" } ]
2022-01-25
[ [ "Wang", "Xin", "" ], [ "Kadioglu", "Serdar", "" ] ]
We introduce a pattern mining framework that operates on semi-structured datasets and exploits the dichotomy between outcomes. Our approach takes advantage of constraint reasoning to find sequential patterns that occur frequently and exhibit desired properties. This allows the creation of novel pattern embeddings that are useful for knowledge extraction and predictive modeling. Finally, we present an application on customer intent prediction from digital clickstream data. Overall, we show that pattern embeddings play an integrator role between semi-structured data and machine learning models, improve the performance of the downstream task and retain interpretability.
2206.01192
Marcus Hutter
Marcus Hutter and Steven Hansen
Uniqueness and Complexity of Inverse MDP Models
34 pages, 4 fgures
null
null
null
cs.LG cs.CC
http://creativecommons.org/licenses/by/4.0/
What is the action sequence aa'a" that was likely responsible for reaching state s"' (from state s) in 3 steps? Addressing such questions is important in causal reasoning and in reinforcement learning. Inverse "MDP" models p(aa'a"|ss"') can be used to answer them. In the traditional "forward" view, transition "matrix" p(s'|sa) and policy {\pi}(a|s) uniquely determine "everything": the whole dynamics p(as'a's"a"...|s), and with it, the action-conditional state process p(s's"...|saa'a"), the multi-step inverse models p(aa'a"...|ss^i), etc. If the latter is our primary concern, a natural question, analogous to the forward case is to which extent 1-step inverse model p(a|ss') plus policy {\pi}(a|s) determine the multi-step inverse models or even the whole dynamics. In other words, can forward models be inferred from inverse models or even be side-stepped. This work addresses this question and variations thereof, and also whether there are efficient decision/inference algorithms for this.
[ { "created": "Thu, 2 Jun 2022 17:52:10 GMT", "version": "v1" } ]
2022-06-03
[ [ "Hutter", "Marcus", "" ], [ "Hansen", "Steven", "" ] ]
What is the action sequence aa'a" that was likely responsible for reaching state s"' (from state s) in 3 steps? Addressing such questions is important in causal reasoning and in reinforcement learning. Inverse "MDP" models p(aa'a"|ss"') can be used to answer them. In the traditional "forward" view, transition "matrix" p(s'|sa) and policy {\pi}(a|s) uniquely determine "everything": the whole dynamics p(as'a's"a"...|s), and with it, the action-conditional state process p(s's"...|saa'a"), the multi-step inverse models p(aa'a"...|ss^i), etc. If the latter is our primary concern, a natural question, analogous to the forward case is to which extent 1-step inverse model p(a|ss') plus policy {\pi}(a|s) determine the multi-step inverse models or even the whole dynamics. In other words, can forward models be inferred from inverse models or even be side-stepped. This work addresses this question and variations thereof, and also whether there are efficient decision/inference algorithms for this.
1904.05553
Phu Lai
Phu Lai, Qiang He, Mohamed Abdelrazek, Feifei Chen, John Hosking, John Grundy, Yun Yang
Optimal Edge User Allocation in Edge Computing with Variable Sized Vector Bin Packing
null
null
10.1007/978-3-030-03596-9_15
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
In mobile edge computing, edge servers are geographically distributed around base stations placed near end-users to provide highly accessible and efficient computing capacities and services. In the mobile edge computing environment, a service provider can deploy its service on hired edge servers to reduce end-to-end service delays experienced by its end-users allocated to those edge servers. An optimal deployment must maximize the number of allocated end-users and minimize the number of hired edge servers while ensuring the required quality of service for end-users. In this paper, we model the edge user allocation (EUA) problem as a bin packing problem, and introduce a novel, optimal approach to solving the EUA problem based on the Lexicographic Goal Programming technique. We have conducted three series of experiments to evaluate the proposed approach against two representative baseline approaches. Experimental results show that our approach significantly outperforms the other two approaches.
[ { "created": "Thu, 11 Apr 2019 07:06:04 GMT", "version": "v1" } ]
2019-04-12
[ [ "Lai", "Phu", "" ], [ "He", "Qiang", "" ], [ "Abdelrazek", "Mohamed", "" ], [ "Chen", "Feifei", "" ], [ "Hosking", "John", "" ], [ "Grundy", "John", "" ], [ "Yang", "Yun", "" ] ]
In mobile edge computing, edge servers are geographically distributed around base stations placed near end-users to provide highly accessible and efficient computing capacities and services. In the mobile edge computing environment, a service provider can deploy its service on hired edge servers to reduce end-to-end service delays experienced by its end-users allocated to those edge servers. An optimal deployment must maximize the number of allocated end-users and minimize the number of hired edge servers while ensuring the required quality of service for end-users. In this paper, we model the edge user allocation (EUA) problem as a bin packing problem, and introduce a novel, optimal approach to solving the EUA problem based on the Lexicographic Goal Programming technique. We have conducted three series of experiments to evaluate the proposed approach against two representative baseline approaches. Experimental results show that our approach significantly outperforms the other two approaches.
2304.06049
Kevin Chang
Kevin Chang, Nathan Dahlin, Rahul Jain and Pierluigi Nuzzo
Exact and Cost-Effective Automated Transformation of Neural Network Controllers to Decision Tree Controllers
null
null
null
null
cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past decade, neural network (NN)-based controllers have demonstrated remarkable efficacy in a variety of decision-making tasks. However, their black-box nature and the risk of unexpected behaviors and surprising results pose a challenge to their deployment in real-world systems with strong guarantees of correctness and safety. We address these limitations by investigating the transformation of NN-based controllers into equivalent soft decision tree (SDT)-based controllers and its impact on verifiability. Differently from previous approaches, we focus on discrete-output NN controllers including rectified linear unit (ReLU) activation functions as well as argmax operations. We then devise an exact but cost-effective transformation algorithm, in that it can automatically prune redundant branches. We evaluate our approach using two benchmarks from the OpenAI Gym environment. Our results indicate that the SDT transformation can benefit formal verification, showing runtime improvements of up to 21x and 2x for MountainCar-v0 and CartPole-v0, respectively.
[ { "created": "Tue, 11 Apr 2023 19:52:30 GMT", "version": "v1" }, { "created": "Sat, 16 Sep 2023 00:52:18 GMT", "version": "v2" } ]
2023-09-19
[ [ "Chang", "Kevin", "" ], [ "Dahlin", "Nathan", "" ], [ "Jain", "Rahul", "" ], [ "Nuzzo", "Pierluigi", "" ] ]
Over the past decade, neural network (NN)-based controllers have demonstrated remarkable efficacy in a variety of decision-making tasks. However, their black-box nature and the risk of unexpected behaviors and surprising results pose a challenge to their deployment in real-world systems with strong guarantees of correctness and safety. We address these limitations by investigating the transformation of NN-based controllers into equivalent soft decision tree (SDT)-based controllers and its impact on verifiability. Differently from previous approaches, we focus on discrete-output NN controllers including rectified linear unit (ReLU) activation functions as well as argmax operations. We then devise an exact but cost-effective transformation algorithm, in that it can automatically prune redundant branches. We evaluate our approach using two benchmarks from the OpenAI Gym environment. Our results indicate that the SDT transformation can benefit formal verification, showing runtime improvements of up to 21x and 2x for MountainCar-v0 and CartPole-v0, respectively.
2004.03868
Dieuwke Hupkes
Diana Rodr\'iguez Luna, Edoardo Maria Ponti, Dieuwke Hupkes, Elia Bruni
Internal and external pressures on language emergence: least effort, object constancy and frequency
Accepted for EMNLP-findings
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In previous work, artificial agents were shown to achieve almost perfect accuracy in referential games where they have to communicate to identify images. Nevertheless, the resulting communication protocols rarely display salient features of natural languages, such as compositionality. In this paper, we propose some realistic sources of pressure on communication that avert this outcome. More specifically, we formalise the principle of least effort through an auxiliary objective. Moreover, we explore several game variants, inspired by the principle of object constancy, in which we alter the frequency, position, and luminosity of the objects in the images. We perform an extensive analysis on their effect through compositionality metrics, diagnostic classifiers, and zero-shot evaluation. Our findings reveal that the proposed sources of pressure result in emerging languages with less redundancy, more focus on high-level conceptual information, and better abilities of generalisation. Overall, our contributions reduce the gap between emergent and natural languages.
[ { "created": "Wed, 8 Apr 2020 08:12:41 GMT", "version": "v1" }, { "created": "Fri, 24 Jul 2020 17:48:06 GMT", "version": "v2" }, { "created": "Tue, 13 Oct 2020 09:29:44 GMT", "version": "v3" } ]
2020-10-14
[ [ "Luna", "Diana Rodríguez", "" ], [ "Ponti", "Edoardo Maria", "" ], [ "Hupkes", "Dieuwke", "" ], [ "Bruni", "Elia", "" ] ]
In previous work, artificial agents were shown to achieve almost perfect accuracy in referential games where they have to communicate to identify images. Nevertheless, the resulting communication protocols rarely display salient features of natural languages, such as compositionality. In this paper, we propose some realistic sources of pressure on communication that avert this outcome. More specifically, we formalise the principle of least effort through an auxiliary objective. Moreover, we explore several game variants, inspired by the principle of object constancy, in which we alter the frequency, position, and luminosity of the objects in the images. We perform an extensive analysis on their effect through compositionality metrics, diagnostic classifiers, and zero-shot evaluation. Our findings reveal that the proposed sources of pressure result in emerging languages with less redundancy, more focus on high-level conceptual information, and better abilities of generalisation. Overall, our contributions reduce the gap between emergent and natural languages.
1306.2347
Sivan Sabato
Sivan Sabato and Anand D. Sarwate and Nathan Srebro
Auditing: Active Learning with Outcome-Dependent Query Costs
Corrections in section 5
Neural Information Processing Systems 26 (NIPS), 512-520, 2013
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels. Our motivation are applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and rectangles), and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We also discuss a general competitive approach for auditing and possible modifications to the framework.
[ { "created": "Mon, 10 Jun 2013 20:18:48 GMT", "version": "v1" }, { "created": "Fri, 27 Sep 2013 17:57:33 GMT", "version": "v2" }, { "created": "Tue, 15 Oct 2013 18:27:07 GMT", "version": "v3" }, { "created": "Sun, 12 Jul 2015 10:11:57 GMT", "version": "v4" } ]
2015-07-14
[ [ "Sabato", "Sivan", "" ], [ "Sarwate", "Anand D.", "" ], [ "Srebro", "Nathan", "" ] ]
We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels. Our motivation are applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and rectangles), and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We also discuss a general competitive approach for auditing and possible modifications to the framework.
1409.7286
Vinay Vaishampayan
Antonio Campello and Vinay A. Vaishampayan
Reliability of Erasure Coded Storage Systems: A Geometric Approach
28 pages. 8 figures. Presented in part at IEEE International Conference on BigData 2013, Santa Clara, CA, Oct. 2013 and to be presented in part at 2014 IEEE Information Theory Workshop, Tasmania, Australia, Nov. 2014. New analysis added May 2015. Further Update Aug. 2015
null
10.1109/TIT.2015.2477401
null
cs.DC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the probability of data loss, or equivalently, the reliability function for an erasure coded distributed data storage system under worst case conditions. Data loss in an erasure coded system depends on probability distributions for the disk repair duration and the disk failure duration. In previous works, the data loss probability of such systems has been studied under the assumption of exponentially distributed disk failure and disk repair durations, using well-known analytic methods from the theory of Markov processes. These methods lead to an estimate of the integral of the reliability function. Here, we address the problem of directly calculating the data loss probability for general repair and failure duration distributions. A closed limiting form is developed for the probability of data loss and it is shown that the probability of the event that a repair duration exceeds a failure duration is sufficient for characterizing the data loss probability. For the case of constant repair duration, we develop an expression for the conditional data loss probability given the number of failures experienced by a each node in a given time window. We do so by developing a geometric approach that relies on the computation of volumes of a family of polytopes that are related to the code. An exact calculation is provided and an upper bound on the data loss probability is obtained by posing the problem as a set avoidance problem. Theoretical calculations are compared to simulation results.
[ { "created": "Thu, 25 Sep 2014 15:12:15 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2015 12:50:56 GMT", "version": "v2" }, { "created": "Thu, 21 May 2015 21:47:39 GMT", "version": "v3" }, { "created": "Thu, 20 Aug 2015 03:37:26 GMT", "version": "v4" } ]
2016-11-18
[ [ "Campello", "Antonio", "" ], [ "Vaishampayan", "Vinay A.", "" ] ]
We consider the probability of data loss, or equivalently, the reliability function for an erasure coded distributed data storage system under worst case conditions. Data loss in an erasure coded system depends on probability distributions for the disk repair duration and the disk failure duration. In previous works, the data loss probability of such systems has been studied under the assumption of exponentially distributed disk failure and disk repair durations, using well-known analytic methods from the theory of Markov processes. These methods lead to an estimate of the integral of the reliability function. Here, we address the problem of directly calculating the data loss probability for general repair and failure duration distributions. A closed limiting form is developed for the probability of data loss and it is shown that the probability of the event that a repair duration exceeds a failure duration is sufficient for characterizing the data loss probability. For the case of constant repair duration, we develop an expression for the conditional data loss probability given the number of failures experienced by a each node in a given time window. We do so by developing a geometric approach that relies on the computation of volumes of a family of polytopes that are related to the code. An exact calculation is provided and an upper bound on the data loss probability is obtained by posing the problem as a set avoidance problem. Theoretical calculations are compared to simulation results.
cs/0504004
Henning Bostelmann
Henning Bostelmann
Statistical analysis of quality measures for mobile ad hoc networks
Master's thesis; 78 pages, 10 figures
null
null
null
cs.NI cs.DM
null
How can the quality of a mobile ad hoc network (MANET) be quantified? This work aims at an answer based on the lower network layers, i.e. on connectivity between the wireless nodes, using statistical methods. A number of different quality measures are introduced and classified according to their scaling behaviour. They are analysed in a statistical model of a 1-dimensional MANET system (corresponding e.g. to cars on a road). Neglecting boundary effects, the model turns out to be exactly solvable, so that explicit analytical results for the quality levels can be obtained both at fixed system size and in the limit of large systems. In particular, this improves estimates known in the literature for the probability of connectedness of 1-dimensional MANETs.
[ { "created": "Sat, 2 Apr 2005 23:19:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bostelmann", "Henning", "" ] ]
How can the quality of a mobile ad hoc network (MANET) be quantified? This work aims at an answer based on the lower network layers, i.e. on connectivity between the wireless nodes, using statistical methods. A number of different quality measures are introduced and classified according to their scaling behaviour. They are analysed in a statistical model of a 1-dimensional MANET system (corresponding e.g. to cars on a road). Neglecting boundary effects, the model turns out to be exactly solvable, so that explicit analytical results for the quality levels can be obtained both at fixed system size and in the limit of large systems. In particular, this improves estimates known in the literature for the probability of connectedness of 1-dimensional MANETs.
2205.00718
Hermann Kroll
Hermann Kroll and Florian Pl\"otzky and Jan Pirklbauer and Wolf-Tilo Balke
What a Publication Tells You -- Benefits of Narrative Information Access in Digital Libraries
Accepted at JCDL2022, 8 pages, 1 figure
null
10.1145/3529372.3530928
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
Knowledge bases allow effective access paths in digital libraries. Here users can specify their information need as graph patterns for precise searches and structured overviews (by allowing variables in queries). But especially when considering textual sources that contain narrative information, i.e., short stories of interest, harvesting statements from them to construct knowledge bases may be a serious threat to the statements' validity. A piece of information, originally stated in a coherent line of arguments, could be used in a knowledge base query processing without considering its vital context conditions. And this can lead to invalid results. That is why we argue to move towards narrative information access by considering contexts in the query processing step. In this way digital libraries can allow users to query for narrative information and supply them with valid answers. In this paper we define narrative information access, demonstrate its benefits for Covid 19 related questions, and argue on the generalizability for other domains such as political sciences.
[ { "created": "Mon, 2 May 2022 08:14:40 GMT", "version": "v1" }, { "created": "Tue, 3 May 2022 05:40:01 GMT", "version": "v2" }, { "created": "Wed, 4 May 2022 05:59:06 GMT", "version": "v3" } ]
2022-05-05
[ [ "Kroll", "Hermann", "" ], [ "Plötzky", "Florian", "" ], [ "Pirklbauer", "Jan", "" ], [ "Balke", "Wolf-Tilo", "" ] ]
Knowledge bases allow effective access paths in digital libraries. Here users can specify their information need as graph patterns for precise searches and structured overviews (by allowing variables in queries). But especially when considering textual sources that contain narrative information, i.e., short stories of interest, harvesting statements from them to construct knowledge bases may be a serious threat to the statements' validity. A piece of information, originally stated in a coherent line of arguments, could be used in a knowledge base query processing without considering its vital context conditions. And this can lead to invalid results. That is why we argue to move towards narrative information access by considering contexts in the query processing step. In this way digital libraries can allow users to query for narrative information and supply them with valid answers. In this paper we define narrative information access, demonstrate its benefits for Covid 19 related questions, and argue on the generalizability for other domains such as political sciences.
2102.07981
Mingbao Lin
Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Fei Chao, Chia-Wen Lin, Ling Shao
SiMaN: Sign-to-Magnitude Network Binarization
Accepted by IEEE TPAMI, 2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binary neural networks (BNNs) have attracted broad research interest due to their efficient storage and computational ability. Nevertheless, a significant challenge of BNNs lies in handling discrete constraints while ensuring bit entropy maximization, which typically makes their weight optimization very difficult. Existing methods relax the learning using the sign function, which simply encodes positive weights into +1s, and -1s otherwise. Alternatively, we formulate an angle alignment objective to constrain the weight binarization to {0,+1} to solve the challenge. In this paper, we show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise. Therefore, a high-quality discrete solution is established in a computationally efficient manner without the sign function. We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization, and further demonstrate that it can be effectively solved by simply removing the $\ell_2$ regularization during network training. Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet, demonstrating its superiority over the sign-based state-of-the-arts. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/SiMaN.
[ { "created": "Tue, 16 Feb 2021 07:03:51 GMT", "version": "v1" }, { "created": "Wed, 24 Mar 2021 12:51:21 GMT", "version": "v2" }, { "created": "Tue, 4 Oct 2022 17:24:54 GMT", "version": "v3" } ]
2022-10-05
[ [ "Lin", "Mingbao", "" ], [ "Ji", "Rongrong", "" ], [ "Xu", "Zihan", "" ], [ "Zhang", "Baochang", "" ], [ "Chao", "Fei", "" ], [ "Lin", "Chia-Wen", "" ], [ "Shao", "Ling", "" ] ]
Binary neural networks (BNNs) have attracted broad research interest due to their efficient storage and computational ability. Nevertheless, a significant challenge of BNNs lies in handling discrete constraints while ensuring bit entropy maximization, which typically makes their weight optimization very difficult. Existing methods relax the learning using the sign function, which simply encodes positive weights into +1s, and -1s otherwise. Alternatively, we formulate an angle alignment objective to constrain the weight binarization to {0,+1} to solve the challenge. In this paper, we show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise. Therefore, a high-quality discrete solution is established in a computationally efficient manner without the sign function. We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization, and further demonstrate that it can be effectively solved by simply removing the $\ell_2$ regularization during network training. Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet, demonstrating its superiority over the sign-based state-of-the-arts. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/SiMaN.
2205.06226
Zixin Wen
Zixin Wen, Yuanzhi Li
The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
88 pages, comments welcome
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Recently the surprising discovery of the Bootstrap Your Own Latent (BYOL) method by Grill et al. shows the negative term in contrastive loss can be removed if we add the so-called prediction head to the network. This initiated the research of non-contrastive self-supervised learning. It is mysterious why even when there exist trivial collapsed global optimal solutions, neural networks trained by (stochastic) gradient descent can still learn competitive representations. This phenomenon is a typical example of implicit bias in deep learning and remains little understood. In this work, we present our empirical and theoretical discoveries on non-contrastive self-supervised learning. Empirically, we find that when the prediction head is initialized as an identity matrix with only its off-diagonal entries being trainable, the network can learn competitive representations even though the trivial optima still exist in the training objective. Theoretically, we present a framework to understand the behavior of the trainable, but identity-initialized prediction head. Under a simple setting, we characterized the substitution effect and acceleration effect of the prediction head. The substitution effect happens when learning the stronger features in some neurons can substitute for learning these features in other neurons through updating the prediction head. And the acceleration effect happens when the substituted features can accelerate the learning of other weaker features to prevent them from being ignored. These two effects enable the neural networks to learn all the features rather than focus only on learning the stronger features, which is likely the cause of the dimensional collapse phenomenon. To the best of our knowledge, this is also the first end-to-end optimization guarantee for non-contrastive methods using nonlinear neural networks with a trainable prediction head and normalization.
[ { "created": "Thu, 12 May 2022 17:15:53 GMT", "version": "v1" }, { "created": "Sat, 14 May 2022 05:04:42 GMT", "version": "v2" }, { "created": "Sun, 15 Jan 2023 23:51:28 GMT", "version": "v3" } ]
2023-01-18
[ [ "Wen", "Zixin", "" ], [ "Li", "Yuanzhi", "" ] ]
Recently the surprising discovery of the Bootstrap Your Own Latent (BYOL) method by Grill et al. shows the negative term in contrastive loss can be removed if we add the so-called prediction head to the network. This initiated the research of non-contrastive self-supervised learning. It is mysterious why even when there exist trivial collapsed global optimal solutions, neural networks trained by (stochastic) gradient descent can still learn competitive representations. This phenomenon is a typical example of implicit bias in deep learning and remains little understood. In this work, we present our empirical and theoretical discoveries on non-contrastive self-supervised learning. Empirically, we find that when the prediction head is initialized as an identity matrix with only its off-diagonal entries being trainable, the network can learn competitive representations even though the trivial optima still exist in the training objective. Theoretically, we present a framework to understand the behavior of the trainable, but identity-initialized prediction head. Under a simple setting, we characterized the substitution effect and acceleration effect of the prediction head. The substitution effect happens when learning the stronger features in some neurons can substitute for learning these features in other neurons through updating the prediction head. And the acceleration effect happens when the substituted features can accelerate the learning of other weaker features to prevent them from being ignored. These two effects enable the neural networks to learn all the features rather than focus only on learning the stronger features, which is likely the cause of the dimensional collapse phenomenon. To the best of our knowledge, this is also the first end-to-end optimization guarantee for non-contrastive methods using nonlinear neural networks with a trainable prediction head and normalization.
2104.13092
Bin Cao
Mingrui Cao, Long Zhang, Bin Cao
Towards On-Device Federated Learning: A Direct Acyclic Graph-based Blockchain Approach
null
null
null
null
cs.DC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the distributed characteristics of Federated Learning (FL), the vulnerability of global model and coordination of devices are the main obstacle. As a promising solution of decentralization, scalability and security, leveraging blockchain in FL has attracted much attention in recent years. However, the traditional consensus mechanisms designed for blockchain like Proof of Work (PoW) would cause extreme resource consumption, which reduces the efficiency of FL greatly, especially when the participating devices are wireless and resource-limited. In order to address device asynchrony and anomaly detection in FL while avoiding the extra resource consumption caused by blockchain, this paper introduces a framework for empowering FL using Direct Acyclic Graph (DAG)-based blockchain systematically (DAG-FL). Accordingly, DAG-FL is first introduced from a three-layer architecture in details, and then two algorithms DAG-FL Controlling and DAG-FL Updating are designed running on different nodes to elaborate the operation of DAG-FL consensus mechanism. After that, a Poisson process model is formulated to discuss that how to set deployment parameters to maintain DAG-FL stably in different federated learning tasks. The extensive simulations and experiments show that DAG-FL can achieve better performance in terms of training efficiency and model accuracy compared with the typical existing on-device federated learning systems as the benchmarks.
[ { "created": "Tue, 27 Apr 2021 10:29:38 GMT", "version": "v1" } ]
2021-04-28
[ [ "Cao", "Mingrui", "" ], [ "Zhang", "Long", "" ], [ "Cao", "Bin", "" ] ]
Due to the distributed characteristics of Federated Learning (FL), the vulnerability of global model and coordination of devices are the main obstacle. As a promising solution of decentralization, scalability and security, leveraging blockchain in FL has attracted much attention in recent years. However, the traditional consensus mechanisms designed for blockchain like Proof of Work (PoW) would cause extreme resource consumption, which reduces the efficiency of FL greatly, especially when the participating devices are wireless and resource-limited. In order to address device asynchrony and anomaly detection in FL while avoiding the extra resource consumption caused by blockchain, this paper introduces a framework for empowering FL using Direct Acyclic Graph (DAG)-based blockchain systematically (DAG-FL). Accordingly, DAG-FL is first introduced from a three-layer architecture in details, and then two algorithms DAG-FL Controlling and DAG-FL Updating are designed running on different nodes to elaborate the operation of DAG-FL consensus mechanism. After that, a Poisson process model is formulated to discuss that how to set deployment parameters to maintain DAG-FL stably in different federated learning tasks. The extensive simulations and experiments show that DAG-FL can achieve better performance in terms of training efficiency and model accuracy compared with the typical existing on-device federated learning systems as the benchmarks.
2301.04101
Matthew Wallingford
Matthew Wallingford, Aditya Kusupati, Alex Fang, Vivek Ramanujan, Aniruddha Kembhavi, Roozbeh Mottaghi, Ali Farhadi
Neural Radiance Field Codebooks
19 pages, 8 figures, 9 tables
International Conference on Learning Representations 2023
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Compositional representations of the world are a promising step towards enabling high-level scene understanding and efficient transfer to downstream tasks. Learning such representations for complex scenes and tasks remains an open challenge. Towards this goal, we introduce Neural Radiance Field Codebooks (NRC), a scalable method for learning object-centric representations through novel view reconstruction. NRC learns to reconstruct scenes from novel views using a dictionary of object codes which are decoded through a volumetric renderer. This enables the discovery of reoccurring visual and geometric patterns across scenes which are transferable to downstream tasks. We show that NRC representations transfer well to object navigation in THOR, outperforming 2D and 3D representation learning methods by 3.1% success rate. We demonstrate that our approach is able to perform unsupervised segmentation for more complex synthetic (THOR) and real scenes (NYU Depth) better than prior methods (29% relative improvement). Finally, we show that NRC improves on the task of depth ordering by 5.5% accuracy in THOR.
[ { "created": "Tue, 10 Jan 2023 18:03:48 GMT", "version": "v1" }, { "created": "Sun, 30 Apr 2023 09:25:38 GMT", "version": "v2" } ]
2023-05-02
[ [ "Wallingford", "Matthew", "" ], [ "Kusupati", "Aditya", "" ], [ "Fang", "Alex", "" ], [ "Ramanujan", "Vivek", "" ], [ "Kembhavi", "Aniruddha", "" ], [ "Mottaghi", "Roozbeh", "" ], [ "Farhadi", "Ali", "" ] ]
Compositional representations of the world are a promising step towards enabling high-level scene understanding and efficient transfer to downstream tasks. Learning such representations for complex scenes and tasks remains an open challenge. Towards this goal, we introduce Neural Radiance Field Codebooks (NRC), a scalable method for learning object-centric representations through novel view reconstruction. NRC learns to reconstruct scenes from novel views using a dictionary of object codes which are decoded through a volumetric renderer. This enables the discovery of reoccurring visual and geometric patterns across scenes which are transferable to downstream tasks. We show that NRC representations transfer well to object navigation in THOR, outperforming 2D and 3D representation learning methods by 3.1% success rate. We demonstrate that our approach is able to perform unsupervised segmentation for more complex synthetic (THOR) and real scenes (NYU Depth) better than prior methods (29% relative improvement). Finally, we show that NRC improves on the task of depth ordering by 5.5% accuracy in THOR.
1008.0823
Adrian Paschke
Adrian Paschke, Alexander Kozlenkov, Harold Boley
A Homogeneous Reaction Rule Language for Complex Event Processing
In Proc. 2nd International Workshop on Event Drive Architecture and Event Processing Systems (EDA-PS 2007) at VLDB 2007
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event-driven automation of reactive functionalities for complex event processing is an urgent need in today's distributed service-oriented architectures and Web-based event-driven environments. An important problem to be addressed is how to correctly and efficiently capture and process the event-based behavioral, reactive logic embodied in reaction rules, and combining this with other conditional decision logic embodied, e.g., in derivation rules. This paper elaborates a homogeneous integration approach that combines derivation rules, reaction rules and other rule types such as integrity constraints into the general framework of logic programming, the industrial-strength version of declarative programming. We describe syntax and semantics of the language, implement a distributed web-based middleware using enterprise service technologies and illustrate its adequacy in terms of expressiveness, efficiency and scalability through examples extracted from industrial use cases. The developed reaction rule language provides expressive features such as modular ID-based updates with support for external imports and self-updates of the intensional and extensional knowledge bases, transactions including integrity testing and roll-backs of update transition paths. It also supports distributed complex event processing, event messaging and event querying via efficient and scalable enterprise middleware technologies and event/action reasoning based on an event/action algebra implemented by an interval-based event calculus variant as a logic inference formalism.
[ { "created": "Wed, 4 Aug 2010 17:05:33 GMT", "version": "v1" } ]
2010-08-05
[ [ "Paschke", "Adrian", "" ], [ "Kozlenkov", "Alexander", "" ], [ "Boley", "Harold", "" ] ]
Event-driven automation of reactive functionalities for complex event processing is an urgent need in today's distributed service-oriented architectures and Web-based event-driven environments. An important problem to be addressed is how to correctly and efficiently capture and process the event-based behavioral, reactive logic embodied in reaction rules, and combining this with other conditional decision logic embodied, e.g., in derivation rules. This paper elaborates a homogeneous integration approach that combines derivation rules, reaction rules and other rule types such as integrity constraints into the general framework of logic programming, the industrial-strength version of declarative programming. We describe syntax and semantics of the language, implement a distributed web-based middleware using enterprise service technologies and illustrate its adequacy in terms of expressiveness, efficiency and scalability through examples extracted from industrial use cases. The developed reaction rule language provides expressive features such as modular ID-based updates with support for external imports and self-updates of the intensional and extensional knowledge bases, transactions including integrity testing and roll-backs of update transition paths. It also supports distributed complex event processing, event messaging and event querying via efficient and scalable enterprise middleware technologies and event/action reasoning based on an event/action algebra implemented by an interval-based event calculus variant as a logic inference formalism.
1607.00964
Koosha Pourtahmasi Roshandeh
Koosha Pourtahmasi Roshandeh, Masoud Ardakani and Chintha Tellambura
A general framework for weighted sum-rate and common-rate optimization
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a framework for solving a class of optimization problems encountered in a range of power allocation problems in wireless relay networks. In particular, power allocation for weighted sum-rate and common-rate optimization problems fall in this framework. Subject to some conditions on the region of feasible powers, the optimal solutions are analytically found. The optimization problems are posed in a general form and their solutions are shown to have applications in a number of practical scenarios. Numerical results verify the optimality of the analytical approach.
[ { "created": "Mon, 4 Jul 2016 17:12:35 GMT", "version": "v1" } ]
2016-07-05
[ [ "Roshandeh", "Koosha Pourtahmasi", "" ], [ "Ardakani", "Masoud", "" ], [ "Tellambura", "Chintha", "" ] ]
In this paper, we propose a framework for solving a class of optimization problems encountered in a range of power allocation problems in wireless relay networks. In particular, power allocation for weighted sum-rate and common-rate optimization problems fall in this framework. Subject to some conditions on the region of feasible powers, the optimal solutions are analytically found. The optimization problems are posed in a general form and their solutions are shown to have applications in a number of practical scenarios. Numerical results verify the optimality of the analytical approach.
2408.04725
Saleh Darzi
Saleh Darzi, Attila A. Yavuz
Counter Denial of Service for Next-Generation Networks within the Artificial Intelligence and Post-Quantum Era
10 Pages, 1 Figure, 2 Tables
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Given the rise in cyber threats to networked systems, coupled with the proliferation of AI techniques and enhanced processing capabilities, Denial of Service (DoS) attacks are becoming increasingly sophisticated and easily executable. They target system availability, compromising entire systems without breaking underlying security protocols. Consequently, numerous studies have focused on preventing, detecting, and mitigating DoS attacks. However, state-of-the-art systematization efforts have limitations such as isolated DoS countermeasures, shortcomings of AI-based studies, and a lack of DoS integration features like privacy, anonymity, authentication, and transparency. Additionally, the emergence of quantum computers is a game changer for DoS from attack and defense perspectives, yet it has remained largely unexplored. This study aims to address these gaps by examining (counter)-DoS in the AI era while also considering post-quantum (PQ) security when it applies. We highlight the deficiencies in the current literature and provide insights into synergistic techniques to bridge these gaps. We explore AI mechanisms for DoS intrusion detection, evaluate cybersecurity properties in cutting-edge machine learning models, and analyze weaponized AI in the context of DoS. We also investigate collaborative and distributed counter-DoS frameworks via federated learning and blockchains. Finally, we assess proactive approaches such as honeypots, puzzles, and authentication schemes that can be integrated into next-generation network systems for DoS prevention and mitigation.
[ { "created": "Thu, 8 Aug 2024 18:47:31 GMT", "version": "v1" } ]
2024-08-12
[ [ "Darzi", "Saleh", "" ], [ "Yavuz", "Attila A.", "" ] ]
Given the rise in cyber threats to networked systems, coupled with the proliferation of AI techniques and enhanced processing capabilities, Denial of Service (DoS) attacks are becoming increasingly sophisticated and easily executable. They target system availability, compromising entire systems without breaking underlying security protocols. Consequently, numerous studies have focused on preventing, detecting, and mitigating DoS attacks. However, state-of-the-art systematization efforts have limitations such as isolated DoS countermeasures, shortcomings of AI-based studies, and a lack of DoS integration features like privacy, anonymity, authentication, and transparency. Additionally, the emergence of quantum computers is a game changer for DoS from attack and defense perspectives, yet it has remained largely unexplored. This study aims to address these gaps by examining (counter)-DoS in the AI era while also considering post-quantum (PQ) security when it applies. We highlight the deficiencies in the current literature and provide insights into synergistic techniques to bridge these gaps. We explore AI mechanisms for DoS intrusion detection, evaluate cybersecurity properties in cutting-edge machine learning models, and analyze weaponized AI in the context of DoS. We also investigate collaborative and distributed counter-DoS frameworks via federated learning and blockchains. Finally, we assess proactive approaches such as honeypots, puzzles, and authentication schemes that can be integrated into next-generation network systems for DoS prevention and mitigation.
2105.02827
Eklavya Sharma
Arindam Khan, Eklavya Sharma
Tight Approximation Algorithms for Geometric Bin Packing with Skewed Items
null
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Two-dimensional Bin Packing (2BP) problem, we are given a set of rectangles of height and width at most one and our goal is to find an axis-aligned nonoverlapping packing of these rectangles into the minimum number of unit square bins. The problem admits no APTAS and the current best approximation ratio is $1.406$ by Bansal and Khan [SODA'14]. A well-studied variant of the problem is Guillotine Two-dimensional Bin Packing (G2BP), where all rectangles must be packed in such a way that every rectangle in the packing can be obtained by recursively applying a sequence of end-to-end axis-parallel cuts, also called guillotine cuts. Bansal, Lodi, and Sviridenko [FOCS'05] obtained an APTAS for this problem. Let $\lambda$ be the smallest constant such that for every set $I$ of items, the number of bins in the optimal solution to G2BP for $I$ is upper bounded by $\lambda\operatorname{opt}(I) + c$, where $\operatorname{opt}(I)$ is the number of bins in the optimal solution to 2BP for $I$ and $c$ is a constant. It is known that $4/3 \le \lambda \le 1.692$. Bansal and Khan [SODA'14] conjectured that $\lambda = 4/3$. The conjecture, if true, will imply a $(4/3+\varepsilon)$-approximation algorithm for 2BP. According to convention, for a given constant $\delta>0$, a rectangle is large if both its height and width are at least $\delta$, and otherwise it is called skewed. We make progress towards the conjecture by showing $\lambda = 4/3$ for skewed instance, i.e., when all input rectangles are skewed. Even for this case, the previous best upper bound on $\lambda$ was roughly 1.692. We also give an APTAS for 2BP for skewed instance, though general 2BP does not admit an APTAS.
[ { "created": "Thu, 6 May 2021 17:16:11 GMT", "version": "v1" } ]
2021-05-07
[ [ "Khan", "Arindam", "" ], [ "Sharma", "Eklavya", "" ] ]
In the Two-dimensional Bin Packing (2BP) problem, we are given a set of rectangles of height and width at most one and our goal is to find an axis-aligned nonoverlapping packing of these rectangles into the minimum number of unit square bins. The problem admits no APTAS and the current best approximation ratio is $1.406$ by Bansal and Khan [SODA'14]. A well-studied variant of the problem is Guillotine Two-dimensional Bin Packing (G2BP), where all rectangles must be packed in such a way that every rectangle in the packing can be obtained by recursively applying a sequence of end-to-end axis-parallel cuts, also called guillotine cuts. Bansal, Lodi, and Sviridenko [FOCS'05] obtained an APTAS for this problem. Let $\lambda$ be the smallest constant such that for every set $I$ of items, the number of bins in the optimal solution to G2BP for $I$ is upper bounded by $\lambda\operatorname{opt}(I) + c$, where $\operatorname{opt}(I)$ is the number of bins in the optimal solution to 2BP for $I$ and $c$ is a constant. It is known that $4/3 \le \lambda \le 1.692$. Bansal and Khan [SODA'14] conjectured that $\lambda = 4/3$. The conjecture, if true, will imply a $(4/3+\varepsilon)$-approximation algorithm for 2BP. According to convention, for a given constant $\delta>0$, a rectangle is large if both its height and width are at least $\delta$, and otherwise it is called skewed. We make progress towards the conjecture by showing $\lambda = 4/3$ for skewed instance, i.e., when all input rectangles are skewed. Even for this case, the previous best upper bound on $\lambda$ was roughly 1.692. We also give an APTAS for 2BP for skewed instance, though general 2BP does not admit an APTAS.
2009.07265
Kelvin C.K. Chan
Kelvin C.K. Chan, Xintao Wang, Ke Yu, Chao Dong, Chen Change Loy
Understanding Deformable Alignment in Video Super-Resolution
Tech report, 15 pages, 19 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deformable convolution, originally proposed for the adaptation to geometric variations of objects, has recently shown compelling performance in aligning multiple frames and is increasingly adopted for video super-resolution. Despite its remarkable performance, its underlying mechanism for alignment remains unclear. In this study, we carefully investigate the relation between deformable alignment and the classic flow-based alignment. We show that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. We further demonstrate through experiments that the increased diversity in deformable alignment yields better-aligned features, and hence significantly improves the quality of video super-resolution output. Based on our observations, we propose an offset-fidelity loss that guides the offset learning with optical flow. Experiments show that our loss successfully avoids the overflow of offsets and alleviates the instability problem of deformable alignment. Aside from the contributions to deformable alignment, our formulation inspires a more flexible approach to introduce offset diversity to flow-based alignment, improving its performance.
[ { "created": "Tue, 15 Sep 2020 17:55:06 GMT", "version": "v1" } ]
2020-09-16
[ [ "Chan", "Kelvin C. K.", "" ], [ "Wang", "Xintao", "" ], [ "Yu", "Ke", "" ], [ "Dong", "Chao", "" ], [ "Loy", "Chen Change", "" ] ]
Deformable convolution, originally proposed for the adaptation to geometric variations of objects, has recently shown compelling performance in aligning multiple frames and is increasingly adopted for video super-resolution. Despite its remarkable performance, its underlying mechanism for alignment remains unclear. In this study, we carefully investigate the relation between deformable alignment and the classic flow-based alignment. We show that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. We further demonstrate through experiments that the increased diversity in deformable alignment yields better-aligned features, and hence significantly improves the quality of video super-resolution output. Based on our observations, we propose an offset-fidelity loss that guides the offset learning with optical flow. Experiments show that our loss successfully avoids the overflow of offsets and alleviates the instability problem of deformable alignment. Aside from the contributions to deformable alignment, our formulation inspires a more flexible approach to introduce offset diversity to flow-based alignment, improving its performance.
1208.4138
Zahoor Khan
Ashraf Mohammed Iqbal, Abidalrahman Moh'd, Zahoor Khan
Semi-supervised Clustering Ensemble by Voting
The International Conference on Information and Communication Systems (ICICS 2009), Amman, Jordan
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering ensemble is one of the most recent advances in unsupervised learning. It aims to combine the clustering results obtained using different algorithms or from different runs of the same clustering algorithm for the same data set, this is accomplished using on a consensus function, the efficiency and accuracy of this method has been proven in many works in literature. In the first part of this paper we make a comparison among current approaches to clustering ensemble in literature. All of these approaches consist of two main steps: the ensemble generation and consensus function. In the second part of the paper, we suggest engaging supervision in the clustering ensemble procedure to get more enhancements on the clustering results. Supervision can be applied in two places: either by using semi-supervised algorithms in the clustering ensemble generation step or in the form of a feedback used by the consensus function stage. Also, we introduce a flexible two parameter weighting mechanism, the first parameter describes the compatibility between the datasets under study and the semi-supervised clustering algorithms used to generate the base partitions, the second parameter is used to provide the user feedback on the these partitions. The two parameters are engaged in a "relabeling and voting" based consensus function to produce the final clustering.
[ { "created": "Mon, 20 Aug 2012 23:21:10 GMT", "version": "v1" } ]
2012-08-22
[ [ "Iqbal", "Ashraf Mohammed", "" ], [ "Moh'd", "Abidalrahman", "" ], [ "Khan", "Zahoor", "" ] ]
Clustering ensemble is one of the most recent advances in unsupervised learning. It aims to combine the clustering results obtained using different algorithms or from different runs of the same clustering algorithm for the same data set, this is accomplished using on a consensus function, the efficiency and accuracy of this method has been proven in many works in literature. In the first part of this paper we make a comparison among current approaches to clustering ensemble in literature. All of these approaches consist of two main steps: the ensemble generation and consensus function. In the second part of the paper, we suggest engaging supervision in the clustering ensemble procedure to get more enhancements on the clustering results. Supervision can be applied in two places: either by using semi-supervised algorithms in the clustering ensemble generation step or in the form of a feedback used by the consensus function stage. Also, we introduce a flexible two parameter weighting mechanism, the first parameter describes the compatibility between the datasets under study and the semi-supervised clustering algorithms used to generate the base partitions, the second parameter is used to provide the user feedback on the these partitions. The two parameters are engaged in a "relabeling and voting" based consensus function to produce the final clustering.
2102.11394
Jan Achterhold
Jan Achterhold and Joerg Stueckler
Explore the Context: Optimal Data Collection for Context-Conditional Dynamics Models
Accepted for publication at the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021, with supplementary material
null
null
null
cs.LG cs.RO cs.SY eess.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we learn dynamics models for parametrized families of dynamical systems with varying properties. The dynamics models are formulated as stochastic processes conditioned on a latent context variable which is inferred from observed transitions of the respective system. The probabilistic formulation allows us to compute an action sequence which, for a limited number of environment interactions, optimally explores the given system within the parametrized family. This is achieved by steering the system through transitions being most informative for the context variable. We demonstrate the effectiveness of our method for exploration on a non-linear toy-problem and two well-known reinforcement learning environments.
[ { "created": "Mon, 22 Feb 2021 22:52:39 GMT", "version": "v1" } ]
2021-02-24
[ [ "Achterhold", "Jan", "" ], [ "Stueckler", "Joerg", "" ] ]
In this paper, we learn dynamics models for parametrized families of dynamical systems with varying properties. The dynamics models are formulated as stochastic processes conditioned on a latent context variable which is inferred from observed transitions of the respective system. The probabilistic formulation allows us to compute an action sequence which, for a limited number of environment interactions, optimally explores the given system within the parametrized family. This is achieved by steering the system through transitions being most informative for the context variable. We demonstrate the effectiveness of our method for exploration on a non-linear toy-problem and two well-known reinforcement learning environments.
1002.1167
Vishal Goyal
A. K. Ojha, A. K. Das
Geometric Programming Problem with Co-Efficients and Exponents Associated with Binary Numbers
International Journal of Computer Science Issues, IJCSI, Vol. 7, Issue 1, No. 2, January 2010, http://ijcsi.org/articles/Geometric-Programming-Problem-with-Co-Efficients-and-Exponents-Associated-with-Binary-Numbers.php
International Journal of Computer Science Issues, IJCSI, Vol. 7, Issue 1, No. 2, January 2010, http://ijcsi.org/articles/Geometric-Programming-Problem-with-Co-Efficients-and-Exponents-Associated-with-Binary-Numbers.php
null
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometric programming (GP) provides a power tool for solving a variety of optimization problems. In the real world, many applications of geometric programming (GP) are engineering design problems in which some of the problem parameters are estimating of actual values. This paper develops a solution procedure to solve nonlinear programming problems using GP technique by splitting the cost coefficients, constraint coefficients and exponents with the help of binary numbers. The equivalent mathematical programming problems are formulated to find their corresponding value of the objective function based on the duality theorem. The ability of calculating the cost coefficients, constraint coefficients and exponents developed in this paper might help lead to more realistic modeling efforts in engineering design areas. Standard nonlinear programming software has been used to solve the proposed optimization problem. Two numerical examples are presented to illustrate the method.
[ { "created": "Fri, 5 Feb 2010 09:29:23 GMT", "version": "v1" } ]
2010-02-08
[ [ "Ojha", "A. K.", "" ], [ "Das", "A. K.", "" ] ]
Geometric programming (GP) provides a power tool for solving a variety of optimization problems. In the real world, many applications of geometric programming (GP) are engineering design problems in which some of the problem parameters are estimating of actual values. This paper develops a solution procedure to solve nonlinear programming problems using GP technique by splitting the cost coefficients, constraint coefficients and exponents with the help of binary numbers. The equivalent mathematical programming problems are formulated to find their corresponding value of the objective function based on the duality theorem. The ability of calculating the cost coefficients, constraint coefficients and exponents developed in this paper might help lead to more realistic modeling efforts in engineering design areas. Standard nonlinear programming software has been used to solve the proposed optimization problem. Two numerical examples are presented to illustrate the method.
2011.09318
Jieli Liu
Jiajing Wu, Jieli Liu, Yijing Zhao, Zibin Zheng
Analysis of Cryptocurrency Transactions from a Network Perspective: An Overview
null
null
10.1016/j.jnca.2021.103139
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As one of the most important and famous applications of blockchain technology, cryptocurrency has attracted extensive attention recently. Empowered by blockchain technology, all the transaction records of cryptocurrencies are irreversible and recorded in the blocks. These transaction records containing rich information and complete traces of financial activities are publicly accessible, thus providing researchers with unprecedented opportunities for data mining and knowledge discovery in this area. Networks are a general language for describing interacting systems in the real world, and a considerable part of existing work on cryptocurrency transactions is studied from a network perspective. This survey aims to analyze and summarize the existing literature on analyzing and understanding cryptocurrency transactions from a network perspective. Aiming to provide a systematic guideline for researchers and engineers, we present the background information of cryptocurrency transaction network analysis and review existing research in terms of three aspects, i.e., network modeling, network profiling, and network-based detection. For each aspect, we introduce the research issues, summarize the methods, and discuss the results and findings given in the literature. Furthermore, we present the main challenges and several future directions in this area.
[ { "created": "Wed, 18 Nov 2020 14:39:02 GMT", "version": "v1" }, { "created": "Sat, 7 Aug 2021 15:44:34 GMT", "version": "v2" } ]
2022-01-20
[ [ "Wu", "Jiajing", "" ], [ "Liu", "Jieli", "" ], [ "Zhao", "Yijing", "" ], [ "Zheng", "Zibin", "" ] ]
As one of the most important and famous applications of blockchain technology, cryptocurrency has attracted extensive attention recently. Empowered by blockchain technology, all the transaction records of cryptocurrencies are irreversible and recorded in the blocks. These transaction records containing rich information and complete traces of financial activities are publicly accessible, thus providing researchers with unprecedented opportunities for data mining and knowledge discovery in this area. Networks are a general language for describing interacting systems in the real world, and a considerable part of existing work on cryptocurrency transactions is studied from a network perspective. This survey aims to analyze and summarize the existing literature on analyzing and understanding cryptocurrency transactions from a network perspective. Aiming to provide a systematic guideline for researchers and engineers, we present the background information of cryptocurrency transaction network analysis and review existing research in terms of three aspects, i.e., network modeling, network profiling, and network-based detection. For each aspect, we introduce the research issues, summarize the methods, and discuss the results and findings given in the literature. Furthermore, we present the main challenges and several future directions in this area.
1909.01910
Valerio Formicola Dr.
Vincenzo Giuliano and Valerio Formicola
ICSrange: A Simulation-based Cyber Range Platform for Industrial Control Systems
Student Forum paper of the 15th European Dependable Computing Conference (EDCC 2019)
null
null
null
cs.CR cs.SY eess.SY
http://creativecommons.org/licenses/by-sa/4.0/
Maintenance staff of Industrial Control Systems (ICS) is generally not aware about information technologies, and even less about cyber security problems. The scary impact of cyber attacks in the industrial world calls for tools to train defensive skills and test effective security measures. Cyber range offers this opportunity, but current research is lacking cost-effective solutions verticalized for the industrial domain. This work proposes ICSrange, a simulation-based cyber range platform for Industrial Control Systems. ICSrange adopts Commercial-Off-The-Shelf (COTS) technologies to virtualize an enterprise network connected to Industrial Control Systems. ICSrange is the outcome of a preliminary study intended to investigate challenges and opportunities to build a configurable and extensible cyber range with simulated industrial processes. Literature shows that testbeds based on realistic mock-ups are effectively employed to develop complex exploits like Advanced Persistent Threats (APTs), hence motivating their usage to train and test security in ICS. We prove the effectiveness of ICSrange through the execution of a multi-staged attack that breaches an enterprise network and progressively intrudes a simulated ICS with water tanks. The attack mimics lateral movements as observed in APTs.
[ { "created": "Wed, 4 Sep 2019 16:03:08 GMT", "version": "v1" } ]
2019-09-05
[ [ "Giuliano", "Vincenzo", "" ], [ "Formicola", "Valerio", "" ] ]
Maintenance staff of Industrial Control Systems (ICS) is generally not aware about information technologies, and even less about cyber security problems. The scary impact of cyber attacks in the industrial world calls for tools to train defensive skills and test effective security measures. Cyber range offers this opportunity, but current research is lacking cost-effective solutions verticalized for the industrial domain. This work proposes ICSrange, a simulation-based cyber range platform for Industrial Control Systems. ICSrange adopts Commercial-Off-The-Shelf (COTS) technologies to virtualize an enterprise network connected to Industrial Control Systems. ICSrange is the outcome of a preliminary study intended to investigate challenges and opportunities to build a configurable and extensible cyber range with simulated industrial processes. Literature shows that testbeds based on realistic mock-ups are effectively employed to develop complex exploits like Advanced Persistent Threats (APTs), hence motivating their usage to train and test security in ICS. We prove the effectiveness of ICSrange through the execution of a multi-staged attack that breaches an enterprise network and progressively intrudes a simulated ICS with water tanks. The attack mimics lateral movements as observed in APTs.
1803.04798
Banu Kabakulak
Banu Kabakulak, Z. Caner Ta\c{s}k{\i}n, and Ali Emre Pusane
A Branch-Price-and-Cut Algorithm for Optimal Decoding of LDPC Codes
30 pages, 4 figures, 7 tables
null
null
null
cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Channel coding aims to minimize errors that occur during the transmission of digital information from one place to another. Low-density parity-check (LDPC) codes can detect and correct transmission errors if one encodes the original information by adding redundant bits. In practice, heuristic iterative decoding algorithms are used to decode the received vector. However, these algorithms may fail to decode if the received vector contains multiple errors. We consider decoding the received vector with minimum error as an integer programming problem and propose a branch-and-price method for its solution. We improve the performance of our method by introducing heuristic feasible solutions and adding valid cuts to the mathematical formulation. Computational results reveal that our branch-price-and-cut algorithm significantly improves solvability of the problem compared to a commercial solver in high channel error rates. Our proposed algorithm can find higher quality solutions than commonly used iterative decoding heuristics.
[ { "created": "Tue, 13 Mar 2018 13:41:09 GMT", "version": "v1" } ]
2018-03-14
[ [ "Kabakulak", "Banu", "" ], [ "Taşkın", "Z. Caner", "" ], [ "Pusane", "Ali Emre", "" ] ]
Channel coding aims to minimize errors that occur during the transmission of digital information from one place to another. Low-density parity-check (LDPC) codes can detect and correct transmission errors if one encodes the original information by adding redundant bits. In practice, heuristic iterative decoding algorithms are used to decode the received vector. However, these algorithms may fail to decode if the received vector contains multiple errors. We consider decoding the received vector with minimum error as an integer programming problem and propose a branch-and-price method for its solution. We improve the performance of our method by introducing heuristic feasible solutions and adding valid cuts to the mathematical formulation. Computational results reveal that our branch-price-and-cut algorithm significantly improves solvability of the problem compared to a commercial solver in high channel error rates. Our proposed algorithm can find higher quality solutions than commonly used iterative decoding heuristics.
2310.17325
Xiaoyu Liu
Xiaoyu Liu, Jiaxin Yuan, Bang An, Yuancheng Xu, Yifan Yang, Furong Huang
C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder
accepted to Neurips 2023
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space. These factors are expected to be causally disentangled, meaning that distinct factors are encoded into separate latent variables, and changes in one factor will not affect the values of the others. Compared to statistical independence, causal disentanglement allows more controllable data generation, improved robustness, and better generalization. However, most existing work assumes unconfoundedness in the discovery process, that there are no common causes to the generative factors and thus obtain only statistical independence. In this paper, we recognize the importance of modeling confounders in discovering causal generative factors. Unfortunately, such factors are not identifiable without proper inductive bias. We fill the gap by introducing a framework entitled Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder via labels from domain expertise. In addition, we accordingly propose an approach to sufficiently identify the causally disentangled factors under any inductive bias of the confounder. We conduct extensive experiments on both synthetic and real-world datasets. Our method demonstrates competitive results compared to various SOTA baselines in obtaining causally disentangled features and downstream tasks under domain shifts.
[ { "created": "Thu, 26 Oct 2023 11:44:42 GMT", "version": "v1" } ]
2023-10-27
[ [ "Liu", "Xiaoyu", "" ], [ "Yuan", "Jiaxin", "" ], [ "An", "Bang", "" ], [ "Xu", "Yuancheng", "" ], [ "Yang", "Yifan", "" ], [ "Huang", "Furong", "" ] ]
Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space. These factors are expected to be causally disentangled, meaning that distinct factors are encoded into separate latent variables, and changes in one factor will not affect the values of the others. Compared to statistical independence, causal disentanglement allows more controllable data generation, improved robustness, and better generalization. However, most existing work assumes unconfoundedness in the discovery process, that there are no common causes to the generative factors and thus obtain only statistical independence. In this paper, we recognize the importance of modeling confounders in discovering causal generative factors. Unfortunately, such factors are not identifiable without proper inductive bias. We fill the gap by introducing a framework entitled Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder via labels from domain expertise. In addition, we accordingly propose an approach to sufficiently identify the causally disentangled factors under any inductive bias of the confounder. We conduct extensive experiments on both synthetic and real-world datasets. Our method demonstrates competitive results compared to various SOTA baselines in obtaining causally disentangled features and downstream tasks under domain shifts.
1908.03605
Manjunath Narayana
Nandan Banerjee, Ryan C. Connolly, Dimitri Lisin, Jimmy Briggs, Manjunath Narayana, Mario E. Munich
View management for lifelong visual maps
IEEE International Conference on Intelligent Robots and Systems (IROS), 2019
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The time complexity of making observations and loop closures in a graph-based visual SLAM system is a function of the number of views stored. Clever algorithms, such as approximate nearest neighbor search, can make this function sub-linear. Despite this, over time the number of views can still grow to a point at which the speed and/or accuracy of the system becomes unacceptable, especially in computation- and memory-constrained SLAM systems. However, not all views are created equal. Some views are rarely observed, because they have been created in an unusual lighting condition, or from low quality images, or in a location whose appearance has changed. These views can be removed to improve the overall performance of a SLAM system. In this paper, we propose a method for pruning views in a visual SLAM system to maintain its speed and accuracy for long term use.
[ { "created": "Fri, 9 Aug 2019 19:26:56 GMT", "version": "v1" } ]
2019-08-13
[ [ "Banerjee", "Nandan", "" ], [ "Connolly", "Ryan C.", "" ], [ "Lisin", "Dimitri", "" ], [ "Briggs", "Jimmy", "" ], [ "Narayana", "Manjunath", "" ], [ "Munich", "Mario E.", "" ] ]
The time complexity of making observations and loop closures in a graph-based visual SLAM system is a function of the number of views stored. Clever algorithms, such as approximate nearest neighbor search, can make this function sub-linear. Despite this, over time the number of views can still grow to a point at which the speed and/or accuracy of the system becomes unacceptable, especially in computation- and memory-constrained SLAM systems. However, not all views are created equal. Some views are rarely observed, because they have been created in an unusual lighting condition, or from low quality images, or in a location whose appearance has changed. These views can be removed to improve the overall performance of a SLAM system. In this paper, we propose a method for pruning views in a visual SLAM system to maintain its speed and accuracy for long term use.
1501.04979
Thomas Goldstein
Tom Goldstein, Christoph Studer, Richard Baraniuk
FASTA: A Generalized Implementation of Forward-Backward Splitting
null
null
null
null
cs.MS cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is a user manual for the software package FASTA.
[ { "created": "Fri, 16 Jan 2015 01:22:55 GMT", "version": "v1" }, { "created": "Thu, 6 Aug 2015 16:54:50 GMT", "version": "v2" }, { "created": "Wed, 20 Jan 2016 23:51:33 GMT", "version": "v3" } ]
2016-01-22
[ [ "Goldstein", "Tom", "" ], [ "Studer", "Christoph", "" ], [ "Baraniuk", "Richard", "" ] ]
This is a user manual for the software package FASTA.
2403.18447
Inhwan Bae
Inhwan Bae and Junoh Lee and Hae-Gon Jeon
Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction
Accepted at CVPR 2024
null
null
null
cs.CL cs.CV cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Language models have demonstrated impressive ability in context understanding and generative performance. Inspired by the recent success of language foundation models, in this paper, we propose LMTraj (Language-based Multimodal Trajectory predictor), which recasts the trajectory prediction task into a sort of question-answering problem. Departing from traditional numerical regression models, which treat the trajectory coordinate sequence as continuous signals, we consider them as discrete signals like text prompts. Specially, we first transform an input space for the trajectory coordinate into the natural language space. Here, the entire time-series trajectories of pedestrians are converted into a text prompt, and scene images are described as text information through image captioning. The transformed numerical and image data are then wrapped into the question-answering template for use in a language model. Next, to guide the language model in understanding and reasoning high-level knowledge, such as scene context and social relationships between pedestrians, we introduce an auxiliary multi-task question and answering. We then train a numerical tokenizer with the prompt data. We encourage the tokenizer to separate the integer and decimal parts well, and leverage it to capture correlations between the consecutive numbers in the language model. Lastly, we train the language model using the numerical tokenizer and all of the question-answer prompts. Here, we propose a beam-search-based most-likely prediction and a temperature-based multimodal prediction to implement both deterministic and stochastic inferences. Applying our LMTraj, we show that the language-based model can be a powerful pedestrian trajectory predictor, and outperforms existing numerical-based predictor methods. Code is publicly available at https://github.com/inhwanbae/LMTrajectory .
[ { "created": "Wed, 27 Mar 2024 11:06:44 GMT", "version": "v1" } ]
2024-03-28
[ [ "Bae", "Inhwan", "" ], [ "Lee", "Junoh", "" ], [ "Jeon", "Hae-Gon", "" ] ]
Language models have demonstrated impressive ability in context understanding and generative performance. Inspired by the recent success of language foundation models, in this paper, we propose LMTraj (Language-based Multimodal Trajectory predictor), which recasts the trajectory prediction task into a sort of question-answering problem. Departing from traditional numerical regression models, which treat the trajectory coordinate sequence as continuous signals, we consider them as discrete signals like text prompts. Specially, we first transform an input space for the trajectory coordinate into the natural language space. Here, the entire time-series trajectories of pedestrians are converted into a text prompt, and scene images are described as text information through image captioning. The transformed numerical and image data are then wrapped into the question-answering template for use in a language model. Next, to guide the language model in understanding and reasoning high-level knowledge, such as scene context and social relationships between pedestrians, we introduce an auxiliary multi-task question and answering. We then train a numerical tokenizer with the prompt data. We encourage the tokenizer to separate the integer and decimal parts well, and leverage it to capture correlations between the consecutive numbers in the language model. Lastly, we train the language model using the numerical tokenizer and all of the question-answer prompts. Here, we propose a beam-search-based most-likely prediction and a temperature-based multimodal prediction to implement both deterministic and stochastic inferences. Applying our LMTraj, we show that the language-based model can be a powerful pedestrian trajectory predictor, and outperforms existing numerical-based predictor methods. Code is publicly available at https://github.com/inhwanbae/LMTrajectory .
2108.03900
Jiexia Ye
Jiexia Ye, Juanjuan Zhao, Furong Zheng, Chengzhong Xu
Completion and Augmentation based Spatiotemporal Deep Learning Approach for Short-Term Metro Origin-Destination Matrix Prediction under Limited Observable Data
16 pages, 13 figures, 6 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-term OD flow (i.e. the number of passenger traveling between stations) prediction is crucial to traffic management in metro systems. Due to the delayed effect in latest complete OD flow collection, complex spatiotemporal correlations of OD flows in high dimension, it is more challengeable than other traffic prediction tasks of time series. Existing methods need to be improved due to not fully utilizing the real-time passenger mobility data and not sufficiently modeling the implicit correlation of the mobility patterns between stations. In this paper, we propose a Completion based Adaptive Heterogeneous Graph Convolution Spatiotemporal Predictor. The novelty is mainly reflected in two aspects. The first is to model real-time mobility evolution by establishing the implicit correlation between observed OD flows and the prediction target OD flows in high dimension based on a key data-driven insight: the destination distributions of the passengers departing from a station are correlated with other stations sharing similar attributes (e.g. geographical location, region function). The second is to complete the latest incomplete OD flows by estimating the destination distribution of unfinished trips through considering the real-time mobility evolution and the time cost between stations, which is the base of time series prediction and can improve the model's dynamic adaptability. Extensive experiments on two real world metro datasets demonstrate the superiority of our model over other competitors with the biggest model performance improvement being nearly 4\%. In addition, the data complete framework we propose can be integrated into other models to improve their performance up to 2.1\%.
[ { "created": "Mon, 9 Aug 2021 09:32:42 GMT", "version": "v1" }, { "created": "Mon, 16 Aug 2021 03:15:34 GMT", "version": "v2" }, { "created": "Tue, 19 Oct 2021 01:51:43 GMT", "version": "v3" }, { "created": "Fri, 12 Nov 2021 08:32:25 GMT", "version": "v4" }, { "created": "Tue, 15 Feb 2022 09:46:13 GMT", "version": "v5" }, { "created": "Fri, 18 Feb 2022 02:34:36 GMT", "version": "v6" }, { "created": "Mon, 28 Mar 2022 08:02:26 GMT", "version": "v7" }, { "created": "Tue, 18 Oct 2022 06:22:05 GMT", "version": "v8" } ]
2022-10-19
[ [ "Ye", "Jiexia", "" ], [ "Zhao", "Juanjuan", "" ], [ "Zheng", "Furong", "" ], [ "Xu", "Chengzhong", "" ] ]
Short-term OD flow (i.e. the number of passenger traveling between stations) prediction is crucial to traffic management in metro systems. Due to the delayed effect in latest complete OD flow collection, complex spatiotemporal correlations of OD flows in high dimension, it is more challengeable than other traffic prediction tasks of time series. Existing methods need to be improved due to not fully utilizing the real-time passenger mobility data and not sufficiently modeling the implicit correlation of the mobility patterns between stations. In this paper, we propose a Completion based Adaptive Heterogeneous Graph Convolution Spatiotemporal Predictor. The novelty is mainly reflected in two aspects. The first is to model real-time mobility evolution by establishing the implicit correlation between observed OD flows and the prediction target OD flows in high dimension based on a key data-driven insight: the destination distributions of the passengers departing from a station are correlated with other stations sharing similar attributes (e.g. geographical location, region function). The second is to complete the latest incomplete OD flows by estimating the destination distribution of unfinished trips through considering the real-time mobility evolution and the time cost between stations, which is the base of time series prediction and can improve the model's dynamic adaptability. Extensive experiments on two real world metro datasets demonstrate the superiority of our model over other competitors with the biggest model performance improvement being nearly 4\%. In addition, the data complete framework we propose can be integrated into other models to improve their performance up to 2.1\%.
2406.07302
Julen Etxaniz
Julen Etxaniz and Gorka Azkune and Aitor Soroa and Oier Lopez de Lacalle and Mikel Artetxe
BertaQA: How Much Do Language Models Know About Local Culture?
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA.
[ { "created": "Tue, 11 Jun 2024 14:30:34 GMT", "version": "v1" } ]
2024-06-12
[ [ "Etxaniz", "Julen", "" ], [ "Azkune", "Gorka", "" ], [ "Soroa", "Aitor", "" ], [ "de Lacalle", "Oier Lopez", "" ], [ "Artetxe", "Mikel", "" ] ]
Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA.
1508.00691
Che Lin
Po-Chun Fu, Pei-Rong Li, Li-Ming Wei, Chang-Lin Chen, and Che Lin
Deterministic Differential Search Algorithm for Distributed Sensor/Relay Networks
2 pages, 1 figure, conference
null
null
null
cs.IT cs.DC cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For distributed sensor/relay networks, high reliability and power efficiency are often required. However, several implementation issues arise in practice. One such problem is that all the distributed transmitters have limited power supply since the power source of the transmitters cannot be recharged continually. To resolve this, distributed beamforming has been proposed as a viable solution where all distributed transmitters seek to align in phase at the receiver end. However, it is difficult to implement such transmit beamforming in a distributed fashion in practice since perfect channel state information (CSI) need to be made available at all distributed transmitters, requiring tremendous overhead to feed back CSI from the receiver to all distributed transmitters. In this paper, we propose a novel algorithm that belongs to the category of deterministic phase adjustment algorithm: the Deterministic Differential Search Algorithm (DDSA), where the differences between the measured received signal strength (RSS) are utilized judiciously to help us predict the deterministic phase adjustment done at distributed transmitters. Numerical simulations demonstrate rapid convergence to a pre-determined threshold.
[ { "created": "Tue, 4 Aug 2015 08:00:25 GMT", "version": "v1" } ]
2015-08-05
[ [ "Fu", "Po-Chun", "" ], [ "Li", "Pei-Rong", "" ], [ "Wei", "Li-Ming", "" ], [ "Chen", "Chang-Lin", "" ], [ "Lin", "Che", "" ] ]
For distributed sensor/relay networks, high reliability and power efficiency are often required. However, several implementation issues arise in practice. One such problem is that all the distributed transmitters have limited power supply since the power source of the transmitters cannot be recharged continually. To resolve this, distributed beamforming has been proposed as a viable solution where all distributed transmitters seek to align in phase at the receiver end. However, it is difficult to implement such transmit beamforming in a distributed fashion in practice since perfect channel state information (CSI) need to be made available at all distributed transmitters, requiring tremendous overhead to feed back CSI from the receiver to all distributed transmitters. In this paper, we propose a novel algorithm that belongs to the category of deterministic phase adjustment algorithm: the Deterministic Differential Search Algorithm (DDSA), where the differences between the measured received signal strength (RSS) are utilized judiciously to help us predict the deterministic phase adjustment done at distributed transmitters. Numerical simulations demonstrate rapid convergence to a pre-determined threshold.
1112.3785
Salvador Abreu
Theofrastos Mantadelis and Gerda Janssens
Nesting Probabilistic Inference
Online Proceedings of the 11th International Colloquium on Implementation of Constraint LOgic Programming Systems (CICLOPS 2011), Lexington, KY, U.S.A., July 10, 2011
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When doing inference in ProbLog, a probabilistic extension of Prolog, we extend SLD resolution with some additional bookkeeping. This additional information is used to compute the probabilistic results for a probabilistic query. In Prolog's SLD, goals are nested very naturally. In ProbLog's SLD, nesting probabilistic queries interferes with the probabilistic bookkeeping. In order to support nested probabilistic inference we propose the notion of a parametrised ProbLog engine. Nesting becomes possible by suspending and resuming instances of ProbLog engines. With our approach we realise several extensions of ProbLog such as meta-calls, negation, and answers of probabilistic goals.
[ { "created": "Fri, 16 Dec 2011 12:25:28 GMT", "version": "v1" } ]
2011-12-19
[ [ "Mantadelis", "Theofrastos", "" ], [ "Janssens", "Gerda", "" ] ]
When doing inference in ProbLog, a probabilistic extension of Prolog, we extend SLD resolution with some additional bookkeeping. This additional information is used to compute the probabilistic results for a probabilistic query. In Prolog's SLD, goals are nested very naturally. In ProbLog's SLD, nesting probabilistic queries interferes with the probabilistic bookkeeping. In order to support nested probabilistic inference we propose the notion of a parametrised ProbLog engine. Nesting becomes possible by suspending and resuming instances of ProbLog engines. With our approach we realise several extensions of ProbLog such as meta-calls, negation, and answers of probabilistic goals.
1108.3426
Bogdan Aman
Livio Bioglio, Cristina Calcagno, Mario Coppo, Ferruccio Damiani, Eva Sciacca, Salvatore Spinella, Angelo Troina
A Spatial Calculus of Wrapped Compartments
Presented at MeCBIC 2011
null
null
MeCBIC/2011/05
cs.LO cs.CE cs.ET q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Calculus of Wrapped Compartments (CWC) is a recently proposed modelling language for the representation and simulation of biological systems behaviour. Although CWC has no explicit structure modelling a spatial geometry, its compartment labelling feature can be exploited to model various examples of spatial interactions in a natural way. However, specifying large networks of compartments may require a long modelling phase. In this work we present a surface language for CWC that provides basic constructs for modelling spatial interactions. These constructs can be compiled away to obtain a standard CWC model, thus exploiting the existing CWC simulation tool. A case study concerning the modelling of Arbuscular Mychorrizal fungi growth is discussed.
[ { "created": "Wed, 17 Aug 2011 09:00:56 GMT", "version": "v1" } ]
2011-08-18
[ [ "Bioglio", "Livio", "" ], [ "Calcagno", "Cristina", "" ], [ "Coppo", "Mario", "" ], [ "Damiani", "Ferruccio", "" ], [ "Sciacca", "Eva", "" ], [ "Spinella", "Salvatore", "" ], [ "Troina", "Angelo", "" ] ]
The Calculus of Wrapped Compartments (CWC) is a recently proposed modelling language for the representation and simulation of biological systems behaviour. Although CWC has no explicit structure modelling a spatial geometry, its compartment labelling feature can be exploited to model various examples of spatial interactions in a natural way. However, specifying large networks of compartments may require a long modelling phase. In this work we present a surface language for CWC that provides basic constructs for modelling spatial interactions. These constructs can be compiled away to obtain a standard CWC model, thus exploiting the existing CWC simulation tool. A case study concerning the modelling of Arbuscular Mychorrizal fungi growth is discussed.
2312.15102
Wassim Kabbani
Wassim Kabbani, Christoph Busch, Kiran Raja
Robust Sclera Segmentation for Skin-tone Agnostic Face Image Quality Assessment
null
BIOSIG 2023. Gesellschaft f\"ur Informatik e.V. ISSN: 1617-5468. ISBN: 978-3-88579-733-3
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face image quality assessment (FIQA) is crucial for obtaining good face recognition performance. FIQA algorithms should be robust and insensitive to demographic factors. The eye sclera has a consistent whitish color in all humans regardless of their age, ethnicity and skin-tone. This work proposes a robust sclera segmentation method that is suitable for face images in the enrolment and the border control face recognition scenarios. It shows how the statistical analysis of the sclera pixels produces features that are invariant to skin-tone, age and ethnicity and thus can be incorporated into FIQA algorithms to make them agnostic to demographic factors.
[ { "created": "Fri, 22 Dec 2023 22:49:11 GMT", "version": "v1" } ]
2023-12-27
[ [ "Kabbani", "Wassim", "" ], [ "Busch", "Christoph", "" ], [ "Raja", "Kiran", "" ] ]
Face image quality assessment (FIQA) is crucial for obtaining good face recognition performance. FIQA algorithms should be robust and insensitive to demographic factors. The eye sclera has a consistent whitish color in all humans regardless of their age, ethnicity and skin-tone. This work proposes a robust sclera segmentation method that is suitable for face images in the enrolment and the border control face recognition scenarios. It shows how the statistical analysis of the sclera pixels produces features that are invariant to skin-tone, age and ethnicity and thus can be incorporated into FIQA algorithms to make them agnostic to demographic factors.
1301.2561
Hiroki Sayama
Hiroki Sayama, Irene Pestov, Jeffrey Schmidt, Benjamin James Bush, Chun Wong, Junichi Yamanoi, and Thilo Gross
Modeling complex systems with adaptive networks
24 pages, 11 figures, 3 tables
Computers and Mathematics with Applications, 65, 1645-1664 (2013)
10.1016/j.camwa.2012.12.005
null
cs.SI nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptive networks are a novel class of dynamical networks whose topologies and states coevolve. Many real-world complex systems can be modeled as adaptive networks, including social networks, transportation networks, neural networks and biological networks. In this paper, we introduce fundamental concepts and unique properties of adaptive networks through a brief, non-comprehensive review of recent literature on mathematical/computational modeling and analysis of such networks. We also report our recent work on several applications of computational adaptive network modeling and analysis to real-world problems, including temporal development of search and rescue operational networks, automated rule discovery from empirical network evolution data, and cultural integration in corporate merger.
[ { "created": "Fri, 11 Jan 2013 17:59:20 GMT", "version": "v1" } ]
2017-05-29
[ [ "Sayama", "Hiroki", "" ], [ "Pestov", "Irene", "" ], [ "Schmidt", "Jeffrey", "" ], [ "Bush", "Benjamin James", "" ], [ "Wong", "Chun", "" ], [ "Yamanoi", "Junichi", "" ], [ "Gross", "Thilo", "" ] ]
Adaptive networks are a novel class of dynamical networks whose topologies and states coevolve. Many real-world complex systems can be modeled as adaptive networks, including social networks, transportation networks, neural networks and biological networks. In this paper, we introduce fundamental concepts and unique properties of adaptive networks through a brief, non-comprehensive review of recent literature on mathematical/computational modeling and analysis of such networks. We also report our recent work on several applications of computational adaptive network modeling and analysis to real-world problems, including temporal development of search and rescue operational networks, automated rule discovery from empirical network evolution data, and cultural integration in corporate merger.
2212.13184
Peiyu Chen
Peiyu Chen, Weipeng Guan, Peng Lu
ESVIO: Event-based Stereo Visual Inertial Odometry
null
IEEE Robotics and Automation Letters (Volume: 8, Issue: 6, June 2023)
10.1109/LRA.2023.3269950
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event cameras that asynchronously output low-latency event streams provide great opportunities for state estimation under challenging situations. Despite event-based visual odometry having been extensively studied in recent years, most of them are based on monocular and few research on stereo event vision. In this paper, we present ESVIO, the first event-based stereo visual-inertial odometry, which leverages the complementary advantages of event streams, standard images and inertial measurements. Our proposed pipeline achieves temporal tracking and instantaneous matching between consecutive stereo event streams, thereby obtaining robust state estimation. In addition, the motion compensation method is designed to emphasize the edge of scenes by warping each event to reference moments with IMU and ESVIO back-end. We validate that both ESIO (purely event-based) and ESVIO (event with image-aided) have superior performance compared with other image-based and event-based baseline methods on public and self-collected datasets. Furthermore, we use our pipeline to perform onboard quadrotor flights under low-light environments. A real-world large-scale experiment is also conducted to demonstrate long-term effectiveness. We highlight that this work is a real-time, accurate system that is aimed at robust state estimation under challenging environments.
[ { "created": "Mon, 26 Dec 2022 15:11:01 GMT", "version": "v1" }, { "created": "Fri, 14 Apr 2023 16:13:12 GMT", "version": "v2" }, { "created": "Mon, 11 Mar 2024 02:17:03 GMT", "version": "v3" } ]
2024-03-12
[ [ "Chen", "Peiyu", "" ], [ "Guan", "Weipeng", "" ], [ "Lu", "Peng", "" ] ]
Event cameras that asynchronously output low-latency event streams provide great opportunities for state estimation under challenging situations. Despite event-based visual odometry having been extensively studied in recent years, most of them are based on monocular and few research on stereo event vision. In this paper, we present ESVIO, the first event-based stereo visual-inertial odometry, which leverages the complementary advantages of event streams, standard images and inertial measurements. Our proposed pipeline achieves temporal tracking and instantaneous matching between consecutive stereo event streams, thereby obtaining robust state estimation. In addition, the motion compensation method is designed to emphasize the edge of scenes by warping each event to reference moments with IMU and ESVIO back-end. We validate that both ESIO (purely event-based) and ESVIO (event with image-aided) have superior performance compared with other image-based and event-based baseline methods on public and self-collected datasets. Furthermore, we use our pipeline to perform onboard quadrotor flights under low-light environments. A real-world large-scale experiment is also conducted to demonstrate long-term effectiveness. We highlight that this work is a real-time, accurate system that is aimed at robust state estimation under challenging environments.
1003.0167
Atri Rudra
Nikhil Bansal, Anupam Gupta, Viswanath Nagarajan and Atri Rudra
When LP is the Cure for Your Matching Woes: Approximating Stochastic Matchings
This paper has been withdrawn due to new merged paper arXiv:1008.5356v1
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This results in this paper have been merged with the result in arXiv:1002.3763v1 The authors would like to withdraw this version. Please see arXiv:1008.5356v1 for the merged version.
[ { "created": "Sun, 28 Feb 2010 06:32:18 GMT", "version": "v1" }, { "created": "Fri, 3 Sep 2010 01:29:37 GMT", "version": "v2" } ]
2010-09-06
[ [ "Bansal", "Nikhil", "" ], [ "Gupta", "Anupam", "" ], [ "Nagarajan", "Viswanath", "" ], [ "Rudra", "Atri", "" ] ]
This results in this paper have been merged with the result in arXiv:1002.3763v1 The authors would like to withdraw this version. Please see arXiv:1008.5356v1 for the merged version.
2402.14152
Jonathan Ku
Jonathan Ku, Junyao Zhang, Haoxuan Shan, Saichand Samudrala, Jiawen Wu, Qilin Zheng, Ziru Li, JV Rajendran, Yiran Chen
ModSRAM: Algorithm-Hardware Co-Design for Large Number Modular Multiplication in SRAM
DAC 2024
null
null
null
cs.AR cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Elliptic curve cryptography (ECC) is widely used in security applications such as public key cryptography (PKC) and zero-knowledge proofs (ZKP). ECC is composed of modular arithmetic, where modular multiplication takes most of the processing time. Computational complexity and memory constraints of ECC limit the performance. Therefore, hardware acceleration on ECC is an active field of research. Processing-in-memory (PIM) is a promising approach to tackle this problem. In this work, we design ModSRAM, the first 8T SRAM PIM architecture to compute large-number modular multiplication efficiently. In addition, we propose R4CSA-LUT, a new algorithm that reduces the cycles for an interleaved algorithm and eliminates carry propagation for addition based on look-up tables (LUT). ModSRAM is co-designed with R4CSA-LUT to support modular multiplication and data reuse in memory with 52% cycle reduction compared to prior works with only 32% area overhead.
[ { "created": "Wed, 21 Feb 2024 22:26:44 GMT", "version": "v1" } ]
2024-02-23
[ [ "Ku", "Jonathan", "" ], [ "Zhang", "Junyao", "" ], [ "Shan", "Haoxuan", "" ], [ "Samudrala", "Saichand", "" ], [ "Wu", "Jiawen", "" ], [ "Zheng", "Qilin", "" ], [ "Li", "Ziru", "" ], [ "Rajendran", "JV", "" ], [ "Chen", "Yiran", "" ] ]
Elliptic curve cryptography (ECC) is widely used in security applications such as public key cryptography (PKC) and zero-knowledge proofs (ZKP). ECC is composed of modular arithmetic, where modular multiplication takes most of the processing time. Computational complexity and memory constraints of ECC limit the performance. Therefore, hardware acceleration on ECC is an active field of research. Processing-in-memory (PIM) is a promising approach to tackle this problem. In this work, we design ModSRAM, the first 8T SRAM PIM architecture to compute large-number modular multiplication efficiently. In addition, we propose R4CSA-LUT, a new algorithm that reduces the cycles for an interleaved algorithm and eliminates carry propagation for addition based on look-up tables (LUT). ModSRAM is co-designed with R4CSA-LUT to support modular multiplication and data reuse in memory with 52% cycle reduction compared to prior works with only 32% area overhead.
2305.01876
Siyu Yuan
Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie
Causality-aware Concept Extraction based on Knowledge-guided Prompting
Accepted to ACL 2023
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Concepts benefit natural language understanding but are far from complete in existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs) have been widely used in text-based concept extraction (CE). However, PLMs tend to mine the co-occurrence associations from massive corpus as pre-trained knowledge rather than the real causal effect between tokens. As a result, the pre-trained knowledge confounds PLMs to extract biased concepts based on spurious co-occurrence correlations, inevitably resulting in low precision. In this paper, through the lens of a Structural Causal Model (SCM), we propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention to alleviate concept bias. The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts. Our extensive experiments on representative multilingual KG datasets justify that our proposed prompt can effectively alleviate concept bias and improve the performance of PLM-based CE models.The code has been released on https://github.com/siyuyuan/KPCE.
[ { "created": "Wed, 3 May 2023 03:36:20 GMT", "version": "v1" }, { "created": "Thu, 4 May 2023 02:16:38 GMT", "version": "v2" }, { "created": "Sun, 7 May 2023 03:02:12 GMT", "version": "v3" }, { "created": "Wed, 10 May 2023 01:15:45 GMT", "version": "v4" }, { "created": "Sat, 10 Jun 2023 07:34:27 GMT", "version": "v5" } ]
2023-06-13
[ [ "Yuan", "Siyu", "" ], [ "Yang", "Deqing", "" ], [ "Liu", "Jinxi", "" ], [ "Tian", "Shuyu", "" ], [ "Liang", "Jiaqing", "" ], [ "Xiao", "Yanghua", "" ], [ "Xie", "Rui", "" ] ]
Concepts benefit natural language understanding but are far from complete in existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs) have been widely used in text-based concept extraction (CE). However, PLMs tend to mine the co-occurrence associations from massive corpus as pre-trained knowledge rather than the real causal effect between tokens. As a result, the pre-trained knowledge confounds PLMs to extract biased concepts based on spurious co-occurrence correlations, inevitably resulting in low precision. In this paper, through the lens of a Structural Causal Model (SCM), we propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention to alleviate concept bias. The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts. Our extensive experiments on representative multilingual KG datasets justify that our proposed prompt can effectively alleviate concept bias and improve the performance of PLM-based CE models.The code has been released on https://github.com/siyuyuan/KPCE.
1106.0855
Eyal Ackerman
Eyal Ackerman, Tsachik Gelander, and Rom Pinchasi
Ice-Creams and Wedge Graphs
7 pages
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What is the minimum angle $\alpha >0$ such that given any set of $\alpha$-directional antennas (that is, antennas each of which can communicate along a wedge of angle $\alpha$), one can always assign a direction to each antenna such that the resulting communication graph is connected? Here two antennas are connected by an edge if and only if each lies in the wedge assigned to the other. This problem was recently presented by Carmi, Katz, Lotker, and Ros\'en \cite{CKLR10} who also found the minimum such $\alpha$ namely $\alpha=\frac{\pi}{3}$. In this paper we give a simple proof of this result. Moreover, we obtain a much stronger and optimal result (see Theorem \ref{theorem:main}) saying in particular that one can chose the directions of the antennas so that the communication graph has diameter $\le 4$. Our main tool is a surprisingly basic geometric lemma that is of independent interest. We show that for every compact convex set $S$ in the plane and every $0 < \alpha < \pi$, there exist a point $O$ and two supporting lines to $S$ passing through $O$ and touching $S$ at two \emph{single points} $X$ and $Y$, respectively, such that $|OX|=|OY|$ and the angle between the two lines is $\alpha$.
[ { "created": "Sat, 4 Jun 2011 20:24:50 GMT", "version": "v1" }, { "created": "Sun, 24 Jul 2011 13:07:44 GMT", "version": "v2" } ]
2011-07-26
[ [ "Ackerman", "Eyal", "" ], [ "Gelander", "Tsachik", "" ], [ "Pinchasi", "Rom", "" ] ]
What is the minimum angle $\alpha >0$ such that given any set of $\alpha$-directional antennas (that is, antennas each of which can communicate along a wedge of angle $\alpha$), one can always assign a direction to each antenna such that the resulting communication graph is connected? Here two antennas are connected by an edge if and only if each lies in the wedge assigned to the other. This problem was recently presented by Carmi, Katz, Lotker, and Ros\'en \cite{CKLR10} who also found the minimum such $\alpha$ namely $\alpha=\frac{\pi}{3}$. In this paper we give a simple proof of this result. Moreover, we obtain a much stronger and optimal result (see Theorem \ref{theorem:main}) saying in particular that one can chose the directions of the antennas so that the communication graph has diameter $\le 4$. Our main tool is a surprisingly basic geometric lemma that is of independent interest. We show that for every compact convex set $S$ in the plane and every $0 < \alpha < \pi$, there exist a point $O$ and two supporting lines to $S$ passing through $O$ and touching $S$ at two \emph{single points} $X$ and $Y$, respectively, such that $|OX|=|OY|$ and the angle between the two lines is $\alpha$.
1303.5438
Yang Xiang
Yang Xiang, David L. Poole, Michael P. Beddoes
Exploring Localization in Bayesian Networks for Large Expert Systems
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-344-351
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current Bayesian net representations do not consider structure in the domain and include all variables in a homogeneous network. At any time, a human reasoner in a large domain may direct his attention to only one of a number of natural subdomains, i.e., there is ?localization' of queries and evidence. In such a case, propagating evidence through a homogeneous network is inefficient since the entire network has to be updated each time. This paper presents multiply sectioned Bayesian networks that enable a (localization preserving) representation of natural subdomains by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place at only one of them at a time. Probabilities obtained are identical to those that would be obtained from the homogeneous network. We discuss attention shift to a different junction tree and propagation of previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of only one junction tree.
[ { "created": "Wed, 13 Mar 2013 12:56:04 GMT", "version": "v1" } ]
2013-03-25
[ [ "Xiang", "Yang", "" ], [ "Poole", "David L.", "" ], [ "Beddoes", "Michael P.", "" ] ]
Current Bayesian net representations do not consider structure in the domain and include all variables in a homogeneous network. At any time, a human reasoner in a large domain may direct his attention to only one of a number of natural subdomains, i.e., there is ?localization' of queries and evidence. In such a case, propagating evidence through a homogeneous network is inefficient since the entire network has to be updated each time. This paper presents multiply sectioned Bayesian networks that enable a (localization preserving) representation of natural subdomains by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place at only one of them at a time. Probabilities obtained are identical to those that would be obtained from the homogeneous network. We discuss attention shift to a different junction tree and propagation of previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of only one junction tree.
1904.08701
Nils Rexin
Nils Rexin, Marcel Musch and Klaus Dietmayer
Fusion of Object Tracking and Dynamic Occupancy Grid Map
null
2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 4121-4127
10.1109/ITSC.2019.8917048
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Environment modeling in autonomous driving is realized by two fundamental approaches, grid-based and feature-based approach. Both methods interpret the environment differently and show some situation-dependent beneficial realizations. In order to use the advantages of both methods, a combination makes sense. This work presents a fusion, which establishes an association between the representations of environment modeling and then decoupled from this performs a fusion of the information. Thus, there is no need to adapt the environment models. The developed fusion generates new hypotheses, which are closer to reality than a representation alone. This algorithm itself does not use object model assumptions, in effect this fusion can be applied to different object hypotheses. In addition, this combination allows the objects to be tracked over a longer period of time. This is evaluated with a quantitative evaluation on real sequences in real-time.
[ { "created": "Thu, 18 Apr 2019 11:32:09 GMT", "version": "v1" }, { "created": "Mon, 22 Jul 2019 09:35:45 GMT", "version": "v2" }, { "created": "Thu, 5 Dec 2019 09:53:08 GMT", "version": "v3" } ]
2019-12-06
[ [ "Rexin", "Nils", "" ], [ "Musch", "Marcel", "" ], [ "Dietmayer", "Klaus", "" ] ]
Environment modeling in autonomous driving is realized by two fundamental approaches, grid-based and feature-based approach. Both methods interpret the environment differently and show some situation-dependent beneficial realizations. In order to use the advantages of both methods, a combination makes sense. This work presents a fusion, which establishes an association between the representations of environment modeling and then decoupled from this performs a fusion of the information. Thus, there is no need to adapt the environment models. The developed fusion generates new hypotheses, which are closer to reality than a representation alone. This algorithm itself does not use object model assumptions, in effect this fusion can be applied to different object hypotheses. In addition, this combination allows the objects to be tracked over a longer period of time. This is evaluated with a quantitative evaluation on real sequences in real-time.
1102.2506
Behrouz Maham
Behrouz Maham and Are Hj{\o}rungnes
Opportunistic Relaying for Space-Time Coded Cooperation with Multiple Antenna Terminals
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a wireless relay network with multiple antenna terminals over Rayleigh fading channels, and apply distributed space-time coding (DSTC) in amplify-and-forward (A&F) mode. The A&F scheme is used in a way that each relay transmits a scaled version of the linear combination of the received symbols. It turns out that, combined with power allocation in the relays, A&F DSTC results in an opportunistic relaying scheme, in which only the best relay is selected to retransmit the source's space-time coded signal. Furthermore, assuming the knowledge of source-relay CSI at the source node, we design an efficient power allocation which outperforms uniform power allocation across the source antennas. Next, assuming M-PSK or M-QAM modulations, we analyze the performance of the proposed cooperative diversity transmission schemes in a wireless relay networks with the multiple-antenna source and destination. We derive the probability density function (PDF) of the received SNR at the destination. Then, the PDF is used to determine the symbol error rate (SER) in Rayleigh fading channels. We derived closed-form approximations of the average SER in the high SNR scenario, from which we find the diversity order of system RminfNs;Ndg, where R, Ns, and Nd are the number of the relays, source antennas, and destination antennas, respectively. Simulation results show that the proposed system obtain more than 6 dB gain in SNR over A&F MIMO DSTC for BER 10^{-4}, when R = 2, Ns = 2, and Nd = 1.
[ { "created": "Sat, 12 Feb 2011 12:28:41 GMT", "version": "v1" } ]
2011-02-15
[ [ "Maham", "Behrouz", "" ], [ "Hjørungnes", "Are", "" ] ]
We consider a wireless relay network with multiple antenna terminals over Rayleigh fading channels, and apply distributed space-time coding (DSTC) in amplify-and-forward (A&F) mode. The A&F scheme is used in a way that each relay transmits a scaled version of the linear combination of the received symbols. It turns out that, combined with power allocation in the relays, A&F DSTC results in an opportunistic relaying scheme, in which only the best relay is selected to retransmit the source's space-time coded signal. Furthermore, assuming the knowledge of source-relay CSI at the source node, we design an efficient power allocation which outperforms uniform power allocation across the source antennas. Next, assuming M-PSK or M-QAM modulations, we analyze the performance of the proposed cooperative diversity transmission schemes in a wireless relay networks with the multiple-antenna source and destination. We derive the probability density function (PDF) of the received SNR at the destination. Then, the PDF is used to determine the symbol error rate (SER) in Rayleigh fading channels. We derived closed-form approximations of the average SER in the high SNR scenario, from which we find the diversity order of system RminfNs;Ndg, where R, Ns, and Nd are the number of the relays, source antennas, and destination antennas, respectively. Simulation results show that the proposed system obtain more than 6 dB gain in SNR over A&F MIMO DSTC for BER 10^{-4}, when R = 2, Ns = 2, and Nd = 1.
2106.06761
Robert Burduk
Robert Burduk
Relearning ensemble selection based on new generated features
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ensemble methods are meta-algorithms that combine several base machine learning techniques to increase the effectiveness of the classification. Many existing committees of classifiers use the classifier selection process to determine the optimal set of base classifiers. In this article, we propose the classifiers selection framework with relearning base classifiers. Additionally, we use in the proposed framework the new generated feature, which can be obtained after the relearning process. The proposed technique was compared with state-of-the-art ensemble methods using three benchmark datasets and one synthetic dataset. Four classification performance measures are used to evaluate the proposed method.
[ { "created": "Sat, 12 Jun 2021 12:45:32 GMT", "version": "v1" } ]
2021-06-15
[ [ "Burduk", "Robert", "" ] ]
The ensemble methods are meta-algorithms that combine several base machine learning techniques to increase the effectiveness of the classification. Many existing committees of classifiers use the classifier selection process to determine the optimal set of base classifiers. In this article, we propose the classifiers selection framework with relearning base classifiers. Additionally, we use in the proposed framework the new generated feature, which can be obtained after the relearning process. The proposed technique was compared with state-of-the-art ensemble methods using three benchmark datasets and one synthetic dataset. Four classification performance measures are used to evaluate the proposed method.
1502.04925
Andrei Asinowski
Andrei Asinowski, G\"unter Rote
Point sets with many non-crossing matchings
33 pages, 19 figures, 2 tables
Computational Geometry, Theory and Applications 68 (2018), 7-33
10.1016/j.comgeo.2017.05.006
null
cs.CG cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The maximum number of non-crossing straight-line perfect matchings that a set of $n$ points in the plane can have is known to be $O(10.0438^n)$ and $\Omega^*(3^n)$. The lower bound, due to Garc\'ia, Noy, and Tejel (2000) is attained by the double chain, which has $\Theta(3^n n^{O(1)})$ such matchings. We reprove this bound in a simplified way that uses the novel notion of down-free matching, and apply this approach on several other constructions. As a result, we improve the lower bound. First we show that double zigzag chain with $n$ points has $\Theta^*(\lambda^n)$ such matchings with $\lambda \approx 3.0532$. Next we analyze further generalizations of double zigzag chains - double $r$-chains. The best choice of parameters leads to a construction with $\Theta^*(\nu^n)$ matchings, with $\nu \approx 3.0930$. The derivation of this bound requires an analysis of a coupled dynamic-programming recursion between two infinite vectors.
[ { "created": "Tue, 17 Feb 2015 15:28:52 GMT", "version": "v1" } ]
2017-11-20
[ [ "Asinowski", "Andrei", "" ], [ "Rote", "Günter", "" ] ]
The maximum number of non-crossing straight-line perfect matchings that a set of $n$ points in the plane can have is known to be $O(10.0438^n)$ and $\Omega^*(3^n)$. The lower bound, due to Garc\'ia, Noy, and Tejel (2000) is attained by the double chain, which has $\Theta(3^n n^{O(1)})$ such matchings. We reprove this bound in a simplified way that uses the novel notion of down-free matching, and apply this approach on several other constructions. As a result, we improve the lower bound. First we show that double zigzag chain with $n$ points has $\Theta^*(\lambda^n)$ such matchings with $\lambda \approx 3.0532$. Next we analyze further generalizations of double zigzag chains - double $r$-chains. The best choice of parameters leads to a construction with $\Theta^*(\nu^n)$ matchings, with $\nu \approx 3.0930$. The derivation of this bound requires an analysis of a coupled dynamic-programming recursion between two infinite vectors.
2202.06027
Wei Jing
Tianying Wang, En Yen Puang, Marcus Lee, Yan Wu, Wei Jing
End-to-end Reinforcement Learning of Robotic Manipulation with Robust Keypoints Representation
8 pages
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
We present an end-to-end Reinforcement Learning(RL) framework for robotic manipulation tasks, using a robust and efficient keypoints representation. The proposed method learns keypoints from camera images as the state representation, through a self-supervised autoencoder architecture. The keypoints encode the geometric information, as well as the relationship of the tool and target in a compact representation to ensure efficient and robust learning. After keypoints learning, the RL step then learns the robot motion from the extracted keypoints state representation. The keypoints and RL learning processes are entirely done in the simulated environment. We demonstrate the effectiveness of the proposed method on robotic manipulation tasks including grasping and pushing, in different scenarios. We also investigate the generalization capability of the trained model. In addition to the robust keypoints representation, we further apply domain randomization and adversarial training examples to achieve zero-shot sim-to-real transfer in real-world robotic manipulation tasks.
[ { "created": "Sat, 12 Feb 2022 09:58:09 GMT", "version": "v1" } ]
2022-02-15
[ [ "Wang", "Tianying", "" ], [ "Puang", "En Yen", "" ], [ "Lee", "Marcus", "" ], [ "Wu", "Yan", "" ], [ "Jing", "Wei", "" ] ]
We present an end-to-end Reinforcement Learning(RL) framework for robotic manipulation tasks, using a robust and efficient keypoints representation. The proposed method learns keypoints from camera images as the state representation, through a self-supervised autoencoder architecture. The keypoints encode the geometric information, as well as the relationship of the tool and target in a compact representation to ensure efficient and robust learning. After keypoints learning, the RL step then learns the robot motion from the extracted keypoints state representation. The keypoints and RL learning processes are entirely done in the simulated environment. We demonstrate the effectiveness of the proposed method on robotic manipulation tasks including grasping and pushing, in different scenarios. We also investigate the generalization capability of the trained model. In addition to the robust keypoints representation, we further apply domain randomization and adversarial training examples to achieve zero-shot sim-to-real transfer in real-world robotic manipulation tasks.
2312.06309
Max Hahn-Klimroth
Max Hahn-Klimroth, Paul W. Dierkes, Matthias W. Kleespies
An unsupervised learning approach to evaluate questionnaire data -- what one can learn from violations of measurement invariance
null
null
10.5334/dsj-2024-013
null
cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In several branches of the social sciences and humanities, surveys based on standardized questionnaires are a prominent research tool. While there are a variety of ways to analyze the data, some standard procedures have become established. When those surveys want to analyze differences in the answer patterns of different groups (e.g., countries, gender, age, ...), these procedures can only be carried out in a meaningful way if there is measurement invariance, i.e., the measured construct has psychometric equivalence across groups. As recently raised as an open problem by Sauerwein et al. (2021), new evaluation methods that work in the absence of measurement invariance are needed. This paper promotes an unsupervised learning-based approach to such research data by proposing a procedure that works in three phases: data preparation, clustering of questionnaires, and measuring similarity based on the obtained clustering and the properties of each group. We generate synthetic data in three data sets, which allows us to compare our approach with the PCA approach under measurement invariance and under violated measurement invariance. As a main result, we obtain that the approach provides a natural comparison between groups and a natural description of the response patterns of the groups. Moreover, it can be safely applied to a wide variety of data sets, even in the absence of measurement invariance. Finally, this approach allows us to translate (violations of) measurement invariance into a meaningful measure of similarity.
[ { "created": "Mon, 11 Dec 2023 11:31:41 GMT", "version": "v1" } ]
2024-03-28
[ [ "Hahn-Klimroth", "Max", "" ], [ "Dierkes", "Paul W.", "" ], [ "Kleespies", "Matthias W.", "" ] ]
In several branches of the social sciences and humanities, surveys based on standardized questionnaires are a prominent research tool. While there are a variety of ways to analyze the data, some standard procedures have become established. When those surveys want to analyze differences in the answer patterns of different groups (e.g., countries, gender, age, ...), these procedures can only be carried out in a meaningful way if there is measurement invariance, i.e., the measured construct has psychometric equivalence across groups. As recently raised as an open problem by Sauerwein et al. (2021), new evaluation methods that work in the absence of measurement invariance are needed. This paper promotes an unsupervised learning-based approach to such research data by proposing a procedure that works in three phases: data preparation, clustering of questionnaires, and measuring similarity based on the obtained clustering and the properties of each group. We generate synthetic data in three data sets, which allows us to compare our approach with the PCA approach under measurement invariance and under violated measurement invariance. As a main result, we obtain that the approach provides a natural comparison between groups and a natural description of the response patterns of the groups. Moreover, it can be safely applied to a wide variety of data sets, even in the absence of measurement invariance. Finally, this approach allows us to translate (violations of) measurement invariance into a meaningful measure of similarity.
2005.08610
Sadaf Salehkalaibar
Sadaf Salehkalaibar and Michele Wigger
Distributed Hypothesis Testing with Variable-Length Coding
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of distributed testing against independence with variable-length coding is considered when the \emph{average} and not the \emph{maximum} communication load is constrained as in previous works. The paper characterizes the optimum type-II error exponent of a single sensor single decision center system given a maximum type-I error probability when communication is either over a noise-free rate-$R$ link or over a noisy discrete memoryless channel (DMC) with stop-feedback. Specifically, let $\epsilon$ denote the maximum allowed type-I error probability. Then the optimum exponent of the system with a rate-$R$ link under a constraint on the average communication load coincides with the optimum exponent of such a system with a rate $R/(1-\epsilon)$ link under a maximum communication load constraint. A strong converse thus does not hold under an average communication load constraint. A similar observation holds also for testing against independence over DMCs. With variable-length coding and stop-feedback and under an average communication load constraint, the optimum type-II error exponent over a DMC of capacity $C$ equals the optimum exponent under fixed-length coding and a maximum communication load constraint when communication is over a DMC of capacity $C(1-\epsilon)^{-1}$. In particular, under variable-length coding over a DMC with stop feedback a strong converse result does not hold and the optimum error exponent depends on the transition law of the DMC only through its capacity.
[ { "created": "Mon, 18 May 2020 11:47:47 GMT", "version": "v1" } ]
2020-05-19
[ [ "Salehkalaibar", "Sadaf", "" ], [ "Wigger", "Michele", "" ] ]
The problem of distributed testing against independence with variable-length coding is considered when the \emph{average} and not the \emph{maximum} communication load is constrained as in previous works. The paper characterizes the optimum type-II error exponent of a single sensor single decision center system given a maximum type-I error probability when communication is either over a noise-free rate-$R$ link or over a noisy discrete memoryless channel (DMC) with stop-feedback. Specifically, let $\epsilon$ denote the maximum allowed type-I error probability. Then the optimum exponent of the system with a rate-$R$ link under a constraint on the average communication load coincides with the optimum exponent of such a system with a rate $R/(1-\epsilon)$ link under a maximum communication load constraint. A strong converse thus does not hold under an average communication load constraint. A similar observation holds also for testing against independence over DMCs. With variable-length coding and stop-feedback and under an average communication load constraint, the optimum type-II error exponent over a DMC of capacity $C$ equals the optimum exponent under fixed-length coding and a maximum communication load constraint when communication is over a DMC of capacity $C(1-\epsilon)^{-1}$. In particular, under variable-length coding over a DMC with stop feedback a strong converse result does not hold and the optimum error exponent depends on the transition law of the DMC only through its capacity.
2006.08900
Ao Zhang
Ao Zhang and Jinwen Ma
DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder
11 pages, 2 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) achieve remarkable performance for tasks on graph data. However, recent works show they are extremely vulnerable to adversarial structural perturbations, making their outcomes unreliable. In this paper, we propose DefenseVGAE, a novel framework leveraging variational graph autoencoders(VGAEs) to defend GNNs against such attacks. DefenseVGAE is trained to reconstruct graph structure. The reconstructed adjacency matrix can reduce the effects of adversarial perturbations and boost the performance of GCNs when facing adversarial attacks. Our experiments on a number of datasets show the effectiveness of the proposed method under various threat models. Under some settings it outperforms existing defense strategies. Our code has been made publicly available at https://github.com/zhangao520/defense-vgae.
[ { "created": "Tue, 16 Jun 2020 03:30:23 GMT", "version": "v1" } ]
2020-06-17
[ [ "Zhang", "Ao", "" ], [ "Ma", "Jinwen", "" ] ]
Graph neural networks (GNNs) achieve remarkable performance for tasks on graph data. However, recent works show they are extremely vulnerable to adversarial structural perturbations, making their outcomes unreliable. In this paper, we propose DefenseVGAE, a novel framework leveraging variational graph autoencoders(VGAEs) to defend GNNs against such attacks. DefenseVGAE is trained to reconstruct graph structure. The reconstructed adjacency matrix can reduce the effects of adversarial perturbations and boost the performance of GCNs when facing adversarial attacks. Our experiments on a number of datasets show the effectiveness of the proposed method under various threat models. Under some settings it outperforms existing defense strategies. Our code has been made publicly available at https://github.com/zhangao520/defense-vgae.
2306.01863
Yixin Xu
Yixin Xu, Yi Xiao, Zijian Zhao, Franz M\"uller, Alptekin Vardar, Xiao Gong, Sumitha George, Thomas K\"ampfe, Vijaykrishnan Narayanan, Kai Ni
Embedding Security into Ferroelectric FET Array via In-Situ Memory Operation
null
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-volatile memories (NVMs) have the potential to reshape next-generation memory systems because of their promising properties of near-zero leakage power consumption, high density and non-volatility. However, NVMs also face critical security threats that exploit the non-volatile property. Compared to volatile memory, the capability of retaining data even after power down makes NVM more vulnerable. Existing solutions to address the security issues of NVMs are mainly based on Advanced Encryption Standard (AES), which incurs significant performance and power overhead. In this paper, we propose a lightweight memory encryption/decryption scheme by exploiting in-situ memory operations with negligible overhead. To validate the feasibility of the encryption/decryption scheme, device-level and array-level experiments are performed using ferroelectric field effect transistor (FeFET) as an example NVM without loss of generality. Besides, a comprehensive evaluation is performed on a 128x128 FeFET AND-type memory array in terms of area, latency, power and throughput. Compared with the AES-based scheme, our scheme shows around 22.6x/14.1x increase in encryption/decryption throughput with negligible power penalty. Furthermore, we evaluate the performance of our scheme over the AES-based scheme when deploying different neural network workloads. Our scheme yields significant latency reduction by 90% on average for encryption and decryption processes.
[ { "created": "Fri, 2 Jun 2023 18:35:29 GMT", "version": "v1" } ]
2023-06-06
[ [ "Xu", "Yixin", "" ], [ "Xiao", "Yi", "" ], [ "Zhao", "Zijian", "" ], [ "Müller", "Franz", "" ], [ "Vardar", "Alptekin", "" ], [ "Gong", "Xiao", "" ], [ "George", "Sumitha", "" ], [ "Kämpfe", "Thomas", "" ], [ "Narayanan", "Vijaykrishnan", "" ], [ "Ni", "Kai", "" ] ]
Non-volatile memories (NVMs) have the potential to reshape next-generation memory systems because of their promising properties of near-zero leakage power consumption, high density and non-volatility. However, NVMs also face critical security threats that exploit the non-volatile property. Compared to volatile memory, the capability of retaining data even after power down makes NVM more vulnerable. Existing solutions to address the security issues of NVMs are mainly based on Advanced Encryption Standard (AES), which incurs significant performance and power overhead. In this paper, we propose a lightweight memory encryption/decryption scheme by exploiting in-situ memory operations with negligible overhead. To validate the feasibility of the encryption/decryption scheme, device-level and array-level experiments are performed using ferroelectric field effect transistor (FeFET) as an example NVM without loss of generality. Besides, a comprehensive evaluation is performed on a 128x128 FeFET AND-type memory array in terms of area, latency, power and throughput. Compared with the AES-based scheme, our scheme shows around 22.6x/14.1x increase in encryption/decryption throughput with negligible power penalty. Furthermore, we evaluate the performance of our scheme over the AES-based scheme when deploying different neural network workloads. Our scheme yields significant latency reduction by 90% on average for encryption and decryption processes.
2402.07588
Tinashe Handina
Tinashe Handina and Eric Mazumdar
Understanding Model Selection For Learning In Strategic Environments
Reworded title, fixed typos and changed organization from previous version
null
null
null
cs.GT cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over$\unicode{x2013}$and the more data one has access to$\unicode{x2013}$the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view$\unicode{x2013}$meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game.
[ { "created": "Mon, 12 Feb 2024 11:41:42 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2024 18:49:51 GMT", "version": "v2" }, { "created": "Sat, 1 Jun 2024 23:16:05 GMT", "version": "v3" } ]
2024-06-04
[ [ "Handina", "Tinashe", "" ], [ "Mazumdar", "Eric", "" ] ]
The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over$\unicode{x2013}$and the more data one has access to$\unicode{x2013}$the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view$\unicode{x2013}$meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game.
1908.07380
Omar Rivasplata
Omar Rivasplata, Vikram M Tankasali, Csaba Szepesvari
PAC-Bayes with Backprop
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the family of methods "PAC-Bayes with Backprop" (PBB) to train probabilistic neural networks by minimizing PAC-Bayes bounds. We present two training objectives, one derived from a previously known PAC-Bayes bound, and a second one derived from a novel PAC-Bayes bound. Both training objectives are evaluated on MNIST and on various UCI data sets. Our experiments show two striking observations: we obtain competitive test set error estimates (~1.4% on MNIST) and at the same time we compute non-vacuous bounds with much tighter values (~2.3% on MNIST) than previous results. These observations suggest that neural nets trained by PBB may lead to self-bounding learning, where the available data can be used to simultaneously learn a predictor and certify its risk, with no need to follow a data-splitting protocol.
[ { "created": "Mon, 19 Aug 2019 13:27:08 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 10:18:05 GMT", "version": "v2" }, { "created": "Fri, 23 Aug 2019 08:16:40 GMT", "version": "v3" }, { "created": "Mon, 30 Sep 2019 12:32:30 GMT", "version": "v4" }, { "created": "Fri, 4 Oct 2019 17:23:16 GMT", "version": "v5" } ]
2019-10-07
[ [ "Rivasplata", "Omar", "" ], [ "Tankasali", "Vikram M", "" ], [ "Szepesvari", "Csaba", "" ] ]
We explore the family of methods "PAC-Bayes with Backprop" (PBB) to train probabilistic neural networks by minimizing PAC-Bayes bounds. We present two training objectives, one derived from a previously known PAC-Bayes bound, and a second one derived from a novel PAC-Bayes bound. Both training objectives are evaluated on MNIST and on various UCI data sets. Our experiments show two striking observations: we obtain competitive test set error estimates (~1.4% on MNIST) and at the same time we compute non-vacuous bounds with much tighter values (~2.3% on MNIST) than previous results. These observations suggest that neural nets trained by PBB may lead to self-bounding learning, where the available data can be used to simultaneously learn a predictor and certify its risk, with no need to follow a data-splitting protocol.
1807.00687
Tom Kelly
Tom Kelly, Niloy J. Mitra
Simplifying Urban Data Fusion with BigSUR
Architecture_MPS under review
null
null
null
cs.GR
http://creativecommons.org/publicdomain/zero/1.0/
Our ability to understand data has always lagged behind our ability to collect it. This is particularly true in urban environments, where mass data capture is particularly valuable, but the objects captured are more varied, denser, and complex. To understand the structure and content of the environment, we must process the unstructured data to a structured form. BigSUR is an urban reconstruction algorithm which fuses GIS data, photogrammetric meshes, and street level photography, to create clean representative, semantically labelled, geometry. However, we have identified three problems with the system i) the street level photography is often difficult to acquire; ii) novel fa\c{c}ade styles often frustrate the detection of windows and doors; iii) the computational requirements of the system are large, processing a large city block can take up to 15 hours. In this paper we describe the process of simplifying and validating the BigSUR semantic reconstruction system. In particular, the requirement for street level images is removed, and greedy post-process profile assignment is introduced to accelerate the system. We accomplish this by modifying the binary integer programming (BIP) optimization, and re-evaluating the effects of various parameters. The new variant of the system is evaluated over a variety of urban areas. We objectively measure mean squared error (MSE) terms over the unstructured geometry, showing that BigSUR is able to accurately recover omissions from the input meshes. Further, we evaluate the ability of the system to label the walls and roofs of input meshes, concluding that our new BigSUR variant achieves highly accurate semantic labelling with shorter computational time and less input data.
[ { "created": "Mon, 2 Jul 2018 14:14:57 GMT", "version": "v1" } ]
2018-07-03
[ [ "Kelly", "Tom", "" ], [ "Mitra", "Niloy J.", "" ] ]
Our ability to understand data has always lagged behind our ability to collect it. This is particularly true in urban environments, where mass data capture is particularly valuable, but the objects captured are more varied, denser, and complex. To understand the structure and content of the environment, we must process the unstructured data to a structured form. BigSUR is an urban reconstruction algorithm which fuses GIS data, photogrammetric meshes, and street level photography, to create clean representative, semantically labelled, geometry. However, we have identified three problems with the system i) the street level photography is often difficult to acquire; ii) novel fa\c{c}ade styles often frustrate the detection of windows and doors; iii) the computational requirements of the system are large, processing a large city block can take up to 15 hours. In this paper we describe the process of simplifying and validating the BigSUR semantic reconstruction system. In particular, the requirement for street level images is removed, and greedy post-process profile assignment is introduced to accelerate the system. We accomplish this by modifying the binary integer programming (BIP) optimization, and re-evaluating the effects of various parameters. The new variant of the system is evaluated over a variety of urban areas. We objectively measure mean squared error (MSE) terms over the unstructured geometry, showing that BigSUR is able to accurately recover omissions from the input meshes. Further, we evaluate the ability of the system to label the walls and roofs of input meshes, concluding that our new BigSUR variant achieves highly accurate semantic labelling with shorter computational time and less input data.
1405.3393
Gregory Duck
Gregory J. Duck and Remy Haemmerle and Martin Sulzmann
On Termination, Confluence and Consistent CHR-based Type Inference
null
Theory and Practice of Logic Programming 14 (2014) 619-632
10.1017/S1471068414000246
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the application of Constraint Handling Rules (CHR) for the specification of type inference systems, such as that used by Haskell. Confluence of CHR guarantees that the answer provided by type inference is correct and consistent. The standard method for establishing confluence relies on an assumption that the CHR program is terminating. However, many examples in practice give rise to non-terminating CHR programs, rendering this method inapplicable. Despite no guarantee of termination or confluence, the Glasgow Haskell Compiler (GHC) supports options that allow the user to proceed with type inference anyway, e.g. via the use of the UndecidableInstances flag. In this paper we formally identify and verify a set of relaxed criteria, namely range-restrictedness, local confluence, and ground termination, that ensure the consistency of CHR-based type inference that maps to potentially non-terminating CHR programs.
[ { "created": "Wed, 14 May 2014 07:51:39 GMT", "version": "v1" } ]
2020-02-19
[ [ "Duck", "Gregory J.", "" ], [ "Haemmerle", "Remy", "" ], [ "Sulzmann", "Martin", "" ] ]
We consider the application of Constraint Handling Rules (CHR) for the specification of type inference systems, such as that used by Haskell. Confluence of CHR guarantees that the answer provided by type inference is correct and consistent. The standard method for establishing confluence relies on an assumption that the CHR program is terminating. However, many examples in practice give rise to non-terminating CHR programs, rendering this method inapplicable. Despite no guarantee of termination or confluence, the Glasgow Haskell Compiler (GHC) supports options that allow the user to proceed with type inference anyway, e.g. via the use of the UndecidableInstances flag. In this paper we formally identify and verify a set of relaxed criteria, namely range-restrictedness, local confluence, and ground termination, that ensure the consistency of CHR-based type inference that maps to potentially non-terminating CHR programs.
2404.11052
Mohammad Shiri
Mohammad Shiri, Monalika Padma Reddy, Jiangwen Sun
Supervised Contrastive Vision Transformer for Breast Histopathological Image Classification
8 pages, 7 figures
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invasive ductal carcinoma (IDC) is the most prevalent form of breast cancer. Breast tissue histopathological examination is critical in diagnosing and classifying breast cancer. Although existing methods have shown promising results, there is still room for improvement in the classification accuracy and generalization of IDC using histopathology images. We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the classification of invasive ductal carcinoma in terms of accuracy and generalization by leveraging the inherent strengths and advantages of both transfer learning, i.e., pre-trained vision transformer, and supervised contrastive learning. Our results on a benchmark breast cancer dataset demonstrate that SupCon-Vit achieves state-of-the-art performance in IDC classification, with an F1-score of 0.8188, precision of 0.7692, and specificity of 0.8971, outperforming existing methods. In addition, the proposed model demonstrates resilience in scenarios with minimal labeled data, making it highly efficient in real-world clinical settings where labelled data is limited. Our findings suggest that supervised contrastive learning in conjunction with pre-trained vision transformers appears to be a viable strategy for an accurate classification of IDC, thus paving the way for a more efficient and reliable diagnosis of breast cancer through histopathological image analysis.
[ { "created": "Wed, 17 Apr 2024 03:51:55 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2024 01:59:27 GMT", "version": "v2" } ]
2024-04-19
[ [ "Shiri", "Mohammad", "" ], [ "Reddy", "Monalika Padma", "" ], [ "Sun", "Jiangwen", "" ] ]
Invasive ductal carcinoma (IDC) is the most prevalent form of breast cancer. Breast tissue histopathological examination is critical in diagnosing and classifying breast cancer. Although existing methods have shown promising results, there is still room for improvement in the classification accuracy and generalization of IDC using histopathology images. We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the classification of invasive ductal carcinoma in terms of accuracy and generalization by leveraging the inherent strengths and advantages of both transfer learning, i.e., pre-trained vision transformer, and supervised contrastive learning. Our results on a benchmark breast cancer dataset demonstrate that SupCon-Vit achieves state-of-the-art performance in IDC classification, with an F1-score of 0.8188, precision of 0.7692, and specificity of 0.8971, outperforming existing methods. In addition, the proposed model demonstrates resilience in scenarios with minimal labeled data, making it highly efficient in real-world clinical settings where labelled data is limited. Our findings suggest that supervised contrastive learning in conjunction with pre-trained vision transformers appears to be a viable strategy for an accurate classification of IDC, thus paving the way for a more efficient and reliable diagnosis of breast cancer through histopathological image analysis.
2303.02618
Yanjie Song
Yanjie Song, P. N. Suganthan, Witold Pedrycz, Junwei Ou, Yongming He, Yingwu Chen, Yutong Wu
Ensemble Reinforcement Learning: A Survey
34 pages
null
null
null
cs.LG cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems. Despite its success, certain complex tasks remain challenging to be addressed solely with a single model and algorithm. In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity. ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities. In this study, we present a comprehensive survey on ERL to provide readers with an overview of recent advances and challenges in the field. Firstly, we provide an introduction to the background and motivation for ERL. Secondly, we conduct a detailed analysis of strategies such as model selection and combination that have been successfully implemented in ERL. Subsequently, we explore the application of ERL, summarize the datasets, and analyze the algorithms employed. Finally, we outline several open questions and discuss future research directions of ERL. By offering guidance for future scientific research and engineering applications, this survey significantly contributes to the advancement of ERL.
[ { "created": "Sun, 5 Mar 2023 09:26:44 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2023 08:43:54 GMT", "version": "v2" }, { "created": "Wed, 13 Dec 2023 13:27:25 GMT", "version": "v3" } ]
2023-12-14
[ [ "Song", "Yanjie", "" ], [ "Suganthan", "P. N.", "" ], [ "Pedrycz", "Witold", "" ], [ "Ou", "Junwei", "" ], [ "He", "Yongming", "" ], [ "Chen", "Yingwu", "" ], [ "Wu", "Yutong", "" ] ]
Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems. Despite its success, certain complex tasks remain challenging to be addressed solely with a single model and algorithm. In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity. ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities. In this study, we present a comprehensive survey on ERL to provide readers with an overview of recent advances and challenges in the field. Firstly, we provide an introduction to the background and motivation for ERL. Secondly, we conduct a detailed analysis of strategies such as model selection and combination that have been successfully implemented in ERL. Subsequently, we explore the application of ERL, summarize the datasets, and analyze the algorithms employed. Finally, we outline several open questions and discuss future research directions of ERL. By offering guidance for future scientific research and engineering applications, this survey significantly contributes to the advancement of ERL.
2109.10385
Kishan Chandan
Kishan Chandan, Jack Albertson, Xiaohan Zhang, Xiaoyang Zhang, Yao Liu, Shiqi Zhang
Learning to Guide Human Attention on Mobile Telepresence Robots with 360 Vision
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile telepresence robots (MTRs) allow people to navigate and interact with a remote environment that is in a place other than the person's true location. Thanks to the recent advances in 360 degree vision, many MTRs are now equipped with an all-degree visual perception capability. However, people's visual field horizontally spans only about 120 degree of the visual field captured by the robot. To bridge this observability gap toward human-MTR shared autonomy, we have developed a framework, called GHAL360, to enable the MTR to learn a goal-oriented policy from reinforcements for guiding human attention using visual indicators. Three telepresence environments were constructed using datasets that are extracted from Matterport3D and collected from a real robot respectively. Experimental results show that GHAL360 outperformed the baselines from the literature in the efficiency of a human-MTR team completing target search tasks.
[ { "created": "Tue, 21 Sep 2021 18:15:05 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2022 16:49:26 GMT", "version": "v2" } ]
2022-02-28
[ [ "Chandan", "Kishan", "" ], [ "Albertson", "Jack", "" ], [ "Zhang", "Xiaohan", "" ], [ "Zhang", "Xiaoyang", "" ], [ "Liu", "Yao", "" ], [ "Zhang", "Shiqi", "" ] ]
Mobile telepresence robots (MTRs) allow people to navigate and interact with a remote environment that is in a place other than the person's true location. Thanks to the recent advances in 360 degree vision, many MTRs are now equipped with an all-degree visual perception capability. However, people's visual field horizontally spans only about 120 degree of the visual field captured by the robot. To bridge this observability gap toward human-MTR shared autonomy, we have developed a framework, called GHAL360, to enable the MTR to learn a goal-oriented policy from reinforcements for guiding human attention using visual indicators. Three telepresence environments were constructed using datasets that are extracted from Matterport3D and collected from a real robot respectively. Experimental results show that GHAL360 outperformed the baselines from the literature in the efficiency of a human-MTR team completing target search tasks.
2311.11753
Sai Amrit Patnaik
Sai Amrit Patnaik, Shivali Chansoriya, Anil K. Jain, Anoop M. Namboodiri
AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems
10 pages, 9 figures, Accepted to the International Joint Conference on Biometrics (IJCB 2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating the risk level of adversarial images is essential for safely deploying face authentication models in the real world. Popular approaches for physical-world attacks, such as print or replay attacks, suffer from some limitations, like including physical and geometrical artifacts. Recently, adversarial attacks have gained attraction, which try to digitally deceive the learning strategy of a recognition system using slight modifications to the captured image. While most previous research assumes that the adversarial image could be digitally fed into the authentication systems, this is not always the case for systems deployed in the real world. This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios. We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs in a physical domain attack setting. Using this attack strategy, the attack success rate goes up to 82.01%. We test AdvGen extensively on four datasets and ten state-of-the-art PADs. We also demonstrate the effectiveness of our attack by conducting experiments in a realistic, physical environment.
[ { "created": "Mon, 20 Nov 2023 13:28:42 GMT", "version": "v1" } ]
2023-11-21
[ [ "Patnaik", "Sai Amrit", "" ], [ "Chansoriya", "Shivali", "" ], [ "Jain", "Anil K.", "" ], [ "Namboodiri", "Anoop M.", "" ] ]
Evaluating the risk level of adversarial images is essential for safely deploying face authentication models in the real world. Popular approaches for physical-world attacks, such as print or replay attacks, suffer from some limitations, like including physical and geometrical artifacts. Recently, adversarial attacks have gained attraction, which try to digitally deceive the learning strategy of a recognition system using slight modifications to the captured image. While most previous research assumes that the adversarial image could be digitally fed into the authentication systems, this is not always the case for systems deployed in the real world. This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios. We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs in a physical domain attack setting. Using this attack strategy, the attack success rate goes up to 82.01%. We test AdvGen extensively on four datasets and ten state-of-the-art PADs. We also demonstrate the effectiveness of our attack by conducting experiments in a realistic, physical environment.
1802.10488
Pierre Laroche
Gais Alhadi, Imed Kacem, Pierre Laroche, and Izzeldin M. Osman
An Approximate Pareto Set for Minimizing the Maximum Lateness and Makespan on Parallel Machines
submitted to Sose 2018
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the two-parallel machines scheduling problem, with the aim of minimizing the maximum lateness and the makespan. Formally, the problem is defined as follows. We have to schedule a set J of n jobs on two identical machines. Each job i in J has a processing time p_i and a delivery time q_i. Each machine can only perform one job at a given time. The machines are available at time t=0 and each of them can process at most one job at a given time. The problem is to find a sequence of jobs, with the objective of minimizing the maximum lateness L_max and the makespan C_max. With no loss of generality, we consider that all data are integers and that jobs are indexed in non-increasing order of their delivery times: q_1 >= q_2 >= ... >= q_n. This paper proposes an exact algorithm (based on a dynamic programming) to generate the complete Pareto Frontier in a pseudo-polynomial time. Then, we present an FPTAS (Fully Polynomial Time Approximation Scheme) to generate an approximate Pareto Frontier, based on the conversion of the dynamic programming. The proposed FPTAS is strongly polynomial. Some numerical experiments are provided in order to compare the two proposed approaches.
[ { "created": "Wed, 28 Feb 2018 15:49:11 GMT", "version": "v1" } ]
2018-03-01
[ [ "Alhadi", "Gais", "" ], [ "Kacem", "Imed", "" ], [ "Laroche", "Pierre", "" ], [ "Osman", "Izzeldin M.", "" ] ]
We consider the two-parallel machines scheduling problem, with the aim of minimizing the maximum lateness and the makespan. Formally, the problem is defined as follows. We have to schedule a set J of n jobs on two identical machines. Each job i in J has a processing time p_i and a delivery time q_i. Each machine can only perform one job at a given time. The machines are available at time t=0 and each of them can process at most one job at a given time. The problem is to find a sequence of jobs, with the objective of minimizing the maximum lateness L_max and the makespan C_max. With no loss of generality, we consider that all data are integers and that jobs are indexed in non-increasing order of their delivery times: q_1 >= q_2 >= ... >= q_n. This paper proposes an exact algorithm (based on a dynamic programming) to generate the complete Pareto Frontier in a pseudo-polynomial time. Then, we present an FPTAS (Fully Polynomial Time Approximation Scheme) to generate an approximate Pareto Frontier, based on the conversion of the dynamic programming. The proposed FPTAS is strongly polynomial. Some numerical experiments are provided in order to compare the two proposed approaches.
2106.15821
Eduardo G. Altmann
Charles C. Hyland, Yuanming Tao, Lamiae Azizi, Martin Gerlach, Tiago P. Peixoto, and Eduardo G. Altmann
Multilayer Networks for Text Analysis with Multiple Data Types
17 pages, 6 figures
EPJ Data Science volume 10, Article number: 33 (2021)
10.1140/epjds/s13688-021-00288-5
null
cs.SI physics.soc-ph stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are interested in the widespread problem of clustering documents and finding topics in large collections of written documents in the presence of metadata and hyperlinks. To tackle the challenge of accounting for these different types of datasets, we propose a novel framework based on Multilayer Networks and Stochastic Block Models. The main innovation of our approach over other techniques is that it applies the same non-parametric probabilistic framework to the different sources of datasets simultaneously. The key difference to other multilayer complex networks is the strong unbalance between the layers, with the average degree of different node types scaling differently with system size. We show that the latter observation is due to generic properties of text, such as Heaps' law, and strongly affects the inference of communities. We present and discuss the performance of our method in different datasets (hundreds of Wikipedia documents, thousands of scientific papers, and thousands of E-mails) showing that taking into account multiple types of information provides a more nuanced view on topic- and document-clusters and increases the ability to predict missing links.
[ { "created": "Wed, 30 Jun 2021 05:47:39 GMT", "version": "v1" } ]
2021-07-01
[ [ "Hyland", "Charles C.", "" ], [ "Tao", "Yuanming", "" ], [ "Azizi", "Lamiae", "" ], [ "Gerlach", "Martin", "" ], [ "Peixoto", "Tiago P.", "" ], [ "Altmann", "Eduardo G.", "" ] ]
We are interested in the widespread problem of clustering documents and finding topics in large collections of written documents in the presence of metadata and hyperlinks. To tackle the challenge of accounting for these different types of datasets, we propose a novel framework based on Multilayer Networks and Stochastic Block Models. The main innovation of our approach over other techniques is that it applies the same non-parametric probabilistic framework to the different sources of datasets simultaneously. The key difference to other multilayer complex networks is the strong unbalance between the layers, with the average degree of different node types scaling differently with system size. We show that the latter observation is due to generic properties of text, such as Heaps' law, and strongly affects the inference of communities. We present and discuss the performance of our method in different datasets (hundreds of Wikipedia documents, thousands of scientific papers, and thousands of E-mails) showing that taking into account multiple types of information provides a more nuanced view on topic- and document-clusters and increases the ability to predict missing links.
1804.09196
Stephen Hagen
Edward D. Ramirez and Stephen J. Hagen
The quantitative measure and statistical distribution of fame
17 pages, 6 figures
null
10.1371/journal.pone.0200196
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fame and celebrity play an ever-increasing role in our culture. However, despite the cultural and economic importance of fame and its gradations, there exists no consensus method for quantifying the fame of an individual, or of comparing that of two individuals. We argue that, even if fame is difficult to measure with precision, one may develop useful metrics for fame that correlate well with intuition and that remain reasonably stable over time. Using datasets of recently deceased individuals who were highly renowned, we have evaluated several internet-based methods for quantifying fame. We find that some widely-used internet-derived metrics, such as search engine results, correlate poorly with human subject judgments of fame. However other metrics exist that agree well with human judgments and appear to offer workable, easily accessible measures of fame. Using such a metric we perform a preliminary investigation of the statistical distribution of fame, which has some of the power law character seen in other natural and social phenomena such as landslides and market crashes. In order to demonstrate how such findings can generate quantitative insight into celebrity culture, we assess some folk ideas regarding the frequency distribution and apparent clustering of celebrity deaths.
[ { "created": "Tue, 24 Apr 2018 18:18:23 GMT", "version": "v1" } ]
2018-09-05
[ [ "Ramirez", "Edward D.", "" ], [ "Hagen", "Stephen J.", "" ] ]
Fame and celebrity play an ever-increasing role in our culture. However, despite the cultural and economic importance of fame and its gradations, there exists no consensus method for quantifying the fame of an individual, or of comparing that of two individuals. We argue that, even if fame is difficult to measure with precision, one may develop useful metrics for fame that correlate well with intuition and that remain reasonably stable over time. Using datasets of recently deceased individuals who were highly renowned, we have evaluated several internet-based methods for quantifying fame. We find that some widely-used internet-derived metrics, such as search engine results, correlate poorly with human subject judgments of fame. However other metrics exist that agree well with human judgments and appear to offer workable, easily accessible measures of fame. Using such a metric we perform a preliminary investigation of the statistical distribution of fame, which has some of the power law character seen in other natural and social phenomena such as landslides and market crashes. In order to demonstrate how such findings can generate quantitative insight into celebrity culture, we assess some folk ideas regarding the frequency distribution and apparent clustering of celebrity deaths.
2103.05062
Fr\'ed\'eric Lemoine PhD
Fr\'ed\'eric Lemoine (1), Tatiana Aubonnet (1 and 2) and No\"emie Simoni (2) ((1) Conservatoire National des Arts et M\'etiers (CNAM), CEDRIC, Paris, France., (2) T\'el\'ecom-Paris, Palaiseau, France.)
Self-Assemble-Featured Internet of Things
21 pages, 16 figures
Future Generation Computer Systems 112 (2020) 41-57
10.1016/j.future.2020.05.012
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Internet of Things supports various industrial applications. The cooperation and coordination of smart things are a promising strategy for satisfying requirements that are beyond the capacity of a single smart thing. One of the major challenges for today's software engineering is the management of large and complex computing systems characterized by a high degree of physical distribution. Examples of such systems arise in many application domains. The number of connected devices grows from billions to hundreds of billions, so a maximum of automatisms must be integrated in IoT architectures in order to control and manage them. Software architects migrate to service oriented architecture and applications are now being constructed as service compositions. Since each IoT device includes one or more microservices, the increasing number of devices around the user makes them difficult to assemble in order to achieve a common goal. In this paper, we propose a self-assembling solution based on self-controlled service components taking into account non-functional requirements concerning the offered quality of services and the structuration of the resulting assembly. Its aim is to build and maintain an assembly of services (taking into account arrival of new peers or failure of existing ones) that, besides functional requirements, also fulfils global quality-of-service and structural requirements.
[ { "created": "Mon, 8 Mar 2021 20:37:45 GMT", "version": "v1" } ]
2021-03-10
[ [ "Lemoine", "Frédéric", "", "1 and 2" ], [ "Aubonnet", "Tatiana", "", "1 and 2" ], [ "Simoni", "Noëmie", "" ] ]
The Internet of Things supports various industrial applications. The cooperation and coordination of smart things are a promising strategy for satisfying requirements that are beyond the capacity of a single smart thing. One of the major challenges for today's software engineering is the management of large and complex computing systems characterized by a high degree of physical distribution. Examples of such systems arise in many application domains. The number of connected devices grows from billions to hundreds of billions, so a maximum of automatisms must be integrated in IoT architectures in order to control and manage them. Software architects migrate to service oriented architecture and applications are now being constructed as service compositions. Since each IoT device includes one or more microservices, the increasing number of devices around the user makes them difficult to assemble in order to achieve a common goal. In this paper, we propose a self-assembling solution based on self-controlled service components taking into account non-functional requirements concerning the offered quality of services and the structuration of the resulting assembly. Its aim is to build and maintain an assembly of services (taking into account arrival of new peers or failure of existing ones) that, besides functional requirements, also fulfils global quality-of-service and structural requirements.
2007.00266
Ben Bogin
Ben Bogin, Sanjay Subramanian, Matt Gardner, Jonathan Berant
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2020. Author's final version
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples. In this work, we propose a model that computes a representation and denotation for all question spans in a bottom-up, compositional manner using a CKY-style parser. Our model induces latent trees, driven by end-to-end (the answer) supervision only. We show that this inductive bias towards tree structures dramatically improves systematic generalization to out-of-distribution examples, compared to strong baselines on an arithmetic expressions benchmark as well as on CLOSURE, a dataset that focuses on systematic generalization for grounded question answering. On this challenging dataset, our model reaches an accuracy of 96.1%, significantly higher than prior models that almost perfectly solve the task on a random, in-distribution split.
[ { "created": "Wed, 1 Jul 2020 06:22:51 GMT", "version": "v1" }, { "created": "Mon, 9 Nov 2020 14:35:21 GMT", "version": "v2" }, { "created": "Tue, 10 Nov 2020 06:22:21 GMT", "version": "v3" } ]
2020-11-11
[ [ "Bogin", "Ben", "" ], [ "Subramanian", "Sanjay", "" ], [ "Gardner", "Matt", "" ], [ "Berant", "Jonathan", "" ] ]
Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples. In this work, we propose a model that computes a representation and denotation for all question spans in a bottom-up, compositional manner using a CKY-style parser. Our model induces latent trees, driven by end-to-end (the answer) supervision only. We show that this inductive bias towards tree structures dramatically improves systematic generalization to out-of-distribution examples, compared to strong baselines on an arithmetic expressions benchmark as well as on CLOSURE, a dataset that focuses on systematic generalization for grounded question answering. On this challenging dataset, our model reaches an accuracy of 96.1%, significantly higher than prior models that almost perfectly solve the task on a random, in-distribution split.
2312.15742
Xiang Li
Li Xiang and Junbo Yin and Wei Li and Cheng-Zhong Xu and Ruigang Yang and Jianbing Shen
DI-V2X: Learning Domain-Invariant Representation for Vehicle-Infrastructure Collaborative 3D Object Detection
aaai2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle-to-Everything (V2X) collaborative perception has recently gained significant attention due to its capability to enhance scene understanding by integrating information from various agents, e.g., vehicles, and infrastructure. However, current works often treat the information from each agent equally, ignoring the inherent domain gap caused by the utilization of different LiDAR sensors of each agent, thus leading to suboptimal performance. In this paper, we propose DI-V2X, that aims to learn Domain-Invariant representations through a new distillation framework to mitigate the domain discrepancy in the context of V2X 3D object detection. DI-V2X comprises three essential components: a domain-mixing instance augmentation (DMA) module, a progressive domain-invariant distillation (PDD) module, and a domain-adaptive fusion (DAF) module. Specifically, DMA builds a domain-mixing 3D instance bank for the teacher and student models during training, resulting in aligned data representation. Next, PDD encourages the student models from different domains to gradually learn a domain-invariant feature representation towards the teacher, where the overlapping regions between agents are employed as guidance to facilitate the distillation process. Furthermore, DAF closes the domain gap between the students by incorporating calibration-aware domain-adaptive attention. Extensive experiments on the challenging DAIR-V2X and V2XSet benchmark datasets demonstrate DI-V2X achieves remarkable performance, outperforming all the previous V2X models. Code is available at https://github.com/Serenos/DI-V2X
[ { "created": "Mon, 25 Dec 2023 14:40:46 GMT", "version": "v1" } ]
2023-12-27
[ [ "Xiang", "Li", "" ], [ "Yin", "Junbo", "" ], [ "Li", "Wei", "" ], [ "Xu", "Cheng-Zhong", "" ], [ "Yang", "Ruigang", "" ], [ "Shen", "Jianbing", "" ] ]
Vehicle-to-Everything (V2X) collaborative perception has recently gained significant attention due to its capability to enhance scene understanding by integrating information from various agents, e.g., vehicles, and infrastructure. However, current works often treat the information from each agent equally, ignoring the inherent domain gap caused by the utilization of different LiDAR sensors of each agent, thus leading to suboptimal performance. In this paper, we propose DI-V2X, that aims to learn Domain-Invariant representations through a new distillation framework to mitigate the domain discrepancy in the context of V2X 3D object detection. DI-V2X comprises three essential components: a domain-mixing instance augmentation (DMA) module, a progressive domain-invariant distillation (PDD) module, and a domain-adaptive fusion (DAF) module. Specifically, DMA builds a domain-mixing 3D instance bank for the teacher and student models during training, resulting in aligned data representation. Next, PDD encourages the student models from different domains to gradually learn a domain-invariant feature representation towards the teacher, where the overlapping regions between agents are employed as guidance to facilitate the distillation process. Furthermore, DAF closes the domain gap between the students by incorporating calibration-aware domain-adaptive attention. Extensive experiments on the challenging DAIR-V2X and V2XSet benchmark datasets demonstrate DI-V2X achieves remarkable performance, outperforming all the previous V2X models. Code is available at https://github.com/Serenos/DI-V2X
2110.15548
Tatpong Katanyukul
Pisit Nakjai and Jiradej Ponsawat and Tatpong Katanyukul
Latent Cognizance: What Machine Really Learns
6 pages
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite overwhelming achievements in recognition accuracy, extending an open-set capability -- ability to identify when the question is out of scope -- remains greatly challenging in a scalable machine learning inference. A recent research has discovered Latent Cognizance (LC) -- an insight on a recognition mechanism based on a new probabilistic interpretation, Bayesian theorem, and an analysis of an internal structure of a commonly-used recognition inference structure. The new interpretation emphasizes a latent assumption of an overlooked probabilistic condition on a learned inference model. Viability of LC has been shown on a task of sign language recognition, but its potential and implication can reach far beyond a specific domain and can move object recognition toward a scalable open-set recognition. However, LC new probabilistic interpretation has not been directly investigated. This article investigates the new interpretation under a traceable context. Our findings support the rationale on which LC is based and reveal a hidden mechanism underlying the learning classification inference. The ramification of these findings could lead to a simple yet effective solution to an open-set recognition.
[ { "created": "Fri, 29 Oct 2021 05:26:38 GMT", "version": "v1" } ]
2021-11-01
[ [ "Nakjai", "Pisit", "" ], [ "Ponsawat", "Jiradej", "" ], [ "Katanyukul", "Tatpong", "" ] ]
Despite overwhelming achievements in recognition accuracy, extending an open-set capability -- ability to identify when the question is out of scope -- remains greatly challenging in a scalable machine learning inference. A recent research has discovered Latent Cognizance (LC) -- an insight on a recognition mechanism based on a new probabilistic interpretation, Bayesian theorem, and an analysis of an internal structure of a commonly-used recognition inference structure. The new interpretation emphasizes a latent assumption of an overlooked probabilistic condition on a learned inference model. Viability of LC has been shown on a task of sign language recognition, but its potential and implication can reach far beyond a specific domain and can move object recognition toward a scalable open-set recognition. However, LC new probabilistic interpretation has not been directly investigated. This article investigates the new interpretation under a traceable context. Our findings support the rationale on which LC is based and reveal a hidden mechanism underlying the learning classification inference. The ramification of these findings could lead to a simple yet effective solution to an open-set recognition.
2401.00788
Terry Yue Zhuo
Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, Niklas Muennighoff
Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models
25 pages (12 main), 19 figures, 8 tables
null
null
null
cs.CL cs.AI cs.SE
http://creativecommons.org/licenses/by/4.0/
The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear which methods provide the best cost-performance trade-off at different model scales. We introduce Astraios, a suite of 28 instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up to 16 billion parameters. Through investigations across 5 tasks and 8 different datasets encompassing both code comprehension and code generation tasks, we find that FFT generally leads to the best downstream performance across all scales, and PEFT methods differ significantly in their efficacy based on the model scale. LoRA usually offers the most favorable trade-off between cost and performance. Further investigation into the effects of these methods on both model robustness and code security reveals that larger models tend to demonstrate reduced robustness and less security. At last, we explore the relationships among updated parameters, cross-entropy loss, and task performance. We find that the tuning effectiveness observed in small models generalizes well to larger models, and the validation loss in instruction tuning can be a reliable indicator of overall downstream performance.
[ { "created": "Mon, 1 Jan 2024 15:30:19 GMT", "version": "v1" } ]
2024-01-02
[ [ "Zhuo", "Terry Yue", "" ], [ "Zebaze", "Armel", "" ], [ "Suppattarachai", "Nitchakarn", "" ], [ "von Werra", "Leandro", "" ], [ "de Vries", "Harm", "" ], [ "Liu", "Qian", "" ], [ "Muennighoff", "Niklas", "" ] ]
The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear which methods provide the best cost-performance trade-off at different model scales. We introduce Astraios, a suite of 28 instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up to 16 billion parameters. Through investigations across 5 tasks and 8 different datasets encompassing both code comprehension and code generation tasks, we find that FFT generally leads to the best downstream performance across all scales, and PEFT methods differ significantly in their efficacy based on the model scale. LoRA usually offers the most favorable trade-off between cost and performance. Further investigation into the effects of these methods on both model robustness and code security reveals that larger models tend to demonstrate reduced robustness and less security. At last, we explore the relationships among updated parameters, cross-entropy loss, and task performance. We find that the tuning effectiveness observed in small models generalizes well to larger models, and the validation loss in instruction tuning can be a reliable indicator of overall downstream performance.
1609.04051
Nika Haghtalab
Avrim Blum, Ioannis Caragiannis, Nika Haghtalab, Ariel D. Procaccia, Eviatar B. Procaccia, Rohit Vaish
Opting Into Optimal Matchings
null
null
null
null
cs.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the problem of designing optimal, individually rational matching mechanisms (in a general sense, allowing for cycles in directed graphs), where each player --- who is associated with a subset of vertices --- matches as many of his own vertices when he opts into the matching mechanism as when he opts out. We offer a new perspective on this problem by considering an arbitrary graph, but assuming that vertices are associated with players at random. Our main result asserts that, under certain conditions, any fixed optimal matching is likely to be individually rational up to lower-order terms. We also show that a simple and practical mechanism is (fully) individually rational, and likely to be optimal up to lower-order terms. We discuss the implications of our results for market design in general, and kidney exchange in particular.
[ { "created": "Tue, 13 Sep 2016 21:04:31 GMT", "version": "v1" } ]
2016-09-15
[ [ "Blum", "Avrim", "" ], [ "Caragiannis", "Ioannis", "" ], [ "Haghtalab", "Nika", "" ], [ "Procaccia", "Ariel D.", "" ], [ "Procaccia", "Eviatar B.", "" ], [ "Vaish", "Rohit", "" ] ]
We revisit the problem of designing optimal, individually rational matching mechanisms (in a general sense, allowing for cycles in directed graphs), where each player --- who is associated with a subset of vertices --- matches as many of his own vertices when he opts into the matching mechanism as when he opts out. We offer a new perspective on this problem by considering an arbitrary graph, but assuming that vertices are associated with players at random. Our main result asserts that, under certain conditions, any fixed optimal matching is likely to be individually rational up to lower-order terms. We also show that a simple and practical mechanism is (fully) individually rational, and likely to be optimal up to lower-order terms. We discuss the implications of our results for market design in general, and kidney exchange in particular.
2007.13250
Yichen Zhang
Yichen Zhang and Jianzhe Liu and Feng Qiu and Tianqi Hong and Rui Yao
Deep Active Learning for Solvability Prediction in Power Systems
null
null
null
null
cs.LG cs.SY eess.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional methods for solvability region analysis can only have inner approximations with inconclusive conservatism. Machine learning methods have been proposed to approach the real region. In this letter, we propose a deep active learning framework for power system solvability prediction. Compared with the passive learning methods where the training is performed after all instances are labeled, the active learning selects most informative instances to be label and therefore significantly reduce the size of labeled dataset for training. In the active learning framework, the acquisition functions, which correspond to different sampling strategies, are defined in terms of the on-the-fly posterior probability from the classifier. The IEEE 39-bus system is employed to validate the proposed framework, where a two-dimensional case is illustrated to visualize the effectiveness of the sampling method followed by the full-dimensional numerical experiments.
[ { "created": "Mon, 27 Jul 2020 00:13:09 GMT", "version": "v1" }, { "created": "Tue, 22 Dec 2020 07:30:08 GMT", "version": "v2" } ]
2020-12-23
[ [ "Zhang", "Yichen", "" ], [ "Liu", "Jianzhe", "" ], [ "Qiu", "Feng", "" ], [ "Hong", "Tianqi", "" ], [ "Yao", "Rui", "" ] ]
Traditional methods for solvability region analysis can only have inner approximations with inconclusive conservatism. Machine learning methods have been proposed to approach the real region. In this letter, we propose a deep active learning framework for power system solvability prediction. Compared with the passive learning methods where the training is performed after all instances are labeled, the active learning selects most informative instances to be label and therefore significantly reduce the size of labeled dataset for training. In the active learning framework, the acquisition functions, which correspond to different sampling strategies, are defined in terms of the on-the-fly posterior probability from the classifier. The IEEE 39-bus system is employed to validate the proposed framework, where a two-dimensional case is illustrated to visualize the effectiveness of the sampling method followed by the full-dimensional numerical experiments.
2003.13667
Karim Banawan
Sajani Vithana and Karim Banawan and Sennur Ulukus
Semantic Private Information Retrieval
submitted for publication
null
null
null
cs.IT cs.CR eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the problem of semantic private information retrieval (semantic PIR). In semantic PIR, a user retrieves a message out of $K$ independent messages stored in $N$ replicated and non-colluding databases without revealing the identity of the desired message to any individual database. The messages come with \emph{different semantics}, i.e., the messages are allowed to have \emph{non-uniform a priori probabilities} denoted by $(p_i>0,\: i \in [K])$, which are a proxy for their respective popularity of retrieval, and \emph{arbitrary message sizes} $(L_i,\: i \in [K])$. This is a generalization of the classical private information retrieval (PIR) problem, where messages are assumed to have equal a priori probabilities and equal message sizes. We derive the semantic PIR capacity for general $K$, $N$. The results show that the semantic PIR capacity depends on the number of databases $N$, the number of messages $K$, the a priori probability distribution of messages $p_i$, and the message sizes $L_i$. We present two achievable semantic PIR schemes: The first one is a deterministic scheme which is based on message asymmetry. This scheme employs non-uniform subpacketization. The second scheme is probabilistic and is based on choosing one query set out of multiple options at random to retrieve the required message without the need for exponential subpacketization. We derive necessary and sufficient conditions for the semantic PIR capacity to exceed the classical PIR capacity with equal priors and sizes. Our results show that the semantic PIR capacity can be larger than the classical PIR capacity when longer messages have higher popularities. However, when messages are equal-length, the non-uniform priors cannot be exploited to improve the retrieval rate over the classical PIR capacity.
[ { "created": "Mon, 30 Mar 2020 17:51:57 GMT", "version": "v1" } ]
2020-03-31
[ [ "Vithana", "Sajani", "" ], [ "Banawan", "Karim", "" ], [ "Ulukus", "Sennur", "" ] ]
We investigate the problem of semantic private information retrieval (semantic PIR). In semantic PIR, a user retrieves a message out of $K$ independent messages stored in $N$ replicated and non-colluding databases without revealing the identity of the desired message to any individual database. The messages come with \emph{different semantics}, i.e., the messages are allowed to have \emph{non-uniform a priori probabilities} denoted by $(p_i>0,\: i \in [K])$, which are a proxy for their respective popularity of retrieval, and \emph{arbitrary message sizes} $(L_i,\: i \in [K])$. This is a generalization of the classical private information retrieval (PIR) problem, where messages are assumed to have equal a priori probabilities and equal message sizes. We derive the semantic PIR capacity for general $K$, $N$. The results show that the semantic PIR capacity depends on the number of databases $N$, the number of messages $K$, the a priori probability distribution of messages $p_i$, and the message sizes $L_i$. We present two achievable semantic PIR schemes: The first one is a deterministic scheme which is based on message asymmetry. This scheme employs non-uniform subpacketization. The second scheme is probabilistic and is based on choosing one query set out of multiple options at random to retrieve the required message without the need for exponential subpacketization. We derive necessary and sufficient conditions for the semantic PIR capacity to exceed the classical PIR capacity with equal priors and sizes. Our results show that the semantic PIR capacity can be larger than the classical PIR capacity when longer messages have higher popularities. However, when messages are equal-length, the non-uniform priors cannot be exploited to improve the retrieval rate over the classical PIR capacity.
2403.13841
Zhongqi Yang
Zhongqi Yang, Yuning Wang, Ken S. Yamashita, Maryam Sabah, Elahe Khatibi, Iman Azimi, Nikil Dutt, Jessica L. Borelli, and Amir M. Rahmani
Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting
Accepted by Connected Health: Applications, Systems and Engineering Technologies (CHASE) 2024
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Emotional states, as indicators of affect, are pivotal to overall health, making their accurate prediction before onset crucial. Current studies are primarily centered on immediate short-term affect detection using data from wearable and mobile devices. These studies typically focus on objective sensory measures, often neglecting other forms of self-reported information like diaries and notes. In this paper, we propose a multimodal deep learning model for affect status forecasting. This model combines a transformer encoder with a pre-trained language model, facilitating the integrated analysis of objective metrics and self-reported diaries. To validate our model, we conduct a longitudinal study, enrolling college students and monitoring them over a year, to collect an extensive dataset including physiological, environmental, sleep, metabolic, and physical activity parameters, alongside open-ended textual diaries provided by the participants. Our results demonstrate that the proposed model achieves predictive accuracy of 82.50% for positive affect and 82.76% for negative affect, a full week in advance. The effectiveness of our model is further elevated by its explainability.
[ { "created": "Sat, 16 Mar 2024 17:24:38 GMT", "version": "v1" }, { "created": "Sat, 23 Mar 2024 18:27:49 GMT", "version": "v2" } ]
2024-03-26
[ [ "Yang", "Zhongqi", "" ], [ "Wang", "Yuning", "" ], [ "Yamashita", "Ken S.", "" ], [ "Sabah", "Maryam", "" ], [ "Khatibi", "Elahe", "" ], [ "Azimi", "Iman", "" ], [ "Dutt", "Nikil", "" ], [ "Borelli", "Jessica L.", "" ], [ "Rahmani", "Amir M.", "" ] ]
Emotional states, as indicators of affect, are pivotal to overall health, making their accurate prediction before onset crucial. Current studies are primarily centered on immediate short-term affect detection using data from wearable and mobile devices. These studies typically focus on objective sensory measures, often neglecting other forms of self-reported information like diaries and notes. In this paper, we propose a multimodal deep learning model for affect status forecasting. This model combines a transformer encoder with a pre-trained language model, facilitating the integrated analysis of objective metrics and self-reported diaries. To validate our model, we conduct a longitudinal study, enrolling college students and monitoring them over a year, to collect an extensive dataset including physiological, environmental, sleep, metabolic, and physical activity parameters, alongside open-ended textual diaries provided by the participants. Our results demonstrate that the proposed model achieves predictive accuracy of 82.50% for positive affect and 82.76% for negative affect, a full week in advance. The effectiveness of our model is further elevated by its explainability.
2101.08024
Zhonghao Zhang
Zhonghao Zhang and Yipeng Liu and Xingyu Cao and Fei Wen and Ce Zhu
Scalable Deep Compressive Sensing
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has been used to image compressive sensing (CS) for enhanced reconstruction performance. However, most existing deep learning methods train different models for different subsampling ratios, which brings additional hardware burden. In this paper, we develop a general framework named scalable deep compressive sensing (SDCS) for the scalable sampling and reconstruction (SSR) of all existing end-to-end-trained models. In the proposed way, images are measured and initialized linearly. Two sampling masks are introduced to flexibly control the subsampling ratios used in sampling and reconstruction, respectively. To make the reconstruction model adapt to any subsampling ratio, a training strategy dubbed scalable training is developed. In scalable training, the model is trained with the sampling matrix and the initialization matrix at various subsampling ratios by integrating different sampling matrix masks. Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and SDCS outperforms other SSR methods.
[ { "created": "Wed, 20 Jan 2021 08:42:50 GMT", "version": "v1" }, { "created": "Fri, 22 Jan 2021 02:53:03 GMT", "version": "v2" } ]
2021-01-25
[ [ "Zhang", "Zhonghao", "" ], [ "Liu", "Yipeng", "" ], [ "Cao", "Xingyu", "" ], [ "Wen", "Fei", "" ], [ "Zhu", "Ce", "" ] ]
Deep learning has been used to image compressive sensing (CS) for enhanced reconstruction performance. However, most existing deep learning methods train different models for different subsampling ratios, which brings additional hardware burden. In this paper, we develop a general framework named scalable deep compressive sensing (SDCS) for the scalable sampling and reconstruction (SSR) of all existing end-to-end-trained models. In the proposed way, images are measured and initialized linearly. Two sampling masks are introduced to flexibly control the subsampling ratios used in sampling and reconstruction, respectively. To make the reconstruction model adapt to any subsampling ratio, a training strategy dubbed scalable training is developed. In scalable training, the model is trained with the sampling matrix and the initialization matrix at various subsampling ratios by integrating different sampling matrix masks. Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and SDCS outperforms other SSR methods.
2012.06881
Muhammad Fayaz
Muhammad Fayaz, Wenqiang Yi, Yuanwei Liu, and Arumugam Nallanathan
Transmit Power Pool Design for Grant-Free NOMA-IoT Networks via Deep Reinforcement Learning
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grant-free non-orthogonal multiple access (GF-NOMA) is a potential multiple access framework for short-packet internet-of-things (IoT) networks to enhance connectivity. However, the resource allocation problem in GF-NOMA is challenging due to the absence of closed-loop power control. We design a prototype of transmit power pool (PP) to provide open-loop power control. IoT users acquire their transmit power in advance from this prototype PP solely according to their communication distances. Firstly, a multi-agent deep Q-network (DQN) aided GF-NOMA algorithm is proposed to determine the optimal transmit power levels for the prototype PP. More specifically, each IoT user acts as an agent and learns a policy by interacting with the wireless environment that guides them to select optimal actions. Secondly, to prevent the Q-learning model overestimation problem, double DQN based GF-NOMA algorithm is proposed. Numerical results confirm that the double DQN based algorithm finds out the optimal transmit power levels that form the PP. Comparing with the conventional online learning approach, the proposed algorithm with the prototype PP converges faster under changing environments due to limiting the action space based on previous learning. The considered GF-NOMA system outperforms the networks with fixed transmission power, namely all the users have the same transmit power and the traditional GF with orthogonal multiple access techniques, in terms of throughput.
[ { "created": "Sat, 12 Dec 2020 18:26:55 GMT", "version": "v1" }, { "created": "Thu, 3 Jun 2021 10:05:13 GMT", "version": "v2" } ]
2021-06-04
[ [ "Fayaz", "Muhammad", "" ], [ "Yi", "Wenqiang", "" ], [ "Liu", "Yuanwei", "" ], [ "Nallanathan", "Arumugam", "" ] ]
Grant-free non-orthogonal multiple access (GF-NOMA) is a potential multiple access framework for short-packet internet-of-things (IoT) networks to enhance connectivity. However, the resource allocation problem in GF-NOMA is challenging due to the absence of closed-loop power control. We design a prototype of transmit power pool (PP) to provide open-loop power control. IoT users acquire their transmit power in advance from this prototype PP solely according to their communication distances. Firstly, a multi-agent deep Q-network (DQN) aided GF-NOMA algorithm is proposed to determine the optimal transmit power levels for the prototype PP. More specifically, each IoT user acts as an agent and learns a policy by interacting with the wireless environment that guides them to select optimal actions. Secondly, to prevent the Q-learning model overestimation problem, double DQN based GF-NOMA algorithm is proposed. Numerical results confirm that the double DQN based algorithm finds out the optimal transmit power levels that form the PP. Comparing with the conventional online learning approach, the proposed algorithm with the prototype PP converges faster under changing environments due to limiting the action space based on previous learning. The considered GF-NOMA system outperforms the networks with fixed transmission power, namely all the users have the same transmit power and the traditional GF with orthogonal multiple access techniques, in terms of throughput.
1801.07145
Eric Alcaide
Eric Alcaide
E-swish: Adjusting Activations to Different Network Depths
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activation functions have a notorious impact on neural networks on both training and testing the models against the desired problem. Currently, the most used activation function is the Rectified Linear Unit (ReLU). This paper introduces a new and novel activation function, closely related with the new activation $Swish = x * sigmoid(x)$ (Ramachandran et al., 2017) which generalizes it. We call the new activation $E-swish = \beta x * sigmoid(x)$. We show that E-swish outperforms many other well-known activations including both ReLU and Swish. For example, using E-swish provided 1.5% and 4.6% accuracy improvements on Cifar10 and Cifar100 respectively for the WRN 10-2 when compared to ReLU and 0.35% and 0.6% respectively when compared to Swish. The code to reproduce all our experiments can be found at https://github.com/EricAlcaide/E-swish
[ { "created": "Mon, 22 Jan 2018 15:40:29 GMT", "version": "v1" } ]
2018-01-23
[ [ "Alcaide", "Eric", "" ] ]
Activation functions have a notorious impact on neural networks on both training and testing the models against the desired problem. Currently, the most used activation function is the Rectified Linear Unit (ReLU). This paper introduces a new and novel activation function, closely related with the new activation $Swish = x * sigmoid(x)$ (Ramachandran et al., 2017) which generalizes it. We call the new activation $E-swish = \beta x * sigmoid(x)$. We show that E-swish outperforms many other well-known activations including both ReLU and Swish. For example, using E-swish provided 1.5% and 4.6% accuracy improvements on Cifar10 and Cifar100 respectively for the WRN 10-2 when compared to ReLU and 0.35% and 0.6% respectively when compared to Swish. The code to reproduce all our experiments can be found at https://github.com/EricAlcaide/E-swish
1908.03106
Jesse Hoey
Jesse Hoey and Neil J. MacKinnon
"Conservatives Overfit, Liberals Underfit": The Social-Psychological Control of Affect and Uncertainty
This is an extended version of the paper presented at SE-THEMOS workshop at ACII 2019 in Cambridge England. Version 2 and 3 of this article added sections on reinforcement learning(2.6 and 5.6), and a section on neuroscience and the relation between cognition and affect (2.4)
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological theory. We introduce a revised BayesAct model that more deeply integrates social-psychological theorising, and we demonstrate a component of the model as being sufficient to account for cognitive biases about fairness, dissonance and conformity. We show how the model can unify different exploration strategies in reinforcement learning.
[ { "created": "Thu, 8 Aug 2019 15:04:52 GMT", "version": "v1" }, { "created": "Fri, 16 Aug 2019 01:19:23 GMT", "version": "v2" }, { "created": "Sun, 1 Sep 2019 11:21:51 GMT", "version": "v3" } ]
2019-09-04
[ [ "Hoey", "Jesse", "" ], [ "MacKinnon", "Neil J.", "" ] ]
The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological theory. We introduce a revised BayesAct model that more deeply integrates social-psychological theorising, and we demonstrate a component of the model as being sufficient to account for cognitive biases about fairness, dissonance and conformity. We show how the model can unify different exploration strategies in reinforcement learning.
1211.2065
Lingyang Song
Chen Xu, Lingyang Song, Zhu Han, Qun Zhao, Xiaoli Wang, Xiang Cheng, and Bingli Jiao
Efficiency Resource Allocation for Device-to-Device Underlay Communication Systems: A Reverse Iterative Combinatorial Auction Based Approach
26 pages, 6 fgures; IEEE Journals on Selected Areas in Communications, 2012
null
10.1109/JSAC.2013.SUP.0513031
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peer-to-peer communication has been recently considered as a popular issue for local area services. An innovative resource allocation scheme is proposed to improve the performance of mobile peer-to-peer, i.e., device-to-device (D2D), communications as an underlay in the downlink (DL) cellular networks. To optimize the system sum rate over the resource sharing of both D2D and cellular modes, we introduce a reverse iterative combinatorial auction as the allocation mechanism. In the auction, all the spectrum resources are considered as a set of resource units, which as bidders compete to obtain business while the packages of the D2D pairs are auctioned off as goods in each auction round. We first formulate the valuation of each resource unit, as a basis of the proposed auction. And then a detailed non-monotonic descending price auction algorithm is explained depending on the utility function that accounts for the channel gain from D2D and the costs for the system. Further, we prove that the proposed auction-based scheme is cheat-proof, and converges in a finite number of iteration rounds. We explain non-monotonicity in the price update process and show lower complexity compared to a traditional combinatorial allocation. The simulation results demonstrate that the algorithm efficiently leads to a good performance on the system sum rate.
[ { "created": "Fri, 9 Nov 2012 07:59:50 GMT", "version": "v1" } ]
2016-11-17
[ [ "Xu", "Chen", "" ], [ "Song", "Lingyang", "" ], [ "Han", "Zhu", "" ], [ "Zhao", "Qun", "" ], [ "Wang", "Xiaoli", "" ], [ "Cheng", "Xiang", "" ], [ "Jiao", "Bingli", "" ] ]
Peer-to-peer communication has been recently considered as a popular issue for local area services. An innovative resource allocation scheme is proposed to improve the performance of mobile peer-to-peer, i.e., device-to-device (D2D), communications as an underlay in the downlink (DL) cellular networks. To optimize the system sum rate over the resource sharing of both D2D and cellular modes, we introduce a reverse iterative combinatorial auction as the allocation mechanism. In the auction, all the spectrum resources are considered as a set of resource units, which as bidders compete to obtain business while the packages of the D2D pairs are auctioned off as goods in each auction round. We first formulate the valuation of each resource unit, as a basis of the proposed auction. And then a detailed non-monotonic descending price auction algorithm is explained depending on the utility function that accounts for the channel gain from D2D and the costs for the system. Further, we prove that the proposed auction-based scheme is cheat-proof, and converges in a finite number of iteration rounds. We explain non-monotonicity in the price update process and show lower complexity compared to a traditional combinatorial allocation. The simulation results demonstrate that the algorithm efficiently leads to a good performance on the system sum rate.
2201.08354
Wolfgang Fuhl
Wolfgang Fuhl
HPCGen: Hierarchical K-Means Clustering and Level Based Principal Components for Scan Path Genaration
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a new approach for decomposing scan paths and its utility for generating new scan paths. For this purpose, we use the K-Means clustering procedure to the raw gaze data and subsequently iteratively to find more clusters in the found clusters. The found clusters are grouped for each level in the hierarchy, and the most important principal components are computed from the data contained in them. Using this tree hierarchy and the principal components, new scan paths can be generated that match the human behavior of the original data. We show that this generated data is very useful for generating new data for scan path classification but can also be used to generate fake scan paths.
[ { "created": "Wed, 19 Jan 2022 16:22:57 GMT", "version": "v1" } ]
2022-01-21
[ [ "Fuhl", "Wolfgang", "" ] ]
In this paper, we present a new approach for decomposing scan paths and its utility for generating new scan paths. For this purpose, we use the K-Means clustering procedure to the raw gaze data and subsequently iteratively to find more clusters in the found clusters. The found clusters are grouped for each level in the hierarchy, and the most important principal components are computed from the data contained in them. Using this tree hierarchy and the principal components, new scan paths can be generated that match the human behavior of the original data. We show that this generated data is very useful for generating new data for scan path classification but can also be used to generate fake scan paths.
2102.00382
Ruchit Agrawal
Ruchit Agrawal, Daniel Wolff, Simon Dixon
Structure-Aware Audio-to-Score Alignment using Progressively Dilated Convolutional Neural Networks
ICASSP 2021 camera-ready version. Copyrights belong to IEEE
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
The identification of structural differences between a music performance and the score is a challenging yet integral step of audio-to-score alignment, an important subtask of music information retrieval. We present a novel method to detect such differences between the score and performance for a given piece of music using progressively dilated convolutional neural networks. Our method incorporates varying dilation rates at different layers to capture both short-term and long-term context, and can be employed successfully in the presence of limited annotated data. We conduct experiments on audio recordings of real performances that differ structurally from the score, and our results demonstrate that our models outperform standard methods for structure-aware audio-to-score alignment.
[ { "created": "Sun, 31 Jan 2021 05:14:58 GMT", "version": "v1" }, { "created": "Sun, 14 Feb 2021 04:52:40 GMT", "version": "v2" } ]
2021-02-16
[ [ "Agrawal", "Ruchit", "" ], [ "Wolff", "Daniel", "" ], [ "Dixon", "Simon", "" ] ]
The identification of structural differences between a music performance and the score is a challenging yet integral step of audio-to-score alignment, an important subtask of music information retrieval. We present a novel method to detect such differences between the score and performance for a given piece of music using progressively dilated convolutional neural networks. Our method incorporates varying dilation rates at different layers to capture both short-term and long-term context, and can be employed successfully in the presence of limited annotated data. We conduct experiments on audio recordings of real performances that differ structurally from the score, and our results demonstrate that our models outperform standard methods for structure-aware audio-to-score alignment.
1310.2748
Ramin Khalili
Juhoon Kim, Ramin Khalili, Anja Feldmann, Yung-Chih Chen, Don Towsley
Multi-Source Multi-Path HTTP (mHTTP): A Proposal
12 pages
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, most devices have multiple network interfaces. Coupled with wide-spread replication of popular content at multiple locations, this provides substantial path diversity in the Internet. We propose Multi-source Multipath HTTP, mHTTP, which takes advantage of all existing types of path diversity in the Internet. mHTTP needs only client-side but not server-side or network modifications as it is a receiver-oriented mechanism. Moreover, the modifications are restricted to the socket interface. Thus, no changes are needed to the applications or to the kernel. As mHTTP relies on HTTP range requests, it is specific to HTTP which accounts for more than 60% of the Internet traffic. We implement mHTTP and study its performance by conducting measurements over a testbed and in the wild. Our results show that mHTTP indeed takes advantage of all types of path diversity in the Internet, and that it is a viable alternative to Multipath TCP for HTTP traffic. mHTTP decreases download times for large objects up to 50%, whereas it does no harm to small object downloads.
[ { "created": "Thu, 10 Oct 2013 09:57:34 GMT", "version": "v1" }, { "created": "Mon, 9 Dec 2013 18:48:51 GMT", "version": "v2" }, { "created": "Tue, 10 Dec 2013 09:52:27 GMT", "version": "v3" } ]
2013-12-11
[ [ "Kim", "Juhoon", "" ], [ "Khalili", "Ramin", "" ], [ "Feldmann", "Anja", "" ], [ "Chen", "Yung-Chih", "" ], [ "Towsley", "Don", "" ] ]
Today, most devices have multiple network interfaces. Coupled with wide-spread replication of popular content at multiple locations, this provides substantial path diversity in the Internet. We propose Multi-source Multipath HTTP, mHTTP, which takes advantage of all existing types of path diversity in the Internet. mHTTP needs only client-side but not server-side or network modifications as it is a receiver-oriented mechanism. Moreover, the modifications are restricted to the socket interface. Thus, no changes are needed to the applications or to the kernel. As mHTTP relies on HTTP range requests, it is specific to HTTP which accounts for more than 60% of the Internet traffic. We implement mHTTP and study its performance by conducting measurements over a testbed and in the wild. Our results show that mHTTP indeed takes advantage of all types of path diversity in the Internet, and that it is a viable alternative to Multipath TCP for HTTP traffic. mHTTP decreases download times for large objects up to 50%, whereas it does no harm to small object downloads.
1606.08948
Vishal Sharma
Vishal Chandra Sharma, Ganesh Gopalakrishnan, Sriram Krishnamoorthy
PRESAGE: Protecting Structured Address Generation against Soft Errors
null
null
null
null
cs.SE cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, lead to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
[ { "created": "Wed, 29 Jun 2016 04:06:41 GMT", "version": "v1" } ]
2016-07-05
[ [ "Sharma", "Vishal Chandra", "" ], [ "Gopalakrishnan", "Ganesh", "" ], [ "Krishnamoorthy", "Sriram", "" ] ]
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, lead to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
2105.07741
Michael Murray
Michael Murray, Vinayak Abrol, Jared Tanner
Activation function design for deep networks: linearity and effective initialisation
33 pages, 10 figures, paper code and scripts are hosted at https://github.com/Cross-Caps/AFLI
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training. In this paper we study how to avoid two problems at initialisation identified in prior works: rapid convergence of pairwise input correlations, and vanishing and exploding gradients. We prove that both these problems can be avoided by choosing an activation function possessing a sufficiently large linear region around the origin, relative to the bias variance $\sigma_b^2$ of the network's random initialisation. We demonstrate empirically that using such activation functions leads to tangible benefits in practice, both in terms test and training accuracy as well as training time. Furthermore, we observe that the shape of the nonlinear activation outside the linear region appears to have a relatively limited impact on training. Finally, our results also allow us to train networks in a new hyperparameter regime, with a much larger bias variance than has previously been possible.
[ { "created": "Mon, 17 May 2021 11:30:46 GMT", "version": "v1" } ]
2021-05-18
[ [ "Murray", "Michael", "" ], [ "Abrol", "Vinayak", "" ], [ "Tanner", "Jared", "" ] ]
The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training. In this paper we study how to avoid two problems at initialisation identified in prior works: rapid convergence of pairwise input correlations, and vanishing and exploding gradients. We prove that both these problems can be avoided by choosing an activation function possessing a sufficiently large linear region around the origin, relative to the bias variance $\sigma_b^2$ of the network's random initialisation. We demonstrate empirically that using such activation functions leads to tangible benefits in practice, both in terms test and training accuracy as well as training time. Furthermore, we observe that the shape of the nonlinear activation outside the linear region appears to have a relatively limited impact on training. Finally, our results also allow us to train networks in a new hyperparameter regime, with a much larger bias variance than has previously been possible.
1908.02571
Afshin Sadeghi
Afshin Sadeghi and Jens Lehmann
Linking Physicians to Medical Research Results via Knowledge Graph Embeddings and Twitter
AI for Good, Data Science for Social Good, Machine learning for Social Good, Twitter Data, Knowledge Graph Embeddings, Medical Research
ECML SOGOOD 2019
null
null
cs.SI cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that using this method physicians can be informed about the new findings in their field given that they have an account dedicated to their profession.
[ { "created": "Wed, 24 Jul 2019 10:15:40 GMT", "version": "v1" }, { "created": "Fri, 6 Dec 2019 14:37:36 GMT", "version": "v2" }, { "created": "Fri, 21 Feb 2020 14:25:56 GMT", "version": "v3" } ]
2020-02-24
[ [ "Sadeghi", "Afshin", "" ], [ "Lehmann", "Jens", "" ] ]
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that using this method physicians can be informed about the new findings in their field given that they have an account dedicated to their profession.
2008.12647
Yiren Lu
Yiren Lu, Jonathan Tompson
ADAIL: Adaptive Adversarial Imitation Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This is an important problem in robotic learning because in real world scenarios 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics and the learned adaptive agent outperforms several recent baselines.
[ { "created": "Sun, 23 Aug 2020 06:11:00 GMT", "version": "v1" } ]
2020-08-31
[ [ "Lu", "Yiren", "" ], [ "Tompson", "Jonathan", "" ] ]
We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This is an important problem in robotic learning because in real world scenarios 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics and the learned adaptive agent outperforms several recent baselines.
2204.00488
Bruno Jos\'e Olivieri de Souza
Breno Perricone, Thiago Lamenza, Marcelo Paulon, Bruno Jose Olivieri de Souza, Markus Endler
GrADyS-GS -- A ground station for managing field experiments with Autonomous Vehicles and Wireless Sensor Networks
null
null
null
null
cs.RO cs.DC
http://creativecommons.org/licenses/by/4.0/
In many kinds of research, collecting data is tailored to individual research. It is usual to use dedicated and not reusable software to collect data. GrADyS Ground Station framework (GrADyS-GS) aims to collect data in a reusable manner with dynamic background tools. This technical report describes GrADyS-GS, a ground station software designed to connect with various technologies to control, monitor, and store results of Mobile Internet of Things field experiments with Autonomous Vehicles (UAV) and Sensor Networks (WSN). In the GrADyS project GrADyS-GS is used with ESP32-based IoT devices on the ground and Unmanned Aerial Vehicles (quad-copters) in the air. The GrADyS-GS tool was created to support the design, development and testing of simulated movement coordination algorithms for the AVs, testing of customized Bluetooth Mesh variations, and overall communication, coordination, and context-awareness field experiments planed in the GraDyS project. Nevertheless, GrADyS-GS is also a general purpose tool, as it relies on a dynamic and easy-to-use Python and JavaScript framework that allows easy customization and (re)utilization in another projects and field experiments with other kinds of IoT devices, other WSN types and protocols, and other kinds of mobile connected flying or ground vehicles. So far, GrADyS-GS has been used to start UAV flights and collects its data in s centralized manner inside GrADyS project.
[ { "created": "Fri, 1 Apr 2022 14:48:02 GMT", "version": "v1" } ]
2022-04-04
[ [ "Perricone", "Breno", "" ], [ "Lamenza", "Thiago", "" ], [ "Paulon", "Marcelo", "" ], [ "de Souza", "Bruno Jose Olivieri", "" ], [ "Endler", "Markus", "" ] ]
In many kinds of research, collecting data is tailored to individual research. It is usual to use dedicated and not reusable software to collect data. GrADyS Ground Station framework (GrADyS-GS) aims to collect data in a reusable manner with dynamic background tools. This technical report describes GrADyS-GS, a ground station software designed to connect with various technologies to control, monitor, and store results of Mobile Internet of Things field experiments with Autonomous Vehicles (UAV) and Sensor Networks (WSN). In the GrADyS project GrADyS-GS is used with ESP32-based IoT devices on the ground and Unmanned Aerial Vehicles (quad-copters) in the air. The GrADyS-GS tool was created to support the design, development and testing of simulated movement coordination algorithms for the AVs, testing of customized Bluetooth Mesh variations, and overall communication, coordination, and context-awareness field experiments planed in the GraDyS project. Nevertheless, GrADyS-GS is also a general purpose tool, as it relies on a dynamic and easy-to-use Python and JavaScript framework that allows easy customization and (re)utilization in another projects and field experiments with other kinds of IoT devices, other WSN types and protocols, and other kinds of mobile connected flying or ground vehicles. So far, GrADyS-GS has been used to start UAV flights and collects its data in s centralized manner inside GrADyS project.
2005.12840
Michelangelo Misuraca
Michelangelo Misuraca, Alessia Forciniti, Germana Scepi, Maria Spano
Sentiment Analysis for Education with R: packages, methods and practical applications
null
null
null
null
cs.IR stat.AP stat.CO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Sentiment Analysis (SA) refers to a family of techniques at the crossroads of statistics, natural language processing, and computational linguistics. The primary goal is to detect the semantic orientation of individual opinions and comments expressed in written texts. There are several practical applications of SA in several domains. In an educational context, the use of this approach allows processing students' feedback, aiming at monitoring the teaching effectiveness of instructors and enhancing the learning experience. This paper wants to review the different R packages that can be used to carry on SA, comparing the implemented methods, discussing their characteristics, and showing how they perform by considering a simple example.
[ { "created": "Fri, 8 May 2020 14:10:43 GMT", "version": "v1" } ]
2020-05-27
[ [ "Misuraca", "Michelangelo", "" ], [ "Forciniti", "Alessia", "" ], [ "Scepi", "Germana", "" ], [ "Spano", "Maria", "" ] ]
Sentiment Analysis (SA) refers to a family of techniques at the crossroads of statistics, natural language processing, and computational linguistics. The primary goal is to detect the semantic orientation of individual opinions and comments expressed in written texts. There are several practical applications of SA in several domains. In an educational context, the use of this approach allows processing students' feedback, aiming at monitoring the teaching effectiveness of instructors and enhancing the learning experience. This paper wants to review the different R packages that can be used to carry on SA, comparing the implemented methods, discussing their characteristics, and showing how they perform by considering a simple example.
2305.16635
Jaehun Jung
Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi
Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
NAACL 2024
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization, that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks. Unlike prior works that rely on an extreme-scale teacher model (e.g., GPT3) or task-specific architecture, we hypothesize and verify the paraphrastic proximity intrinsic to pre-trained LMs (e.g., GPT2), where paraphrases occupy a proximal subspace in the LM distribution. By identifying and distilling generations from these subspaces, Impossible Distillation produces a high-quality dataset and model even from GPT2-scale LMs. We evaluate our method on multiple benchmarks spanning unconstrained / syntax-controlled paraphrase generation and sentence summarization. Our model with 770M parameters consistently outperforms strong baselines, including models distilled from ChatGPT, and sometimes, even ChatGPT itself. Also, we find that our distilled dataset from 1.5B LMs exhibits higher diversity and fidelity than up to 13 times larger datasets.
[ { "created": "Fri, 26 May 2023 05:19:24 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 16:14:04 GMT", "version": "v2" }, { "created": "Fri, 5 Apr 2024 20:10:51 GMT", "version": "v3" } ]
2024-04-09
[ [ "Jung", "Jaehun", "" ], [ "West", "Peter", "" ], [ "Jiang", "Liwei", "" ], [ "Brahman", "Faeze", "" ], [ "Lu", "Ximing", "" ], [ "Fisher", "Jillian", "" ], [ "Sorensen", "Taylor", "" ], [ "Choi", "Yejin", "" ] ]
We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization, that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks. Unlike prior works that rely on an extreme-scale teacher model (e.g., GPT3) or task-specific architecture, we hypothesize and verify the paraphrastic proximity intrinsic to pre-trained LMs (e.g., GPT2), where paraphrases occupy a proximal subspace in the LM distribution. By identifying and distilling generations from these subspaces, Impossible Distillation produces a high-quality dataset and model even from GPT2-scale LMs. We evaluate our method on multiple benchmarks spanning unconstrained / syntax-controlled paraphrase generation and sentence summarization. Our model with 770M parameters consistently outperforms strong baselines, including models distilled from ChatGPT, and sometimes, even ChatGPT itself. Also, we find that our distilled dataset from 1.5B LMs exhibits higher diversity and fidelity than up to 13 times larger datasets.
2310.03225
Akifumi Wachi
Akifumi Wachi, Wataru Hashimoto, Xun Shen, Kazumune Hashimoto
Safe Exploration in Reinforcement Learning: A Generalized Formulation and Algorithms
Accepted to NeurIPS 2023
null
null
null
cs.LG cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safe exploration is essential for the practical use of reinforcement learning (RL) in many real-world scenarios. In this paper, we present a generalized safe exploration (GSE) problem as a unified formulation of common safe exploration problems. We then propose a solution of the GSE problem in the form of a meta-algorithm for safe exploration, MASE, which combines an unconstrained RL algorithm with an uncertainty quantifier to guarantee safety in the current episode while properly penalizing unsafe explorations before actual safety violation to discourage them in future episodes. The advantage of MASE is that we can optimize a policy while guaranteeing with a high probability that no safety constraint will be violated under proper assumptions. Specifically, we present two variants of MASE with different constructions of the uncertainty quantifier: one based on generalized linear models with theoretical guarantees of safety and near-optimality, and another that combines a Gaussian process to ensure safety with a deep RL algorithm to maximize the reward. Finally, we demonstrate that our proposed algorithm achieves better performance than state-of-the-art algorithms on grid-world and Safety Gym benchmarks without violating any safety constraints, even during training.
[ { "created": "Thu, 5 Oct 2023 00:47:09 GMT", "version": "v1" } ]
2023-10-06
[ [ "Wachi", "Akifumi", "" ], [ "Hashimoto", "Wataru", "" ], [ "Shen", "Xun", "" ], [ "Hashimoto", "Kazumune", "" ] ]
Safe exploration is essential for the practical use of reinforcement learning (RL) in many real-world scenarios. In this paper, we present a generalized safe exploration (GSE) problem as a unified formulation of common safe exploration problems. We then propose a solution of the GSE problem in the form of a meta-algorithm for safe exploration, MASE, which combines an unconstrained RL algorithm with an uncertainty quantifier to guarantee safety in the current episode while properly penalizing unsafe explorations before actual safety violation to discourage them in future episodes. The advantage of MASE is that we can optimize a policy while guaranteeing with a high probability that no safety constraint will be violated under proper assumptions. Specifically, we present two variants of MASE with different constructions of the uncertainty quantifier: one based on generalized linear models with theoretical guarantees of safety and near-optimality, and another that combines a Gaussian process to ensure safety with a deep RL algorithm to maximize the reward. Finally, we demonstrate that our proposed algorithm achieves better performance than state-of-the-art algorithms on grid-world and Safety Gym benchmarks without violating any safety constraints, even during training.
1510.02574
Jun Lin
Jun Lin, Chenrong Xiong and Zhiyuan Yan
A High Throughput List Decoder Architecture for Polar Codes
submitted to IEEE TVLSI
null
10.1109/TVLSI.2015.2499777
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While long polar codes can achieve the capacity of arbitrary binary-input discrete memoryless channels when decoded by a low complexity successive cancelation (SC) algorithm, the error performance of the SC algorithm is inferior for polar codes with finite block lengths. The cyclic redundancy check (CRC) aided successive cancelation list (SCL) decoding algorithm has better error performance than the SC algorithm. However, current CRC aided SCL (CA-SCL) decoders still suffer from long decoding latency and limited throughput. In this paper, a reduced latency list decoding (RLLD) algorithm for polar codes is proposed. Our RLLD algorithm performs the list decoding on a binary tree, whose leaves correspond to the bits of a polar code. In existing SCL decoding algorithms, all the nodes in the tree are traversed and all possibilities of the information bits are considered. Instead, our RLLD algorithm visits much fewer nodes in the tree and considers fewer possibilities of the information bits. When configured properly, our RLLD algorithm significantly reduces the decoding latency and hence improves throughput, while introducing little performance degradation. Based on our RLLD algorithm, we also propose a high throughput list decoder architecture, which is suitable for larger block lengths due to its scalable partial sum computation unit. Our decoder architecture has been implemented for different block lengths and list sizes using the TSMC 90nm CMOS technology. The implementation results demonstrate that our decoders achieve significant latency reduction and area efficiency improvement compared with other list polar decoders in the literature.
[ { "created": "Fri, 9 Oct 2015 06:11:34 GMT", "version": "v1" } ]
2016-11-17
[ [ "Lin", "Jun", "" ], [ "Xiong", "Chenrong", "" ], [ "Yan", "Zhiyuan", "" ] ]
While long polar codes can achieve the capacity of arbitrary binary-input discrete memoryless channels when decoded by a low complexity successive cancelation (SC) algorithm, the error performance of the SC algorithm is inferior for polar codes with finite block lengths. The cyclic redundancy check (CRC) aided successive cancelation list (SCL) decoding algorithm has better error performance than the SC algorithm. However, current CRC aided SCL (CA-SCL) decoders still suffer from long decoding latency and limited throughput. In this paper, a reduced latency list decoding (RLLD) algorithm for polar codes is proposed. Our RLLD algorithm performs the list decoding on a binary tree, whose leaves correspond to the bits of a polar code. In existing SCL decoding algorithms, all the nodes in the tree are traversed and all possibilities of the information bits are considered. Instead, our RLLD algorithm visits much fewer nodes in the tree and considers fewer possibilities of the information bits. When configured properly, our RLLD algorithm significantly reduces the decoding latency and hence improves throughput, while introducing little performance degradation. Based on our RLLD algorithm, we also propose a high throughput list decoder architecture, which is suitable for larger block lengths due to its scalable partial sum computation unit. Our decoder architecture has been implemented for different block lengths and list sizes using the TSMC 90nm CMOS technology. The implementation results demonstrate that our decoders achieve significant latency reduction and area efficiency improvement compared with other list polar decoders in the literature.
2403.02437
Hyejun Jeong
Hyejun Jeong, Shiqing Ma, Amir Houmansadr
SoK: Challenges and Opportunities in Federated Unlearning
null
null
null
null
cs.LG cs.AI cs.DC
http://creativecommons.org/licenses/by/4.0/
Federated learning (FL), introduced in 2017, facilitates collaborative learning between non-trusting parties with no need for the parties to explicitly share their data among themselves. This allows training models on user data while respecting privacy regulations such as GDPR and CPRA. However, emerging privacy requirements may mandate model owners to be able to \emph{forget} some learned data, e.g., when requested by data owners or law enforcement. This has given birth to an active field of research called \emph{machine unlearning}. In the context of FL, many techniques developed for unlearning in centralized settings are not trivially applicable! This is due to the unique differences between centralized and distributed learning, in particular, interactivity, stochasticity, heterogeneity, and limited accessibility in FL. In response, a recent line of work has focused on developing unlearning mechanisms tailored to FL. This SoK paper aims to take a deep look at the \emph{federated unlearning} literature, with the goal of identifying research trends and challenges in this emerging field. By carefully categorizing papers published on FL unlearning (since 2020), we aim to pinpoint the unique complexities of federated unlearning, highlighting limitations on directly applying centralized unlearning methods. We compare existing federated unlearning methods regarding influence removal and performance recovery, compare their threat models and assumptions, and discuss their implications and limitations. For instance, we analyze the experimental setup of FL unlearning studies from various perspectives, including data heterogeneity and its simulation, the datasets used for demonstration, and evaluation metrics. Our work aims to offer insights and suggestions for future research on federated unlearning.
[ { "created": "Mon, 4 Mar 2024 19:35:08 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 19:00:03 GMT", "version": "v2" } ]
2024-06-07
[ [ "Jeong", "Hyejun", "" ], [ "Ma", "Shiqing", "" ], [ "Houmansadr", "Amir", "" ] ]
Federated learning (FL), introduced in 2017, facilitates collaborative learning between non-trusting parties with no need for the parties to explicitly share their data among themselves. This allows training models on user data while respecting privacy regulations such as GDPR and CPRA. However, emerging privacy requirements may mandate model owners to be able to \emph{forget} some learned data, e.g., when requested by data owners or law enforcement. This has given birth to an active field of research called \emph{machine unlearning}. In the context of FL, many techniques developed for unlearning in centralized settings are not trivially applicable! This is due to the unique differences between centralized and distributed learning, in particular, interactivity, stochasticity, heterogeneity, and limited accessibility in FL. In response, a recent line of work has focused on developing unlearning mechanisms tailored to FL. This SoK paper aims to take a deep look at the \emph{federated unlearning} literature, with the goal of identifying research trends and challenges in this emerging field. By carefully categorizing papers published on FL unlearning (since 2020), we aim to pinpoint the unique complexities of federated unlearning, highlighting limitations on directly applying centralized unlearning methods. We compare existing federated unlearning methods regarding influence removal and performance recovery, compare their threat models and assumptions, and discuss their implications and limitations. For instance, we analyze the experimental setup of FL unlearning studies from various perspectives, including data heterogeneity and its simulation, the datasets used for demonstration, and evaluation metrics. Our work aims to offer insights and suggestions for future research on federated unlearning.
2206.09391
Jiaming Zhang
Jiaming Zhang, Qi Yi, Jitao Sang
Towards Adversarial Attack on Vision-Language Pre-training Models
Accepted by ACM MM2022. Code is available in GitHub
null
null
null
cs.LG cs.CL cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While vision-language pre-training model (VLP) has shown revolutionary improvements on various vision-language (V+L) tasks, the studies regarding its adversarial robustness remain largely unexplored. This paper studied the adversarial attack on popular VLP models and V+L tasks. First, we analyzed the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we concluded some key observations as guidance on both designing strong multimodal adversarial attack and constructing robust VLP models. Second, we proposed a novel multimodal attack method on the VLP models called Collaborative Multimodal Adversarial Attack (Co-Attack), which collectively carries out the attacks on the image modality and the text modality. Experimental results demonstrated that the proposed method achieves improved attack performances on different V+L downstream tasks and VLP models. The analysis observations and novel attack method hopefully provide new understanding into the adversarial robustness of VLP models, so as to contribute their safe and reliable deployment in more real-world scenarios. Code is available at https://github.com/adversarial-for-goodness/Co-Attack.
[ { "created": "Sun, 19 Jun 2022 12:55:45 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2022 02:32:02 GMT", "version": "v2" } ]
2022-10-21
[ [ "Zhang", "Jiaming", "" ], [ "Yi", "Qi", "" ], [ "Sang", "Jitao", "" ] ]
While vision-language pre-training model (VLP) has shown revolutionary improvements on various vision-language (V+L) tasks, the studies regarding its adversarial robustness remain largely unexplored. This paper studied the adversarial attack on popular VLP models and V+L tasks. First, we analyzed the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we concluded some key observations as guidance on both designing strong multimodal adversarial attack and constructing robust VLP models. Second, we proposed a novel multimodal attack method on the VLP models called Collaborative Multimodal Adversarial Attack (Co-Attack), which collectively carries out the attacks on the image modality and the text modality. Experimental results demonstrated that the proposed method achieves improved attack performances on different V+L downstream tasks and VLP models. The analysis observations and novel attack method hopefully provide new understanding into the adversarial robustness of VLP models, so as to contribute their safe and reliable deployment in more real-world scenarios. Code is available at https://github.com/adversarial-for-goodness/Co-Attack.
2107.00842
Yingying Zhu
Hongji Yang, Xiufan Lu and Yingying Zhu
Cross-view Geo-localization with Evolving Transformer
Under Review
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this work, we address the problem of cross-view geo-localization, which estimates the geospatial location of a street view image by matching it with a database of geo-tagged aerial images. The cross-view matching task is extremely challenging due to drastic appearance and geometry differences across views. Unlike existing methods that predominantly fall back on CNN, here we devise a novel evolving geo-localization Transformer (EgoTR) that utilizes the properties of self-attention in Transformer to model global dependencies, thus significantly decreasing visual ambiguities in cross-view geo-localization. We also exploit the positional encoding of Transformer to help the EgoTR understand and correspond geometric configurations between ground and aerial images. Compared to state-of-the-art methods that impose strong assumption on geometry knowledge, the EgoTR flexibly learns the positional embeddings through the training objective and hence becomes more practical in many real-world scenarios. Although Transformer is well suited to our task, its vanilla self-attention mechanism independently interacts within image patches in each layer, which overlooks correlations between layers. Instead, this paper propose a simple yet effective self-cross attention mechanism to improve the quality of learned representations. The self-cross attention models global dependencies between adjacent layers, which relates between image patches while modeling how features evolve in the previous layer. As a result, the proposed self-cross attention leads to more stable training, improves the generalization ability and encourages representations to keep evolving as the network goes deeper. Extensive experiments demonstrate that our EgoTR performs favorably against state-of-the-art methods on standard, fine-grained and cross-dataset cross-view geo-localization tasks.
[ { "created": "Fri, 2 Jul 2021 05:33:14 GMT", "version": "v1" }, { "created": "Mon, 5 Jul 2021 02:23:48 GMT", "version": "v2" } ]
2021-07-06
[ [ "Yang", "Hongji", "" ], [ "Lu", "Xiufan", "" ], [ "Zhu", "Yingying", "" ] ]
In this work, we address the problem of cross-view geo-localization, which estimates the geospatial location of a street view image by matching it with a database of geo-tagged aerial images. The cross-view matching task is extremely challenging due to drastic appearance and geometry differences across views. Unlike existing methods that predominantly fall back on CNN, here we devise a novel evolving geo-localization Transformer (EgoTR) that utilizes the properties of self-attention in Transformer to model global dependencies, thus significantly decreasing visual ambiguities in cross-view geo-localization. We also exploit the positional encoding of Transformer to help the EgoTR understand and correspond geometric configurations between ground and aerial images. Compared to state-of-the-art methods that impose strong assumption on geometry knowledge, the EgoTR flexibly learns the positional embeddings through the training objective and hence becomes more practical in many real-world scenarios. Although Transformer is well suited to our task, its vanilla self-attention mechanism independently interacts within image patches in each layer, which overlooks correlations between layers. Instead, this paper propose a simple yet effective self-cross attention mechanism to improve the quality of learned representations. The self-cross attention models global dependencies between adjacent layers, which relates between image patches while modeling how features evolve in the previous layer. As a result, the proposed self-cross attention leads to more stable training, improves the generalization ability and encourages representations to keep evolving as the network goes deeper. Extensive experiments demonstrate that our EgoTR performs favorably against state-of-the-art methods on standard, fine-grained and cross-dataset cross-view geo-localization tasks.
1801.00280
Ganhua Wu
Ganhua Wu
Efficient priority queueing routing strategy on mobile networks
null
null
10.1142/S0217984918501373
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile networks are intriguing in recent years due to their practical implications. Previous routing strategies for improving transport efficiency have little attention what order should the packets be forwarded, just simply used first-in-first-out queue discipline. Here we apply priority queueing discipline to shortest distance routing strategy on mobile networks. Numerical experiments show it not only remarkably improves network throughput and the packet arriving rate, but also reduces average end-to-end delay and the rate of queueing delay. Our work may be helpful in routing strategy designing on mobile networks.
[ { "created": "Sun, 31 Dec 2017 13:15:35 GMT", "version": "v1" } ]
2018-04-18
[ [ "Wu", "Ganhua", "" ] ]
Mobile networks are intriguing in recent years due to their practical implications. Previous routing strategies for improving transport efficiency have little attention what order should the packets be forwarded, just simply used first-in-first-out queue discipline. Here we apply priority queueing discipline to shortest distance routing strategy on mobile networks. Numerical experiments show it not only remarkably improves network throughput and the packet arriving rate, but also reduces average end-to-end delay and the rate of queueing delay. Our work may be helpful in routing strategy designing on mobile networks.
1502.00695
Rudra Kumar madapuri
Rudra Kumar M and A Ananda Rao
Assessing the Fault Proneness Degree (DFP) by Estimating the Impact of Change Request Artifacts Correlation
Originally published in Eighth International Conference on Data Mining and Warehousing (ICDMW-2014), Further the extended version is selected to publish in International Journal of Information Processing, volume 8 and issue 4, 2014
International Journal of Information Processing, 8(4), 35-44, 2014
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploring the impact of change requests applied to a software maintenance project helps to assess the fault-proneness of the change request to be handled further, which is perhaps a bug fix or even a new feature demand. In practice, the major development community stores change requests and related data using bug tracking systems such as Bugzilla. These data, together with the data stored in a versioning system, such as Concurrent Versioning Systems, are a valuable source of information to create descriptions and also can perform useful analyzes. In our earlier work, we proposed a novel statistical bipartite weighted graph-based approach to assessing the degree of fault-proneness of the change request and Change Request artifacts. With the motivation gained from this model, here we propose a novel strategy that estimates the degree of fault-proneness of a change request by assessing the impact of a change request artifact towards fault-proneness that considers the correlation between change requests artifact as another factor, which is in addition to our earlier strategy. The proposed model can be titled as Assessing the Fault Proneness Degree of Change Request Artifacts by estimating the impact of change requests correlation (DFP-CRC). As stated in our earlier model, the method DFP-CRC also makes use of information retrieval methods to identify the change request artifacts of the devised change request. And further evaluates the degree of fault-proneness of the Change Requests by estimating the correlation between change requests. The proposed method is evaluated by applying on concurrent versioning and Change request logs of the production level maintenance project.
[ { "created": "Tue, 3 Feb 2015 00:48:44 GMT", "version": "v1" } ]
2015-02-04
[ [ "M", "Rudra Kumar", "" ], [ "Rao", "A Ananda", "" ] ]
Exploring the impact of change requests applied to a software maintenance project helps to assess the fault-proneness of the change request to be handled further, which is perhaps a bug fix or even a new feature demand. In practice, the major development community stores change requests and related data using bug tracking systems such as Bugzilla. These data, together with the data stored in a versioning system, such as Concurrent Versioning Systems, are a valuable source of information to create descriptions and also can perform useful analyzes. In our earlier work, we proposed a novel statistical bipartite weighted graph-based approach to assessing the degree of fault-proneness of the change request and Change Request artifacts. With the motivation gained from this model, here we propose a novel strategy that estimates the degree of fault-proneness of a change request by assessing the impact of a change request artifact towards fault-proneness that considers the correlation between change requests artifact as another factor, which is in addition to our earlier strategy. The proposed model can be titled as Assessing the Fault Proneness Degree of Change Request Artifacts by estimating the impact of change requests correlation (DFP-CRC). As stated in our earlier model, the method DFP-CRC also makes use of information retrieval methods to identify the change request artifacts of the devised change request. And further evaluates the degree of fault-proneness of the Change Requests by estimating the correlation between change requests. The proposed method is evaluated by applying on concurrent versioning and Change request logs of the production level maintenance project.
1609.05702
Pavlos Sermpezis
Pavlos Sermpezis, Gavriil Chaviaras, Petros Gigis, and Xenofontas Dimitropoulos
Monitor, Detect, Mitigate: Combating BGP Prefix Hijacking in Real-Time with ARTEMIS
null
In Proceedings of the ACM SIGCOMM 2016 Conference (SIGCOMM '16), 625-626
10.1145/2934872.2959078
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Border Gateway Protocol (BGP) is globally used by Autonomous Systems (ASes) to establish route paths for IP prefixes in the Internet. Due to the lack of authentication in BGP, an AS can hijack IP prefixes owned by other ASes (i.e., announce illegitimate route paths), impacting thus the Internet routing system and economy. To this end, a number of hijacking detection systems have been proposed. However, existing systems are usually third party services that -inherently- introduce a significant delay between the hijacking detection (by the service) and its mitigation (by the network administrators). To overcome this shortcoming, in this paper, we propose ARTEMIS, a tool that enables an AS to timely detect hijacks on its own prefixes, and automatically proceed to mitigation actions. To evaluate the performance of ARTEMIS, we conduct real hijacking experiments. To our best knowledge, it is the first time that a hijacking detection/mitigation system is evaluated through extensive experiments in the real Internet. Our results (a) show that ARTEMIS can detect (mitigate) a hijack within a few seconds (minutes) after it has been launched, and (b) demonstrate the efficiency of the different control-plane sources used by ARTEMIS, towards monitoring routing changes.
[ { "created": "Mon, 19 Sep 2016 13:04:18 GMT", "version": "v1" } ]
2016-11-09
[ [ "Sermpezis", "Pavlos", "" ], [ "Chaviaras", "Gavriil", "" ], [ "Gigis", "Petros", "" ], [ "Dimitropoulos", "Xenofontas", "" ] ]
The Border Gateway Protocol (BGP) is globally used by Autonomous Systems (ASes) to establish route paths for IP prefixes in the Internet. Due to the lack of authentication in BGP, an AS can hijack IP prefixes owned by other ASes (i.e., announce illegitimate route paths), impacting thus the Internet routing system and economy. To this end, a number of hijacking detection systems have been proposed. However, existing systems are usually third party services that -inherently- introduce a significant delay between the hijacking detection (by the service) and its mitigation (by the network administrators). To overcome this shortcoming, in this paper, we propose ARTEMIS, a tool that enables an AS to timely detect hijacks on its own prefixes, and automatically proceed to mitigation actions. To evaluate the performance of ARTEMIS, we conduct real hijacking experiments. To our best knowledge, it is the first time that a hijacking detection/mitigation system is evaluated through extensive experiments in the real Internet. Our results (a) show that ARTEMIS can detect (mitigate) a hijack within a few seconds (minutes) after it has been launched, and (b) demonstrate the efficiency of the different control-plane sources used by ARTEMIS, towards monitoring routing changes.
2210.00440
Bumjun Jung
Bumjun Jung, Yusuke Mukuta, Tatsuya Harada
Grouped self-attention mechanism for a memory-efficient Transformer
10 pages, 3 figures, under review as a conference paper at ICLR 2023
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-series data analysis is important because numerous real-world tasks such as forecasting weather, electricity consumption, and stock market involve predicting data that vary over time. Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time. Thus, capturing long-range dependency is an important factor in time-series data forecasting. To solve these problems, we proposed two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA). With both modules, we achieved a computational space and time complexity of order $O(l)$ with a sequence length $l$ under small hyperparameter limitations, and can capture locality while considering global information. The results of experiments conducted on time-series datasets show that our proposed model efficiently exhibited reduced computational complexity and performance comparable to or better than existing methods.
[ { "created": "Sun, 2 Oct 2022 06:58:49 GMT", "version": "v1" }, { "created": "Thu, 6 Oct 2022 09:11:14 GMT", "version": "v2" } ]
2022-10-07
[ [ "Jung", "Bumjun", "" ], [ "Mukuta", "Yusuke", "" ], [ "Harada", "Tatsuya", "" ] ]
Time-series data analysis is important because numerous real-world tasks such as forecasting weather, electricity consumption, and stock market involve predicting data that vary over time. Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time. Thus, capturing long-range dependency is an important factor in time-series data forecasting. To solve these problems, we proposed two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA). With both modules, we achieved a computational space and time complexity of order $O(l)$ with a sequence length $l$ under small hyperparameter limitations, and can capture locality while considering global information. The results of experiments conducted on time-series datasets show that our proposed model efficiently exhibited reduced computational complexity and performance comparable to or better than existing methods.
1402.3757
Weina Wang
Weina Wang, Lei Ying and Junshan Zhang
On the Relation Between Identifiability, Differential Privacy and Mutual-Information Privacy
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the Hamming distance of the input and output databases, we establish some fundamental connections between these three privacy notions. Given a distortion level $D$, define $\epsilon_{\mathrm{i}}^*(D)$ to be the smallest (best) identifiability level, and $\epsilon_{\mathrm{d}}^*(D)$ to be the smallest differential privacy level. We characterize $\epsilon_{\mathrm{i}}^*(D)$ and $\epsilon_{\mathrm{d}}^*(D)$, and prove that $\epsilon_{\mathrm{i}}^*(D)-\epsilon_X\le\epsilon_{\mathrm{d}}^*(D)\le\epsilon_{\mathrm{i}}^*(D)$ for $D$ in some range, where $\epsilon_X$ is a constant depending on the distribution of the original database $X$, and diminishes to zero when the distribution of $X$ is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given distortion level $D$, the mechanism that optimizes the mutual-information privacy also minimizes the identifiability level.
[ { "created": "Sun, 16 Feb 2014 05:43:33 GMT", "version": "v1" }, { "created": "Wed, 25 Jun 2014 21:07:24 GMT", "version": "v2" }, { "created": "Sat, 22 Aug 2015 07:25:47 GMT", "version": "v3" } ]
2015-08-25
[ [ "Wang", "Weina", "" ], [ "Ying", "Lei", "" ], [ "Zhang", "Junshan", "" ] ]
This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the Hamming distance of the input and output databases, we establish some fundamental connections between these three privacy notions. Given a distortion level $D$, define $\epsilon_{\mathrm{i}}^*(D)$ to be the smallest (best) identifiability level, and $\epsilon_{\mathrm{d}}^*(D)$ to be the smallest differential privacy level. We characterize $\epsilon_{\mathrm{i}}^*(D)$ and $\epsilon_{\mathrm{d}}^*(D)$, and prove that $\epsilon_{\mathrm{i}}^*(D)-\epsilon_X\le\epsilon_{\mathrm{d}}^*(D)\le\epsilon_{\mathrm{i}}^*(D)$ for $D$ in some range, where $\epsilon_X$ is a constant depending on the distribution of the original database $X$, and diminishes to zero when the distribution of $X$ is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given distortion level $D$, the mechanism that optimizes the mutual-information privacy also minimizes the identifiability level.
2310.16406
Ahmed Mohamed Hussain
Saeif Al-Hazbi, Ahmed Hussain, Savio Sciancalepore, Gabriele Oligeri, Panos Papadimitratos
Radio Frequency Fingerprinting via Deep Learning: Challenges and Opportunities
Authors version; Accepted for the 20th International Wireless Communications and Mobile Computing (IWCMC) Security Symposium, 2024
null
null
null
cs.CR cs.AI eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing. Such RF transmitter imperfections are reflected into over-the-air signals, allowing receivers to accurately identify the RF transmitting source. Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn complex features that make up the device-specific fingerprint. However, integrating DL techniques with RFF and operating the system in real-world scenarios presents numerous challenges, originating from the embedded systems and the DL research domains. This paper systematically identifies and analyzes the essential considerations and challenges encountered in the creation of DL-based RFF systems across their typical development life-cycle, which include (i) data collection and preprocessing, (ii) training, and finally, (iii) deployment. Our investigation provides a comprehensive overview of the current open problems that prevent real deployment of DL-based RFF systems while also discussing promising research opportunities to enhance the overall accuracy, robustness, and privacy of these systems.
[ { "created": "Wed, 25 Oct 2023 06:45:49 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2024 16:47:50 GMT", "version": "v2" } ]
2024-04-16
[ [ "Al-Hazbi", "Saeif", "" ], [ "Hussain", "Ahmed", "" ], [ "Sciancalepore", "Savio", "" ], [ "Oligeri", "Gabriele", "" ], [ "Papadimitratos", "Panos", "" ] ]
Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing. Such RF transmitter imperfections are reflected into over-the-air signals, allowing receivers to accurately identify the RF transmitting source. Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn complex features that make up the device-specific fingerprint. However, integrating DL techniques with RFF and operating the system in real-world scenarios presents numerous challenges, originating from the embedded systems and the DL research domains. This paper systematically identifies and analyzes the essential considerations and challenges encountered in the creation of DL-based RFF systems across their typical development life-cycle, which include (i) data collection and preprocessing, (ii) training, and finally, (iii) deployment. Our investigation provides a comprehensive overview of the current open problems that prevent real deployment of DL-based RFF systems while also discussing promising research opportunities to enhance the overall accuracy, robustness, and privacy of these systems.
1403.4445
Artur Je\.z
Artur Je\.z
A really simple approximation of smallest grammar
Accepted for CPM 2014
null
null
null
cs.DS cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a really simple linear-time algorithm constructing a context-free grammar of size O(g log (N/g)) for the input string, where N is the size of the input string and g the size of the optimal grammar generating this string. The algorithm works for arbitrary size alphabets, but the running time is linear assuming that the alphabet Sigma of the input string can be identified with numbers from 1,ldots, N^c for some constant c. Algorithms with such an approximation guarantee and running time are known, however all of them were non-trivial and their analyses were involved. The here presented algorithm computes the LZ77 factorisation and transforms it in phases to a grammar. In each phase it maintains an LZ77-like factorisation of the word with at most l factors as well as additional O(l) letters, where l was the size of the original LZ77 factorisation. In one phase in a greedy way (by a left-to-right sweep and a help of the factorisation) we choose a set of pairs of consecutive letters to be replaced with new symbols, i.e. nonterminals of the constructed grammar. We choose at least 2/3 of the letters in the word and there are O(l) many different pairs among them. Hence there are O(log N) phases, each of them introduces O(l) nonterminals to a grammar. A more precise analysis yields a bound O(l log(N/l)). As l \leq g, this yields the desired bound O(g log(N/g)).
[ { "created": "Tue, 18 Mar 2014 13:27:46 GMT", "version": "v1" } ]
2014-03-19
[ [ "Jeż", "Artur", "" ] ]
In this paper we present a really simple linear-time algorithm constructing a context-free grammar of size O(g log (N/g)) for the input string, where N is the size of the input string and g the size of the optimal grammar generating this string. The algorithm works for arbitrary size alphabets, but the running time is linear assuming that the alphabet Sigma of the input string can be identified with numbers from 1,ldots, N^c for some constant c. Algorithms with such an approximation guarantee and running time are known, however all of them were non-trivial and their analyses were involved. The here presented algorithm computes the LZ77 factorisation and transforms it in phases to a grammar. In each phase it maintains an LZ77-like factorisation of the word with at most l factors as well as additional O(l) letters, where l was the size of the original LZ77 factorisation. In one phase in a greedy way (by a left-to-right sweep and a help of the factorisation) we choose a set of pairs of consecutive letters to be replaced with new symbols, i.e. nonterminals of the constructed grammar. We choose at least 2/3 of the letters in the word and there are O(l) many different pairs among them. Hence there are O(log N) phases, each of them introduces O(l) nonterminals to a grammar. A more precise analysis yields a bound O(l log(N/l)). As l \leq g, this yields the desired bound O(g log(N/g)).
2303.01669
Yangyang Shu
Yangyang Shu, Anton van den Hengel, Lingqiao Liu
Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems
CVPR 2023
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Self-supervised learning (SSL) strategies have demonstrated remarkable performance in various recognition tasks. However, both our preliminary investigation and recent studies suggest that they may be less effective in learning representations for fine-grained visual recognition (FGVR) since many features helpful for optimizing SSL objectives are not suitable for characterizing the subtle differences in FGVR. To overcome this issue, we propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes, dubbed as common rationales in this paper. Intuitively, common rationales tend to correspond to the discriminative patterns from the key parts of foreground objects. We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective without using any pre-trained object parts or saliency detectors, making it seamlessly to be integrated with the existing SSL process. Specifically, we fit the GradCAM with a branch with limited fitting capacity, which allows the branch to capture the common rationales and discard the less common discriminative patterns. At the test stage, the branch generates a set of spatial weights to selectively aggregate features representing an instance. Extensive experimental results on four visual tasks demonstrate that the proposed method can lead to a significant improvement in different evaluation settings.
[ { "created": "Fri, 3 Mar 2023 02:07:40 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2023 06:40:49 GMT", "version": "v2" } ]
2023-07-28
[ [ "Shu", "Yangyang", "" ], [ "Hengel", "Anton van den", "" ], [ "Liu", "Lingqiao", "" ] ]
Self-supervised learning (SSL) strategies have demonstrated remarkable performance in various recognition tasks. However, both our preliminary investigation and recent studies suggest that they may be less effective in learning representations for fine-grained visual recognition (FGVR) since many features helpful for optimizing SSL objectives are not suitable for characterizing the subtle differences in FGVR. To overcome this issue, we propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes, dubbed as common rationales in this paper. Intuitively, common rationales tend to correspond to the discriminative patterns from the key parts of foreground objects. We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective without using any pre-trained object parts or saliency detectors, making it seamlessly to be integrated with the existing SSL process. Specifically, we fit the GradCAM with a branch with limited fitting capacity, which allows the branch to capture the common rationales and discard the less common discriminative patterns. At the test stage, the branch generates a set of spatial weights to selectively aggregate features representing an instance. Extensive experimental results on four visual tasks demonstrate that the proposed method can lead to a significant improvement in different evaluation settings.