Dataset fields (name, dtype, min length, max length):

id               stringlengths    9     10
submitter        stringlengths    1     64
authors          stringlengths    4     20.7k
title            stringlengths    4     246
comments         stringlengths    1     523
journal-ref      stringlengths    4     404
doi              stringlengths    11    153
report-no        stringlengths    2     254
categories       stringlengths    5     98
license          stringclasses    9 values
orig_abstract    stringlengths    14    3.35k
versions         listlengths      1     60
update_date      stringlengths    10    10
authors_parsed   listlengths      1     1.35k
abstract         stringlengths    11    3.34k
1705.09772
Hazim Shakhatreh
Hazim Shakhatreh and Abdallah Khreishah
Maximizing Indoor Wireless Coverage Using UAVs Equipped with Directional Antennas
19 pages, 17 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Unmanned aerial vehicles (UAVs) can be used to provide wireless coverage during emergencies, with each UAV serving as an aerial wireless base station when the cellular network goes down. They can also supplement ground base stations to provide better coverage and higher data rates for users. In this paper, we aim to maximize indoor wireless coverage using UAVs equipped with directional antennas. We study the case where the UAVs share one channel; thus, to maximize the total indoor wireless coverage, we avoid any overlap in their coverage volumes. We present two methods to place the UAVs: providing wireless coverage from one side of the building and from two sides of the building. In the first method, we utilize circle packing theory to determine the 3D locations of the UAVs so that the total coverage area is maximized. In the second method, we place the UAVs in front of two sides of the building and efficiently arrange them in alternating upside-down arrangements. We show that the upside-down arrangements problem can be reduced from 3D to 2D, and based on this reduction we present an efficient algorithm to solve it. Our results show that the upside-down arrangements of UAVs can improve the maximum total coverage by 100% compared to providing wireless coverage from one side of the building.
[ { "created": "Sat, 27 May 2017 06:17:55 GMT", "version": "v1" } ]
2017-05-30
[ [ "Shakhatreh", "Hazim", "" ], [ "Khreishah", "Abdallah", "" ] ]
Unmanned aerial vehicles (UAVs) can be used to provide wireless coverage during emergencies, with each UAV serving as an aerial wireless base station when the cellular network goes down. They can also supplement ground base stations to provide better coverage and higher data rates for users. In this paper, we aim to maximize indoor wireless coverage using UAVs equipped with directional antennas. We study the case where the UAVs share one channel; thus, to maximize the total indoor wireless coverage, we avoid any overlap in their coverage volumes. We present two methods to place the UAVs: providing wireless coverage from one side of the building and from two sides of the building. In the first method, we utilize circle packing theory to determine the 3D locations of the UAVs so that the total coverage area is maximized. In the second method, we place the UAVs in front of two sides of the building and efficiently arrange them in alternating upside-down arrangements. We show that the upside-down arrangements problem can be reduced from 3D to 2D, and based on this reduction we present an efficient algorithm to solve it. Our results show that the upside-down arrangements of UAVs can improve the maximum total coverage by 100% compared to providing wireless coverage from one side of the building.
2008.09775
Qiang Liu
Zhaocheng Liu and Qiang Liu and Haoli Zhang and Yuntian Chen
DNN2LR: Interpretation-inspired Feature Crossing for Real-world Tabular Data
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the sake of reliability, models in real-world applications need to be both powerful and globally interpretable. Simple classifiers, e.g., Logistic Regression (LR), are globally interpretable but not powerful enough to model complex nonlinear interactions among features in tabular data. Meanwhile, Deep Neural Networks (DNNs) have shown great effectiveness for modeling tabular data, but are not globally interpretable. In this work, we find that the local piece-wise interpretations of a specific feature in a DNN are usually inconsistent across samples, which is caused by feature interactions in the hidden layers. Accordingly, we can design an automatic feature crossing method that finds feature interactions in a DNN and uses them as cross features in LR. We give a definition of interpretation inconsistency in DNNs, based on which a novel feature crossing method called DNN2LR is proposed. Extensive experiments have been conducted on four public datasets and two real-world datasets. The final model, i.e., an LR model empowered with cross features generated by DNN2LR, can outperform the complex DNN model as well as several state-of-the-art feature crossing methods. The experimental results strongly verify the effectiveness and efficiency of DNN2LR, especially on real-world datasets with large numbers of feature fields.
[ { "created": "Sat, 22 Aug 2020 08:03:15 GMT", "version": "v1" }, { "created": "Mon, 21 Sep 2020 08:28:48 GMT", "version": "v2" }, { "created": "Tue, 10 Nov 2020 04:55:49 GMT", "version": "v3" }, { "created": "Wed, 16 Dec 2020 10:30:34 GMT", "version": "v4" }, { "created": "Tue, 19 Jan 2021 06:54:36 GMT", "version": "v5" } ]
2021-01-20
[ [ "Liu", "Zhaocheng", "" ], [ "Liu", "Qiang", "" ], [ "Zhang", "Haoli", "" ], [ "Chen", "Yuntian", "" ] ]
For the sake of reliability, models in real-world applications need to be both powerful and globally interpretable. Simple classifiers, e.g., Logistic Regression (LR), are globally interpretable but not powerful enough to model complex nonlinear interactions among features in tabular data. Meanwhile, Deep Neural Networks (DNNs) have shown great effectiveness for modeling tabular data, but are not globally interpretable. In this work, we find that the local piece-wise interpretations of a specific feature in a DNN are usually inconsistent across samples, which is caused by feature interactions in the hidden layers. Accordingly, we can design an automatic feature crossing method that finds feature interactions in a DNN and uses them as cross features in LR. We give a definition of interpretation inconsistency in DNNs, based on which a novel feature crossing method called DNN2LR is proposed. Extensive experiments have been conducted on four public datasets and two real-world datasets. The final model, i.e., an LR model empowered with cross features generated by DNN2LR, can outperform the complex DNN model as well as several state-of-the-art feature crossing methods. The experimental results strongly verify the effectiveness and efficiency of DNN2LR, especially on real-world datasets with large numbers of feature fields.
2012.09092
Chaochao Lu
Chaochao Lu, Biwei Huang, Ke Wang, Jos\'e Miguel Hern\'andez-Lobato, Kun Zhang, Bernhard Sch\"olkopf
Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation
Neural Information Processing Systems Workshop on Offline Reinforcement Learning
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning (RL) algorithms usually require a substantial amount of interaction data and perform well only for specific tasks in a fixed environment. In some scenarios such as healthcare, however, usually only a few records are available for each patient, and patients may show different responses to the same treatment, impeding the application of current RL algorithms to learn optimal policies. To address the issues of mechanism heterogeneity and related data scarcity, we propose a data-efficient RL algorithm that exploits structural causal models (SCMs) to model the state dynamics, which are estimated by leveraging both commonalities and differences across subjects. The learned SCM enables us to counterfactually reason about what would have happened had another treatment been taken. It helps avoid real (possibly risky) exploration and mitigates the issue that limited experience leads to biased policies. We propose counterfactual RL algorithms to learn both population-level and individual-level policies. We show that counterfactual outcomes are identifiable under mild conditions and that Q-learning on the counterfactual-based augmented data set converges to the optimal value function. Experimental results on synthetic and real-world data demonstrate the efficacy of the proposed approach.
[ { "created": "Wed, 16 Dec 2020 17:21:13 GMT", "version": "v1" } ]
2020-12-17
[ [ "Lu", "Chaochao", "" ], [ "Huang", "Biwei", "" ], [ "Wang", "Ke", "" ], [ "Hernández-Lobato", "José Miguel", "" ], [ "Zhang", "Kun", "" ], [ "Schölkopf", "Bernhard", "" ] ]
Reinforcement learning (RL) algorithms usually require a substantial amount of interaction data and perform well only for specific tasks in a fixed environment. In some scenarios such as healthcare, however, usually only a few records are available for each patient, and patients may show different responses to the same treatment, impeding the application of current RL algorithms to learn optimal policies. To address the issues of mechanism heterogeneity and related data scarcity, we propose a data-efficient RL algorithm that exploits structural causal models (SCMs) to model the state dynamics, which are estimated by leveraging both commonalities and differences across subjects. The learned SCM enables us to counterfactually reason about what would have happened had another treatment been taken. It helps avoid real (possibly risky) exploration and mitigates the issue that limited experience leads to biased policies. We propose counterfactual RL algorithms to learn both population-level and individual-level policies. We show that counterfactual outcomes are identifiable under mild conditions and that Q-learning on the counterfactual-based augmented data set converges to the optimal value function. Experimental results on synthetic and real-world data demonstrate the efficacy of the proposed approach.
2003.01793
Romain Lebreton
Eleonora Guerrini, Romain Lebreton, Ilaria Zappatore
Enhancing simultaneous rational function recovery: adaptive error correction capability and new bounds for applications
null
null
null
null
cs.IT cs.SC math.IT
http://creativecommons.org/licenses/by-sa/4.0/
In this work, we present results that improve the decoding radius in solving polynomial linear systems with errors, in the scenario where errors are additive and randomly distributed over a finite field. The decoding radius depends on bounds on the solution that we want to recover, so overestimating them could significantly decrease our error correction capability. For this reason, we introduce an algorithm that can bridge this gap by introducing ad hoc parameters that reduce the discrepancy between the estimated decoding radius and the effective error correction capability.
[ { "created": "Tue, 3 Mar 2020 21:01:51 GMT", "version": "v1" } ]
2020-03-05
[ [ "Guerrini", "Eleonora", "" ], [ "Lebreton", "Romain", "" ], [ "Zappatore", "Ilaria", "" ] ]
In this work, we present results that improve the decoding radius in solving polynomial linear systems with errors, in the scenario where errors are additive and randomly distributed over a finite field. The decoding radius depends on bounds on the solution that we want to recover, so overestimating them could significantly decrease our error correction capability. For this reason, we introduce an algorithm that can bridge this gap by introducing ad hoc parameters that reduce the discrepancy between the estimated decoding radius and the effective error correction capability.
1909.00557
Ye Yu
Ye Yu, and Niraj K. Jha
SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference
null
null
10.1109/TETC.2020.3003328
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CNNs outperform traditional machine learning algorithms across a wide range of applications. However, their computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring dataflow styles that exploit computational parallelism. However, potential performance speedup from sparsity has not been adequately addressed. The computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited in network evaluations. To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators. However, sparsity encoding is performed only on activations or weights, and only during inference. It has been shown that activations and weights also have high sparsity levels during training. Hence, sparsity-aware computation should also be considered in training. To further improve performance and energy efficiency, some accelerators evaluate CNNs with limited precision. However, this is limited to inference, since reduced precision sacrifices network accuracy if used in training. In addition, CNN evaluation is usually memory-intensive, especially in training. In this paper, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING supports both CNN training and inference. It uses a binary mask scheme to encode sparsity in activations and weights. It uses the stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially in training, SPRING uses an efficient monolithic 3D NVM interface to increase memory bandwidth. Compared to GTX 1080 Ti, SPRING achieves 15.6X, 4.2X and 66.0X improvements in performance, power reduction, and energy efficiency, respectively, for CNN training, and 15.5X, 4.5X and 69.1X improvements for inference.
[ { "created": "Mon, 2 Sep 2019 05:59:54 GMT", "version": "v1" }, { "created": "Mon, 3 Feb 2020 01:29:13 GMT", "version": "v2" } ]
2020-06-25
[ [ "Yu", "Ye", "" ], [ "Jha", "Niraj K.", "" ] ]
CNNs outperform traditional machine learning algorithms across a wide range of applications. However, their computational complexity makes it necessary to design efficient hardware accelerators. Most CNN accelerators focus on exploring dataflow styles that exploit computational parallelism. However, potential performance speedup from sparsity has not been adequately addressed. The computation and memory footprint of CNNs can be significantly reduced if sparsity is exploited in network evaluations. To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators. However, sparsity encoding is performed only on activations or weights, and only during inference. It has been shown that activations and weights also have high sparsity levels during training. Hence, sparsity-aware computation should also be considered in training. To further improve performance and energy efficiency, some accelerators evaluate CNNs with limited precision. However, this is limited to inference, since reduced precision sacrifices network accuracy if used in training. In addition, CNN evaluation is usually memory-intensive, especially in training. In this paper, we propose SPRING, a SParsity-aware Reduced-precision Monolithic 3D CNN accelerator for trainING and inference. SPRING supports both CNN training and inference. It uses a binary mask scheme to encode sparsity in activations and weights. It uses the stochastic rounding algorithm to train CNNs with reduced precision without accuracy loss. To alleviate the memory bottleneck in CNN evaluation, especially in training, SPRING uses an efficient monolithic 3D NVM interface to increase memory bandwidth. Compared to GTX 1080 Ti, SPRING achieves 15.6X, 4.2X and 66.0X improvements in performance, power reduction, and energy efficiency, respectively, for CNN training, and 15.5X, 4.5X and 69.1X improvements for inference.
2406.17576
Cheng Wang
Cheng Wang, Christopher Redino, Ryan Clark, Abdul Rahman, Sal Aguinaga, Sathvik Murli, Dhruv Nandakumar, Roland Rao, Lanxiao Huang, Daniel Radke, Edward Bowen
Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations
null
null
null
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ransomware presents a significant and increasing threat to individuals and organizations by encrypting their systems and not releasing them until a large fee has been extracted. To bolster preparedness against potential attacks, organizations commonly conduct red teaming exercises, which involve simulated attacks to assess existing security measures. This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks. By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly, significantly streamlining traditional, manual penetration testing processes. The attack pathways revealed by the RL agent can provide valuable insights to the defense team, helping them identify network weak points and develop more resilient defensive measures. Experimental results on a 152-host example network confirm the effectiveness of the proposed approach, demonstrating the RL agent's capability to discover and orchestrate attacks on high-value targets while evading honeyfiles (decoy files strategically placed to detect unauthorized access).
[ { "created": "Tue, 25 Jun 2024 14:16:40 GMT", "version": "v1" } ]
2024-06-26
[ [ "Wang", "Cheng", "" ], [ "Redino", "Christopher", "" ], [ "Clark", "Ryan", "" ], [ "Rahman", "Abdul", "" ], [ "Aguinaga", "Sal", "" ], [ "Murli", "Sathvik", "" ], [ "Nandakumar", "Dhruv", "" ], [ "Rao", "Roland", "" ], [ "Huang", "Lanxiao", "" ], [ "Radke", "Daniel", "" ], [ "Bowen", "Edward", "" ] ]
Ransomware presents a significant and increasing threat to individuals and organizations by encrypting their systems and not releasing them until a large fee has been extracted. To bolster preparedness against potential attacks, organizations commonly conduct red teaming exercises, which involve simulated attacks to assess existing security measures. This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks. By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly, significantly streamlining traditional, manual penetration testing processes. The attack pathways revealed by the RL agent can provide valuable insights to the defense team, helping them identify network weak points and develop more resilient defensive measures. Experimental results on a 152-host example network confirm the effectiveness of the proposed approach, demonstrating the RL agent's capability to discover and orchestrate attacks on high-value targets while evading honeyfiles (decoy files strategically placed to detect unauthorized access).
1611.03006
Emiliano De Cristofaro
Emiliano De Cristofaro and Kaitai Liang and Yuruo Zhang
Privacy-Preserving Genetic Relatedness Test
A preliminary version of this paper appears in the Proceedings of the 3rd International Workshop on Genome Privacy and Security (GenoPri'16)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of individuals are turning to Direct-To-Consumer (DTC) genetic testing to learn about their predisposition to diseases, traits, and/or ancestry. DTC companies like 23andme and Ancestry.com have started to offer popular and affordable ancestry and genealogy tests, with services allowing users to find unknown relatives and distant cousins. Naturally, access to and possible dissemination of genetic data prompts serious privacy concerns, thus motivating the need to design efficient primitives supporting private genetic tests. In this paper, we present an effective protocol for privacy-preserving genetic relatedness testing (PPGRT), enabling a cloud server to run relatedness tests on input an encrypted genetic database and a test facility's encrypted genetic sample. We reduce the test to a data matching problem and perform it, privately, using searchable encryption. Finally, a performance evaluation of Hamming-distance-based PPGRT attests to the practicality of our proposals.
[ { "created": "Wed, 9 Nov 2016 16:37:28 GMT", "version": "v1" }, { "created": "Thu, 10 Nov 2016 02:09:48 GMT", "version": "v2" } ]
2016-11-11
[ [ "De Cristofaro", "Emiliano", "" ], [ "Liang", "Kaitai", "" ], [ "Zhang", "Yuruo", "" ] ]
An increasing number of individuals are turning to Direct-To-Consumer (DTC) genetic testing to learn about their predisposition to diseases, traits, and/or ancestry. DTC companies like 23andme and Ancestry.com have started to offer popular and affordable ancestry and genealogy tests, with services allowing users to find unknown relatives and distant cousins. Naturally, access to and possible dissemination of genetic data prompts serious privacy concerns, thus motivating the need to design efficient primitives supporting private genetic tests. In this paper, we present an effective protocol for privacy-preserving genetic relatedness testing (PPGRT), enabling a cloud server to run relatedness tests on input an encrypted genetic database and a test facility's encrypted genetic sample. We reduce the test to a data matching problem and perform it, privately, using searchable encryption. Finally, a performance evaluation of Hamming-distance-based PPGRT attests to the practicality of our proposals.
2312.07144
Iyad Kanj
Eduard Eiben, Robert Ganian, Iyad Kanj
The Parameterized Complexity of Coordinated Motion Planning
Short version appeared in SoCG 2023
null
null
null
cs.DS cs.AI cs.CG
http://creativecommons.org/licenses/by/4.0/
In Coordinated Motion Planning (CMP), we are given a rectangular grid on which $k$ robots occupy $k$ distinct starting gridpoints and need to reach $k$ distinct destination gridpoints. In each time step, any robot may move to a neighboring gridpoint or stay in its current gridpoint, provided that it does not collide with other robots. The goal is to compute a schedule for moving the $k$ robots to their destinations which minimizes a certain objective target - prominently the number of time steps in the schedule, i.e., the makespan, or the total length traveled by the robots. We refer to the problem arising from minimizing the former objective target as CMP-M and the latter as CMP-L. Both CMP-M and CMP-L are fundamental problems that were posed as the computational geometry challenge of SoCG 2021, and CMP also embodies the famous $(n^2-1)$-puzzle as a special case. In this paper, we settle the parameterized complexity of CMP-M and CMP-L with respect to their two most fundamental parameters: the number of robots, and the objective target. We develop a new approach to establish the fixed-parameter tractability of both problems under the former parameterization that relies on novel structural insights into optimal solutions to the problem. When parameterized by the objective target, we show that CMP-L remains fixed-parameter tractable while CMP-M becomes para-NP-hard. The latter result is noteworthy, not only because it improves the previously-known boundaries of intractability for the problem, but also because the underlying reduction allows us to establish - as a simpler case - the NP-hardness of the classical Vertex Disjoint and Edge Disjoint Paths problems with constant path-lengths on grids.
[ { "created": "Tue, 12 Dec 2023 10:26:01 GMT", "version": "v1" }, { "created": "Sat, 16 Dec 2023 22:16:55 GMT", "version": "v2" } ]
2023-12-19
[ [ "Eiben", "Eduard", "" ], [ "Ganian", "Robert", "" ], [ "Kanj", "Iyad", "" ] ]
In Coordinated Motion Planning (CMP), we are given a rectangular grid on which $k$ robots occupy $k$ distinct starting gridpoints and need to reach $k$ distinct destination gridpoints. In each time step, any robot may move to a neighboring gridpoint or stay in its current gridpoint, provided that it does not collide with other robots. The goal is to compute a schedule for moving the $k$ robots to their destinations which minimizes a certain objective target - prominently the number of time steps in the schedule, i.e., the makespan, or the total length traveled by the robots. We refer to the problem arising from minimizing the former objective target as CMP-M and the latter as CMP-L. Both CMP-M and CMP-L are fundamental problems that were posed as the computational geometry challenge of SoCG 2021, and CMP also embodies the famous $(n^2-1)$-puzzle as a special case. In this paper, we settle the parameterized complexity of CMP-M and CMP-L with respect to their two most fundamental parameters: the number of robots, and the objective target. We develop a new approach to establish the fixed-parameter tractability of both problems under the former parameterization that relies on novel structural insights into optimal solutions to the problem. When parameterized by the objective target, we show that CMP-L remains fixed-parameter tractable while CMP-M becomes para-NP-hard. The latter result is noteworthy, not only because it improves the previously-known boundaries of intractability for the problem, but also because the underlying reduction allows us to establish - as a simpler case - the NP-hardness of the classical Vertex Disjoint and Edge Disjoint Paths problems with constant path-lengths on grids.
1710.06146
Kostadin Kratchanov
Kostadin Kratchanov
Cinnamons: A Computation Model Underlying Control Network Programming
7th Intl Conf. on Computer Science, Engineering & Applications (ICCSEA 2017) September 23~24, 2017, Copenhagen, Denmark
null
10.5121/csit.2017.71101
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give the easily recognizable names "cinnamon" and "cinnamon programming" to a new computation model intended to form a theoretical foundation for Control Network Programming (CNP). CNP has established itself as a programming paradigm combining declarative and imperative features, a built-in search engine, and powerful tools for search control that allow easy, intuitive, visual development of heuristic, nondeterministic, and randomized solutions. We rigorously define the syntax and semantics of the new model of computation, while trying to keep the underlying intuition clear and to include enough examples. The purposely simplified theoretical model is then compared to both WHILE-programs (thus demonstrating its Turing-completeness) and the "real" CNP. Finally, future research possibilities are mentioned that would eventually extend cinnamon programming in the directions of nondeterminism, randomness, and fuzziness.
[ { "created": "Tue, 17 Oct 2017 08:13:10 GMT", "version": "v1" } ]
2017-10-18
[ [ "Kratchanov", "Kostadin", "" ] ]
We give the easily recognizable names "cinnamon" and "cinnamon programming" to a new computation model intended to form a theoretical foundation for Control Network Programming (CNP). CNP has established itself as a programming paradigm combining declarative and imperative features, a built-in search engine, and powerful tools for search control that allow easy, intuitive, visual development of heuristic, nondeterministic, and randomized solutions. We rigorously define the syntax and semantics of the new model of computation, while trying to keep the underlying intuition clear and to include enough examples. The purposely simplified theoretical model is then compared to both WHILE-programs (thus demonstrating its Turing-completeness) and the "real" CNP. Finally, future research possibilities are mentioned that would eventually extend cinnamon programming in the directions of nondeterminism, randomness, and fuzziness.
cs/9301101
null
Lawrence C. Paulson
Verifying the Unification Algorithm in LCF
null
Science of Computer Programming 5 (1985), 143-170
null
null
cs.LO
null
Manna and Waldinger's theory of substitutions and unification has been verified using the Cambridge LCF theorem prover. A proof of the monotonicity of substitution is presented in detail, as an example of interaction with LCF. Translating the theory into LCF's domain-theoretic logic is largely straightforward. Well-founded induction on a complex ordering is translated into nested structural inductions. Correctness of unification is expressed using predicates for such properties as idempotence and most-generality. The verification is presented as a series of lemmas. The LCF proofs are compared with the original ones, and with other approaches. It appears difficult to find a logic that is both simple and flexible, especially for proving termination.
[ { "created": "Fri, 29 Sep 2000 00:00:00 GMT", "version": "v1" } ]
2008-02-03
[ [ "Paulson", "Lawrence C.", "" ] ]
Manna and Waldinger's theory of substitutions and unification has been verified using the Cambridge LCF theorem prover. A proof of the monotonicity of substitution is presented in detail, as an example of interaction with LCF. Translating the theory into LCF's domain-theoretic logic is largely straightforward. Well-founded induction on a complex ordering is translated into nested structural inductions. Correctness of unification is expressed using predicates for such properties as idempotence and most-generality. The verification is presented as a series of lemmas. The LCF proofs are compared with the original ones, and with other approaches. It appears difficult to find a logic that is both simple and flexible, especially for proving termination.
2301.03966
Vishesh Mistry
Debayan Deb, Vishesh Mistry, Rahul Parthe
AdvBiom: Adversarial Attacks on Biometric Matchers
arXiv admin note: text overlap with arXiv:1908.05008
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of deep learning models, face recognition systems have achieved impressive recognition rates. The workhorses behind this success are Convolutional Neural Networks (CNNs) and the availability of large training datasets. However, we show that small human-imperceptible changes to face samples can evade most prevailing face recognition systems. Even more alarming is the fact that the same generator can be extended to other traits in the future. In this work, we present how such a generator can be trained and also extended to other biometric modalities, such as fingerprint recognition systems.
[ { "created": "Tue, 10 Jan 2023 14:01:11 GMT", "version": "v1" } ]
2023-01-11
[ [ "Deb", "Debayan", "" ], [ "Mistry", "Vishesh", "" ], [ "Parthe", "Rahul", "" ] ]
With the advent of deep learning models, face recognition systems have achieved impressive recognition rates. The workhorses behind this success are Convolutional Neural Networks (CNNs) and the availability of large training datasets. However, we show that small human-imperceptible changes to face samples can evade most prevailing face recognition systems. Even more alarming is the fact that the same generator can be extended to other traits in the future. In this work, we present how such a generator can be trained and also extended to other biometric modalities, such as fingerprint recognition systems.
2203.03201
Salman Bari
Salman Bari, Volker Gabler and Dirk Wollherr
MS2MP: A Min-Sum Message Passing Algorithm for Motion Planning
null
null
10.1109/ICRA48506.2021.9561533
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Gaussian Process (GP) formulation of continuous-time trajectories offers a fast solution to the motion planning problem via probabilistic inference on a factor graph. However, the solution often converges to infeasible local minima and the planned trajectory is not collision-free. We propose a message passing algorithm that is more sensitive to obstacles, with fast convergence time. We leverage the min-sum message passing algorithm, which performs local computations at each node, to solve the inference problem on the factor graph. We first introduce the notion of a compound factor node to transform the factor graph into a linearly structured graph. We next develop an algorithm, denoted the Min-sum Message Passing algorithm for Motion Planning (MS2MP), that combines numerical optimization with message passing to find collision-free trajectories. MS2MP performs numerical optimization to solve a nonlinear least-squares minimization problem at each compound factor node and then exploits the linear structure of the factor graph to compute the maximum a posteriori (MAP) estimate of the complete graph by passing messages among graph nodes. The decentralized optimization approach of each compound node increases sensitivity towards avoiding obstacles in harder planning problems. We evaluate our algorithm by performing extensive experiments on exemplary motion planning tasks for a robot manipulator. Our evaluation reveals that MS2MP improves upon existing work in convergence time and success rate.
[ { "created": "Mon, 7 Mar 2022 08:24:20 GMT", "version": "v1" } ]
2022-03-08
[ [ "Bari", "Salman", "" ], [ "Gabler", "Volker", "" ], [ "Wollherr", "Dirk", "" ] ]
The Gaussian Process (GP) formulation of continuous-time trajectories offers a fast solution to the motion planning problem via probabilistic inference on a factor graph. However, the solution often converges to infeasible local minima and the planned trajectory is not collision-free. We propose a message passing algorithm that is more sensitive to obstacles and has fast convergence time. We leverage the min-sum message passing algorithm, which performs local computations at each node to solve the inference problem on the factor graph. We first introduce the notion of a compound factor node to transform the factor graph into a linearly structured graph. We next develop an algorithm, denoted Min-sum Message Passing algorithm for Motion Planning (MS2MP), that combines numerical optimization with message passing to find collision-free trajectories. MS2MP performs numerical optimization to solve a non-linear least squares minimization problem at each compound factor node and then exploits the linear structure of the factor graph to compute the maximum a posteriori (MAP) estimate of the complete graph by passing messages among graph nodes. The decentralized optimization approach of each compound node increases sensitivity towards avoiding obstacles for harder planning problems. We evaluate our algorithm by performing extensive experiments on exemplary motion planning tasks for a robot manipulator. Our evaluation reveals that MS2MP improves on existing work in convergence time and success rate.
2205.08989
Ryan Feng
Ryan Feng, Somesh Jha, Atul Prakash
Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing
null
null
null
null
cs.LG cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Preprocessing and outlier detection techniques have both been applied to neural networks to increase robustness with varying degrees of success. In this paper, we formalize the ideal preprocessor function as one that would take any input and set it to the nearest in-distribution input. In other words, we detect any anomalous pixels and set them such that the new input is in-distribution. We then illustrate a relaxed solution to this problem in the context of patch attacks. Specifically, we demonstrate that we can model constraints on the patch attack that specify regions as out of distribution. With these constraints, we are able to preprocess inputs successfully, increasing robustness on CARLA object detection.
[ { "created": "Wed, 18 May 2022 15:20:18 GMT", "version": "v1" } ]
2022-05-19
[ [ "Feng", "Ryan", "" ], [ "Jha", "Somesh", "" ], [ "Prakash", "Atul", "" ] ]
Preprocessing and outlier detection techniques have both been applied to neural networks to increase robustness with varying degrees of success. In this paper, we formalize the ideal preprocessor function as one that would take any input and set it to the nearest in-distribution input. In other words, we detect any anomalous pixels and set them such that the new input is in-distribution. We then illustrate a relaxed solution to this problem in the context of patch attacks. Specifically, we demonstrate that we can model constraints on the patch attack that specify regions as out of distribution. With these constraints, we are able to preprocess inputs successfully, increasing robustness on CARLA object detection.
2201.08896
Izzeddin Gur
Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust
Environment Generation for Zero-Shot Compositional Reinforcement Learning
Published in NeurIPS 2021
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world problems are compositional - solving them requires completing interdependent sub-tasks, either in series or in parallel, that can be represented as a dependency graph. Deep reinforcement learning (RL) agents often struggle to learn such complex tasks due to the long time horizons and sparse rewards. To address this problem, we present Compositional Design of Environments (CoDE), which trains a Generator agent to automatically build a series of compositional tasks tailored to the RL agent's current skill level. This automatic curriculum not only enables the agent to learn more complex tasks than it could have otherwise, but also selects tasks where the agent's performance is weak, enhancing its robustness and ability to generalize zero-shot to unseen tasks at test-time. We analyze why current environment generation techniques are insufficient for the problem of generating compositional tasks, and propose a new algorithm that addresses these issues. Our results assess learning and generalization across multiple compositional tasks, including the real-world problem of learning to navigate and interact with web pages. We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments. We contribute two new benchmark frameworks for generating compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE yields a 4x higher success rate than the strongest baseline, and demonstrates strong performance on real websites learned from 3500 primitive tasks.
[ { "created": "Fri, 21 Jan 2022 21:35:01 GMT", "version": "v1" } ]
2022-01-25
[ [ "Gur", "Izzeddin", "" ], [ "Jaques", "Natasha", "" ], [ "Miao", "Yingjie", "" ], [ "Choi", "Jongwook", "" ], [ "Tiwari", "Manoj", "" ], [ "Lee", "Honglak", "" ], [ "Faust", "Aleksandra", "" ] ]
Many real-world problems are compositional - solving them requires completing interdependent sub-tasks, either in series or in parallel, that can be represented as a dependency graph. Deep reinforcement learning (RL) agents often struggle to learn such complex tasks due to the long time horizons and sparse rewards. To address this problem, we present Compositional Design of Environments (CoDE), which trains a Generator agent to automatically build a series of compositional tasks tailored to the RL agent's current skill level. This automatic curriculum not only enables the agent to learn more complex tasks than it could have otherwise, but also selects tasks where the agent's performance is weak, enhancing its robustness and ability to generalize zero-shot to unseen tasks at test-time. We analyze why current environment generation techniques are insufficient for the problem of generating compositional tasks, and propose a new algorithm that addresses these issues. Our results assess learning and generalization across multiple compositional tasks, including the real-world problem of learning to navigate and interact with web pages. We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments. We contribute two new benchmark frameworks for generating compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE yields a 4x higher success rate than the strongest baseline, and demonstrates strong performance on real websites learned from 3500 primitive tasks.
1911.12984
Alessio Cardillo
Mariana Macedo, Laura Lotero, Alessio Cardillo, Hugo Barbosa, Ronaldo Menezes
Gender Patterns of Human Mobility in Colombia: Reexamining Ravenstein's Laws of Migration
12 pages, 6 figures. Comments are welcome
Proceedings of the conference "Complex Networks XI", pp. 269-281, Springer Proceedings in Complexity (2020)
10.1007/978-3-030-40943-2_23
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Public stakeholders implement several policies and regulations to tackle gender gaps, fostering the change in the cultural constructs associated with gender. One way to quantify if such changes elicit gender equality is by studying mobility. In this work, we study the daily mobility patterns of women and men occurring in Medell\'in (Colombia) in two years: 2005 and 2017. Specifically, we focus on the spatiotemporal differences in the travels and find that purpose of travel and occupation characterise each gender differently. We show that women tend to make shorter trips, corroborating Ravenstein's Laws of Migration. Our results indicate that urban mobility in Colombia seems to behave in agreement with the "archetypal" case studied by Ravenstein.
[ { "created": "Fri, 29 Nov 2019 07:26:22 GMT", "version": "v1" } ]
2020-02-27
[ [ "Macedo", "Mariana", "" ], [ "Lotero", "Laura", "" ], [ "Cardillo", "Alessio", "" ], [ "Barbosa", "Hugo", "" ], [ "Menezes", "Ronaldo", "" ] ]
Public stakeholders implement several policies and regulations to tackle gender gaps, fostering the change in the cultural constructs associated with gender. One way to quantify if such changes elicit gender equality is by studying mobility. In this work, we study the daily mobility patterns of women and men occurring in Medell\'in (Colombia) in two years: 2005 and 2017. Specifically, we focus on the spatiotemporal differences in the travels and find that purpose of travel and occupation characterise each gender differently. We show that women tend to make shorter trips, corroborating Ravenstein's Laws of Migration. Our results indicate that urban mobility in Colombia seems to behave in agreement with the "archetypal" case studied by Ravenstein.
2405.03728
Xiaobin Li
Xiaobin Li, Kai Wu, Yujian Betterest Li, Xiaoyu Zhang, Handing Wang, Jing Liu
GLHF: General Learned Evolutionary Algorithm Via Hyper Functions
null
null
null
null
cs.NE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained Optimization Models (POMs) leverage knowledge gained from optimizing various tasks, providing efficient solutions for new optimization challenges through direct usage or fine-tuning. Despite the inefficiencies and limited generalization abilities observed in current POMs, our proposed model, the general pre-trained optimization model (GPOM), addresses these shortcomings. GPOM constructs a population-based pretrained Black-Box Optimization (BBO) model tailored for continuous optimization. Evaluation on the BBOB benchmark and two robot control tasks demonstrates that GPOM outperforms other pretrained BBO models significantly, especially for high-dimensional tasks. Its direct optimization performance exceeds that of state-of-the-art evolutionary algorithms and POMs. Furthermore, GPOM exhibits robust generalization capabilities across diverse task distributions, dimensions, population sizes, and optimization horizons.
[ { "created": "Mon, 6 May 2024 09:11:49 GMT", "version": "v1" } ]
2024-05-08
[ [ "Li", "Xiaobin", "" ], [ "Wu", "Kai", "" ], [ "Li", "Yujian Betterest", "" ], [ "Zhang", "Xiaoyu", "" ], [ "Wang", "Handing", "" ], [ "Liu", "Jing", "" ] ]
Pretrained Optimization Models (POMs) leverage knowledge gained from optimizing various tasks, providing efficient solutions for new optimization challenges through direct usage or fine-tuning. Despite the inefficiencies and limited generalization abilities observed in current POMs, our proposed model, the general pre-trained optimization model (GPOM), addresses these shortcomings. GPOM constructs a population-based pretrained Black-Box Optimization (BBO) model tailored for continuous optimization. Evaluation on the BBOB benchmark and two robot control tasks demonstrates that GPOM outperforms other pretrained BBO models significantly, especially for high-dimensional tasks. Its direct optimization performance exceeds that of state-of-the-art evolutionary algorithms and POMs. Furthermore, GPOM exhibits robust generalization capabilities across diverse task distributions, dimensions, population sizes, and optimization horizons.
2306.13995
Rohitash Chandra
Chaarvi Bansal, Rohitash Chandra, Vinti Agarwal, P. R. Deepa
A clustering and graph deep learning-based framework for COVID-19 drug repurposing
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Drug repurposing (or repositioning) is the process of finding new therapeutic uses for drugs already approved by drug regulatory authorities (e.g., the Food and Drug Administration (FDA) and Therapeutic Goods Administration (TGA)) for other diseases. This involves analyzing the interactions between different biological entities, such as drug targets (genes/proteins and biological pathways) and drug properties, to discover novel drug-target or drug-disease relations. Artificial intelligence methods such as machine learning and deep learning have successfully analyzed complex heterogeneous data in the biomedical domain and have also been used for drug repurposing. This study presents a novel unsupervised machine learning framework that utilizes a graph-based autoencoder for multi-feature type clustering on heterogeneous drug data. The dataset consists of 438 drugs, of which 224 are under clinical trials for COVID-19 (category A). The rest are systematically filtered to ensure the safety and efficacy of the treatment (category B). The framework solely relies on reported drug data, including its pharmacological properties, chemical/physical properties, interaction with the host, and efficacy in different publicly available COVID-19 assays. Our machine-learning framework reveals three clusters of interest and provides recommendations featuring the top 15 drugs for COVID-19 drug repurposing, which were shortlisted based on the predicted clusters that were dominated by category A drugs. The anti-COVID efficacy of the drugs should be verified by experimental studies. Our framework can be extended to support other datasets and drug repurposing studies, given open-source code and data availability.
[ { "created": "Sat, 24 Jun 2023 15:00:47 GMT", "version": "v1" } ]
2023-06-27
[ [ "Bansal", "Chaarvi", "" ], [ "Chandra", "Rohitash", "" ], [ "Agarwal", "Vinti", "" ], [ "Deepa", "P. R.", "" ] ]
Drug repurposing (or repositioning) is the process of finding new therapeutic uses for drugs already approved by drug regulatory authorities (e.g., the Food and Drug Administration (FDA) and Therapeutic Goods Administration (TGA)) for other diseases. This involves analyzing the interactions between different biological entities, such as drug targets (genes/proteins and biological pathways) and drug properties, to discover novel drug-target or drug-disease relations. Artificial intelligence methods such as machine learning and deep learning have successfully analyzed complex heterogeneous data in the biomedical domain and have also been used for drug repurposing. This study presents a novel unsupervised machine learning framework that utilizes a graph-based autoencoder for multi-feature type clustering on heterogeneous drug data. The dataset consists of 438 drugs, of which 224 are under clinical trials for COVID-19 (category A). The rest are systematically filtered to ensure the safety and efficacy of the treatment (category B). The framework solely relies on reported drug data, including its pharmacological properties, chemical/physical properties, interaction with the host, and efficacy in different publicly available COVID-19 assays. Our machine-learning framework reveals three clusters of interest and provides recommendations featuring the top 15 drugs for COVID-19 drug repurposing, which were shortlisted based on the predicted clusters that were dominated by category A drugs. The anti-COVID efficacy of the drugs should be verified by experimental studies. Our framework can be extended to support other datasets and drug repurposing studies, given open-source code and data availability.
2208.00332
Maleknaz Nayebi
Sk Golam Saroar, Waseefa Ahmed, Maleknaz Nayebi
GitHub Marketplace for Practitioners and Researchers to Date: A Systematic Analysis of the Knowledge Mobilization Gap in Open Source Software Automation
The paper is under review in a journal
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Marketplaces for distributing software products and services have been gaining increasing popularity. GitHub, which is best known for its version control functionality through Git, launched its own marketplace in 2017. GitHub Marketplace hosts third-party apps and actions to automate workflows in software teams. Currently, this marketplace hosts 440 Apps and 7,878 Actions across 32 different categories. Overall, 419 third-party developers released their apps on this platform, which 111 distinct customers adopted. The popularity and accessibility of GitHub projects have made this platform, and the projects hosted on it, one of the most frequent subjects for experimentation in software engineering research. A simple Google Scholar search shows that 24,100 research papers have discussed GitHub within the software engineering field since 2017, but none have looked into the marketplace. The GitHub Marketplace provides a unique source of information on the tools used by practitioners in the Open Source Software (OSS) ecosystem for automating their projects' workflows. In this study, we (i) mine and provide a descriptive overview of the GitHub Marketplace, (ii) perform a systematic mapping of research studies in automation for open source software, and (iii) compare the state of the art with the state of the practice on the automation tools. We conclude the paper by discussing the potential of GitHub Marketplace for knowledge mobilization and collaboration within the field. This is the first study on the GitHub Marketplace in the field.
[ { "created": "Sun, 31 Jul 2022 01:48:19 GMT", "version": "v1" } ]
2022-08-02
[ [ "Saroar", "Sk Golam", "" ], [ "Ahmed", "Waseefa", "" ], [ "Nayebi", "Maleknaz", "" ] ]
Marketplaces for distributing software products and services have been gaining increasing popularity. GitHub, which is best known for its version control functionality through Git, launched its own marketplace in 2017. GitHub Marketplace hosts third-party apps and actions to automate workflows in software teams. Currently, this marketplace hosts 440 Apps and 7,878 Actions across 32 different categories. Overall, 419 third-party developers released their apps on this platform, which 111 distinct customers adopted. The popularity and accessibility of GitHub projects have made this platform, and the projects hosted on it, one of the most frequent subjects for experimentation in software engineering research. A simple Google Scholar search shows that 24,100 research papers have discussed GitHub within the software engineering field since 2017, but none have looked into the marketplace. The GitHub Marketplace provides a unique source of information on the tools used by practitioners in the Open Source Software (OSS) ecosystem for automating their projects' workflows. In this study, we (i) mine and provide a descriptive overview of the GitHub Marketplace, (ii) perform a systematic mapping of research studies in automation for open source software, and (iii) compare the state of the art with the state of the practice on the automation tools. We conclude the paper by discussing the potential of GitHub Marketplace for knowledge mobilization and collaboration within the field. This is the first study on the GitHub Marketplace in the field.
2405.13755
Nneka Okolo
Gergely Neu, Nneka Okolo
Offline RL via Feature-Occupancy Gradient Ascent
26 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study offline Reinforcement Learning in large infinite-horizon discounted Markov Decision Processes (MDPs) when the reward and transition models are linearly realizable under a known feature map. Starting from the classic linear-program formulation of the optimal control problem in MDPs, we develop a new algorithm that performs a form of gradient ascent in the space of feature occupancies, defined as the expected feature vectors that can potentially be generated by executing policies in the environment. We show that the resulting simple algorithm satisfies strong computational and sample complexity guarantees, achieved under the least restrictive data coverage assumptions known in the literature. In particular, we show that the sample complexity of our method scales optimally with the desired accuracy level and depends on a weak notion of coverage that only requires the empirical feature covariance matrix to cover a single direction in the feature space (as opposed to covering a full subspace). Additionally, our method is easy to implement and requires no prior knowledge of the coverage ratio (or even an upper bound on it), which altogether make it the strongest known algorithm for this setting to date.
[ { "created": "Wed, 22 May 2024 15:39:05 GMT", "version": "v1" } ]
2024-05-24
[ [ "Neu", "Gergely", "" ], [ "Okolo", "Nneka", "" ] ]
We study offline Reinforcement Learning in large infinite-horizon discounted Markov Decision Processes (MDPs) when the reward and transition models are linearly realizable under a known feature map. Starting from the classic linear-program formulation of the optimal control problem in MDPs, we develop a new algorithm that performs a form of gradient ascent in the space of feature occupancies, defined as the expected feature vectors that can potentially be generated by executing policies in the environment. We show that the resulting simple algorithm satisfies strong computational and sample complexity guarantees, achieved under the least restrictive data coverage assumptions known in the literature. In particular, we show that the sample complexity of our method scales optimally with the desired accuracy level and depends on a weak notion of coverage that only requires the empirical feature covariance matrix to cover a single direction in the feature space (as opposed to covering a full subspace). Additionally, our method is easy to implement and requires no prior knowledge of the coverage ratio (or even an upper bound on it), which altogether make it the strongest known algorithm for this setting to date.
2111.13171
Tolga Birdal
Tolga Birdal, Aaron Lou, Leonidas Guibas, Umut \c{S}im\c{s}ekli
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Appears at NeurIPS 2021
null
null
null
cs.LG cs.AI cs.CV math.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD in the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
[ { "created": "Thu, 25 Nov 2021 17:06:15 GMT", "version": "v1" } ]
2021-11-29
[ [ "Birdal", "Tolga", "" ], [ "Lou", "Aaron", "" ], [ "Guibas", "Leonidas", "" ], [ "Şimşekli", "Umut", "" ] ]
Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD in the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
1811.08047
Chunhui Guo
Chunhui Guo, Hao Wu, Xiayu Hua, Shangping Ren, Jerzy Nogiec
Optimizing System Quality of Service through Rejuvenation for Long-Running Applications with Real-Time Constraints
null
null
null
null
cs.SE cs.DC cs.PF cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliability, longevity, availability, and deadline guarantees are the four most important metrics for measuring the QoS of long-running safety-critical real-time applications. Software aging is one of the major factors that impact the safety of long-running real-time applications, as the degraded performance and increased failure rate caused by software aging can lead to deadline misses and catastrophic consequences. Software rejuvenation is one of the most commonly used approaches to handle issues caused by software aging. In this paper, we study the optimal time at which software rejuvenation shall take place so that the system's reliability, longevity, and availability are maximized, and application delays caused by software rejuvenation are minimized. In particular, we formally analyze the relationships between software rejuvenation frequency and system reliability, longevity, and availability. Based on the theoretical analysis, we develop approaches to maximizing system reliability, longevity, and availability, and use simulation to evaluate the developed approaches. In addition, we design the MIN-DELAY semi-priority-driven scheduling algorithm to minimize application delays caused by rejuvenation processes. The simulation experiments show that the developed semi-priority-driven scheduling algorithm reduces application delays by 9.01% and 14.24% over the earliest deadline first (EDF) and least release time (LRT) scheduling algorithms, respectively.
[ { "created": "Tue, 20 Nov 2018 02:51:50 GMT", "version": "v1" } ]
2018-11-21
[ [ "Guo", "Chunhui", "" ], [ "Wu", "Hao", "" ], [ "Hua", "Xiayu", "" ], [ "Ren", "Shangping", "" ], [ "Nogiec", "Jerzy", "" ] ]
Reliability, longevity, availability, and deadline guarantees are the four most important metrics for measuring the QoS of long-running safety-critical real-time applications. Software aging is one of the major factors that impact the safety of long-running real-time applications, as the degraded performance and increased failure rate caused by software aging can lead to deadline misses and catastrophic consequences. Software rejuvenation is one of the most commonly used approaches to handle issues caused by software aging. In this paper, we study the optimal time at which software rejuvenation shall take place so that the system's reliability, longevity, and availability are maximized, and application delays caused by software rejuvenation are minimized. In particular, we formally analyze the relationships between software rejuvenation frequency and system reliability, longevity, and availability. Based on the theoretical analysis, we develop approaches to maximizing system reliability, longevity, and availability, and use simulation to evaluate the developed approaches. In addition, we design the MIN-DELAY semi-priority-driven scheduling algorithm to minimize application delays caused by rejuvenation processes. The simulation experiments show that the developed semi-priority-driven scheduling algorithm reduces application delays by 9.01% and 14.24% over the earliest deadline first (EDF) and least release time (LRT) scheduling algorithms, respectively.
2303.12224
Noam Buckman
Noam Buckman, Shiva Sreeram, Mathias Lechner, Yutong Ban, Ramin Hasani, Sertac Karaman, Daniela Rus
Infrastructure-based End-to-End Learning and Prevention of Driver Failure
8 pages. Accepted to ICRA 2023
null
10.1109/ICRA48891.2023.10161536
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent intersection managers can improve safety by detecting dangerous drivers or failure modes in autonomous vehicles, warning oncoming vehicles as they approach an intersection. In this work, we present FailureNet, a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city. FailureNet observes the poses of vehicles as they approach an intersection and detects whether a failure is present in the autonomy stack, warning cross-traffic of potentially dangerous drivers. FailureNet can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving. The network is trained and deployed with autonomous vehicles in the MiniCity. Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
[ { "created": "Tue, 21 Mar 2023 22:55:51 GMT", "version": "v1" } ]
2023-10-02
[ [ "Buckman", "Noam", "" ], [ "Sreeram", "Shiva", "" ], [ "Lechner", "Mathias", "" ], [ "Ban", "Yutong", "" ], [ "Hasani", "Ramin", "" ], [ "Karaman", "Sertac", "" ], [ "Rus", "Daniela", "" ] ]
Intelligent intersection managers can improve safety by detecting dangerous drivers or failure modes in autonomous vehicles, warning oncoming vehicles as they approach an intersection. In this work, we present FailureNet, a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city. FailureNet observes the poses of vehicles as they approach an intersection and detects whether a failure is present in the autonomy stack, warning cross-traffic of potentially dangerous drivers. FailureNet can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving. The network is trained and deployed with autonomous vehicles in the MiniCity. Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
1704.08526
Kiran Gupta A
Naveen S Naik, Kiran A Gupta
An Efficient Reconfigurable FIR Digital Filter Using Modified Distribute Arithmetic Technique
5 pages, 4 figures, journal 2015
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a modified Distributed Arithmetic (DA) based technique to compute sums of products, saving an appreciable number of multiply-and-accumulate blocks and consequently reducing circuit size. In this technique, a multiplexer-based structure is used to reuse the blocks so as to reduce the required memory locations, and a Carry Look-Ahead based adder tree is used to achieve a better area-delay product. The FIR filter is designed in VHDL and synthesized using the Xilinx 12.2 synthesis tool and the ISIM simulator. Power analysis is performed using the Xilinx XPower Analyzer. The proposed structure requires nearly 42% fewer cells, 40% fewer LUT flip-flop pairs, and 2% less power compared with the existing structure.
[ { "created": "Thu, 27 Apr 2017 12:07:52 GMT", "version": "v1" } ]
2017-04-28
[ [ "Naik", "Naveen S", "" ], [ "Gupta", "Kiran A", "" ] ]
This paper provides a modified Distributed Arithmetic (DA) based technique to compute sums of products, saving an appreciable number of multiply-and-accumulate blocks and consequently reducing circuit size. In this technique, a multiplexer-based structure is used to reuse the blocks so as to reduce the required memory locations, and a Carry Look-Ahead based adder tree is used to achieve a better area-delay product. The FIR filter is designed in VHDL and synthesized using the Xilinx 12.2 synthesis tool and the ISIM simulator. Power analysis is performed using the Xilinx XPower Analyzer. The proposed structure requires nearly 42% fewer cells, 40% fewer LUT flip-flop pairs, and 2% less power compared with the existing structure.
1907.06768
Mohammad Hasanzadeh Mofrad
Mohammad Hasanzadeh Mofrad, Rami Melhem, Mohammad Hammoud
Partitioning Graphs for the Cloud using Reinforcement Learning
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose Revolver, a parallel graph partitioning algorithm capable of partitioning large-scale graphs on a single shared-memory machine. Revolver employs an asynchronous processing framework, which leverages reinforcement learning and label propagation to adaptively partition a graph. In addition, it adopts a vertex-centric view of the graph where each vertex is assigned an autonomous agent responsible for selecting a suitable partition for it, distributing thereby the computation across all vertices. The intuition behind using a vertex-centric view is that it naturally fits the graph partitioning problem, which entails that a graph can be partitioned using local information provided by each vertex's neighborhood. We fully implemented and comprehensively tested Revolver using nine real-world graphs. Our results show that Revolver is scalable and can outperform three popular and state-of-the-art graph partitioners via producing comparable localized partitions, yet without sacrificing the load balance across partitions.
[ { "created": "Mon, 15 Jul 2019 21:50:56 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2019 16:16:28 GMT", "version": "v2" } ]
2019-07-18
[ [ "Mofrad", "Mohammad Hasanzadeh", "" ], [ "Melhem", "Rami", "" ], [ "Hammoud", "Mohammad", "" ] ]
In this paper, we propose Revolver, a parallel graph partitioning algorithm capable of partitioning large-scale graphs on a single shared-memory machine. Revolver employs an asynchronous processing framework, which leverages reinforcement learning and label propagation to adaptively partition a graph. In addition, it adopts a vertex-centric view of the graph where each vertex is assigned an autonomous agent responsible for selecting a suitable partition for it, distributing thereby the computation across all vertices. The intuition behind using a vertex-centric view is that it naturally fits the graph partitioning problem, which entails that a graph can be partitioned using local information provided by each vertex's neighborhood. We fully implemented and comprehensively tested Revolver using nine real-world graphs. Our results show that Revolver is scalable and can outperform three popular and state-of-the-art graph partitioners via producing comparable localized partitions, yet without sacrificing the load balance across partitions.
2207.03398
Davis Wertheimer
Davis Wertheimer, Luming Tang, and Bharath Hariharan
Diagnosing and Remedying Shot Sensitivity with Cosine Few-Shot Learners
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Few-shot recognition involves training an image classifier to distinguish novel concepts at test time using few examples (shot). Existing approaches generally assume that the shot number at test time is known in advance. This is not realistic, and the performance of a popular and foundational method has been shown to suffer when train and test shots do not match. We conduct a systematic empirical study of this phenomenon. In line with prior work, we find that shot sensitivity is broadly present across metric-based few-shot learners, but in contrast to prior work, larger neural architectures provide a degree of built-in robustness to varying test shot. More importantly, a simple, previously known but greatly overlooked class of approaches based on cosine distance consistently and greatly improves robustness to shot variation, by removing sensitivity to sample noise. We derive cosine alternatives to popular and recent few-shot classifiers, broadening their applicability to realistic settings. These cosine models consistently improve shot-robustness, outperform prior shot-robust state of the art, and provide competitive accuracy on a range of benchmarks and architectures, including notable gains in the very-low-shot regime.
[ { "created": "Thu, 7 Jul 2022 16:05:28 GMT", "version": "v1" } ]
2022-07-08
[ [ "Wertheimer", "Davis", "" ], [ "Tang", "Luming", "" ], [ "Hariharan", "Bharath", "" ] ]
Few-shot recognition involves training an image classifier to distinguish novel concepts at test time using few examples (shot). Existing approaches generally assume that the shot number at test time is known in advance. This is not realistic, and the performance of a popular and foundational method has been shown to suffer when train and test shots do not match. We conduct a systematic empirical study of this phenomenon. In line with prior work, we find that shot sensitivity is broadly present across metric-based few-shot learners, but in contrast to prior work, larger neural architectures provide a degree of built-in robustness to varying test shot. More importantly, a simple, previously known but greatly overlooked class of approaches based on cosine distance consistently and greatly improves robustness to shot variation, by removing sensitivity to sample noise. We derive cosine alternatives to popular and recent few-shot classifiers, broadening their applicability to realistic settings. These cosine models consistently improve shot-robustness, outperform prior shot-robust state of the art, and provide competitive accuracy on a range of benchmarks and architectures, including notable gains in the very-low-shot regime.
2102.08707
Mohammad Dehghani Soltani
Mohammad Dehghani Soltani, Elham Sarbazi, Nikolaos Bamiedakis, Priyanka de Souza, Hossein Kazemi, Jaafar M. H. Elmirghani, Ian H. White, Richard V. Penty, Harald Haas and Majid Safari
Safety Analysis for Laser-based Optical Wireless Communications: A Tutorial
54 pages, 24 figures. Submitted to an IEEE journal
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Light amplification by stimulated emission of radiation (laser) sources have many advantages for use in high data rate optical wireless communications. In particular, the low cost and high-bandwidth properties of laser sources such as vertical-cavity surface-emitting lasers (VCSELs) make them attractive for future indoor optical wireless communications. In order to be integrated into future indoor networks, such lasers should conform to eye safety regulations determined by the international electrotechnical commission (IEC) standards for laser safety. In this paper, we provide a detailed study of beam propagation to evaluate the received power of various laser sources, based on which, as well as the maximum permissible exposure (MPE) defined by the IEC 60825-1:2014 standard, we establish a comprehensive framework for eye safety analyses. This framework allows us to calculate the maximum allowable transmit power, which is crucial in the design of a reliable and safe laser-based wireless communication system. Initially, we consider a single-mode Gaussian beam and calculate the maximum permissible transmit power. Subsequently, we generalize this approach for higher-mode beams. It is shown that the M-squared-based approach for analysis of multimode lasers ensures the IEC eye safety limits; however, in some scenarios, it can be too conservative compared to the precise beam decomposition method. Laser safety analyses with consideration of optical elements such as lenses and diffusers, as well as for VCSEL arrays, have also been presented. Skin safety, as another significant factor of laser safety, has also been investigated in this paper. We have studied the impacts of various parameters such as wavelength, exposure duration, and the divergence angle of laser sources on the safety analysis by presenting insightful results.
[ { "created": "Wed, 17 Feb 2021 11:44:30 GMT", "version": "v1" }, { "created": "Tue, 20 Apr 2021 22:27:17 GMT", "version": "v2" }, { "created": "Wed, 5 May 2021 14:40:43 GMT", "version": "v3" } ]
2021-05-06
[ [ "Soltani", "Mohammad Dehghani", "" ], [ "Sarbazi", "Elham", "" ], [ "Bamiedakis", "Nikolaos", "" ], [ "de Souza", "Priyanka", "" ], [ "Kazemi", "Hossein", "" ], [ "Elmirghani", "Jaafar M. H.", "" ], [ "White", "Ian H.", "" ], [ "Penty", "Richard V.", "" ], [ "Haas", "Harald", "" ], [ "Safari", "Majid", "" ] ]
Light amplification by stimulated emission of radiation (laser) sources have many advantages for use in high data rate optical wireless communications. In particular, the low cost and high-bandwidth properties of laser sources such as vertical-cavity surface-emitting lasers (VCSELs) make them attractive for future indoor optical wireless communications. In order to be integrated into future indoor networks, such lasers should conform to eye safety regulations determined by the international electrotechnical commission (IEC) standards for laser safety. In this paper, we provide a detailed study of beam propagation to evaluate the received power of various laser sources, based on which, as well as the maximum permissible exposure (MPE) defined by the IEC 60825-1:2014 standard, we establish a comprehensive framework for eye safety analyses. This framework allows us to calculate the maximum allowable transmit power, which is crucial in the design of a reliable and safe laser-based wireless communication system. Initially, we consider a single-mode Gaussian beam and calculate the maximum permissible transmit power. Subsequently, we generalize this approach for higher-mode beams. It is shown that the M-squared-based approach for analysis of multimode lasers ensures the IEC eye safety limits; however, in some scenarios, it can be too conservative compared to the precise beam decomposition method. Laser safety analyses with consideration of optical elements such as lenses and diffusers, as well as for VCSEL arrays, have also been presented. Skin safety, as another significant factor of laser safety, has also been investigated in this paper. We have studied the impacts of various parameters such as wavelength, exposure duration, and the divergence angle of laser sources on the safety analysis by presenting insightful results.
2406.00150
Yilin Zheng
Yilin Zheng, Atilla Eryilmaz
Non-Federated Multi-Task Split Learning for Heterogeneous Sources
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
With the development of edge networks and mobile computing, the need to serve heterogeneous data sources at the network edge requires the design of new distributed machine learning mechanisms. As a prevalent approach, Federated Learning (FL) employs parameter-sharing and gradient-averaging between clients and a server. Despite its many favorable qualities, such as convergence and data-privacy guarantees, it is well-known that classic FL fails to address the challenge of data heterogeneity and computation heterogeneity across clients. Most existing works that aim to accommodate such sources of heterogeneity stay within the FL operation paradigm, with modifications to overcome the negative effect of heterogeneous data. In this work, as an alternative paradigm, we propose a Multi-Task Split Learning (MTSL) framework, which combines the advantages of Split Learning (SL) with the flexibility of distributed network architectures. In contrast to the FL counterpart, in this paradigm, heterogeneity is not an obstacle to overcome, but a useful property to take advantage of. As such, this work aims to introduce a new architecture and methodology to perform multi-task learning for heterogeneous data sources efficiently, with the hope of encouraging the community to further explore the potential advantages we reveal. To support this promise, we first show through theoretical analysis that MTSL can achieve fast convergence by tuning the learning rate of the server and clients. Then, we compare the performance of MTSL with existing multi-task FL methods numerically on several image classification datasets to show that MTSL has advantages over FL in training speed, communication cost, and robustness to heterogeneous data.
[ { "created": "Fri, 31 May 2024 19:27:03 GMT", "version": "v1" } ]
2024-06-04
[ [ "Zheng", "Yilin", "" ], [ "Eryilmaz", "Atilla", "" ] ]
With the development of edge networks and mobile computing, the need to serve heterogeneous data sources at the network edge requires the design of new distributed machine learning mechanisms. As a prevalent approach, Federated Learning (FL) employs parameter-sharing and gradient-averaging between clients and a server. Despite its many favorable qualities, such as convergence and data-privacy guarantees, it is well-known that classic FL fails to address the challenge of data heterogeneity and computation heterogeneity across clients. Most existing works that aim to accommodate such sources of heterogeneity stay within the FL operation paradigm, with modifications to overcome the negative effect of heterogeneous data. In this work, as an alternative paradigm, we propose a Multi-Task Split Learning (MTSL) framework, which combines the advantages of Split Learning (SL) with the flexibility of distributed network architectures. In contrast to the FL counterpart, in this paradigm, heterogeneity is not an obstacle to overcome, but a useful property to take advantage of. As such, this work aims to introduce a new architecture and methodology to perform multi-task learning for heterogeneous data sources efficiently, with the hope of encouraging the community to further explore the potential advantages we reveal. To support this promise, we first show through theoretical analysis that MTSL can achieve fast convergence by tuning the learning rate of the server and clients. Then, we compare the performance of MTSL with existing multi-task FL methods numerically on several image classification datasets to show that MTSL has advantages over FL in training speed, communication cost, and robustness to heterogeneous data.
1612.07548
Wendelin B\"ohmer
Wendelin B\"ohmer and Rong Guo and Klaus Obermayer
Non-Deterministic Policy Improvement Stabilizes Approximated Reinforcement Learning
This paper has been presented at the 13th European Workshop on Reinforcement Learning (EWRL 2016) on the 3rd and 4th of December 2016 in Barcelona, Spain
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates a type of instability that is linked to the greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like LSPI by controlling the improvements' stochasticity. Additionally we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should also be easily transferable to more sophisticated algorithms like deep reinforcement learning.
[ { "created": "Thu, 22 Dec 2016 11:30:35 GMT", "version": "v1" } ]
2016-12-23
[ [ "Böhmer", "Wendelin", "" ], [ "Guo", "Rong", "" ], [ "Obermayer", "Klaus", "" ] ]
This paper investigates a type of instability that is linked to the greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like LSPI by controlling the improvements' stochasticity. Additionally we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should also be easily transferable to more sophisticated algorithms like deep reinforcement learning.
2207.14545
Yanchen Li
Yanchen Li, Qingzhong Ai and Fumihiko Ino
A One-Shot Reparameterization Method for Reducing the Loss of Tile Pruning on DNNs
Presented at IJCNN 2022, oral
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Recently, tile pruning has been widely studied to accelerate the inference of deep neural networks (DNNs). However, we found that the loss due to tile pruning, which can eliminate important elements together with unimportant elements, is large on trained DNNs. In this study, we propose a one-shot reparameterization method, called TileTrans, to reduce the loss of tile pruning. Specifically, we repermute the rows or columns of the weight matrix such that the model architecture can be kept unchanged after reparameterization. This repermutation realizes the reparameterization of the DNN model without any retraining. The proposed reparameterization method combines important elements into the same tile; thus, preserving the important elements after the tile pruning. Furthermore, TileTrans can be seamlessly integrated into existing tile pruning methods because it is a pre-processing method executed before pruning, which is orthogonal to most existing methods. The experimental results demonstrate that our method is essential in reducing the loss of tile pruning on DNNs. Specifically, the accuracy is improved by up to 17% for AlexNet while 5% for ResNet-34, where both models are pre-trained on ImageNet.
[ { "created": "Fri, 29 Jul 2022 08:27:15 GMT", "version": "v1" } ]
2022-08-01
[ [ "Li", "Yanchen", "" ], [ "Ai", "Qingzhong", "" ], [ "Ino", "Fumihiko", "" ] ]
Recently, tile pruning has been widely studied to accelerate the inference of deep neural networks (DNNs). However, we found that the loss due to tile pruning, which can eliminate important elements together with unimportant elements, is large on trained DNNs. In this study, we propose a one-shot reparameterization method, called TileTrans, to reduce the loss of tile pruning. Specifically, we repermute the rows or columns of the weight matrix such that the model architecture can be kept unchanged after reparameterization. This repermutation realizes the reparameterization of the DNN model without any retraining. The proposed reparameterization method combines important elements into the same tile; thus, preserving the important elements after the tile pruning. Furthermore, TileTrans can be seamlessly integrated into existing tile pruning methods because it is a pre-processing method executed before pruning, which is orthogonal to most existing methods. The experimental results demonstrate that our method is essential in reducing the loss of tile pruning on DNNs. Specifically, the accuracy is improved by up to 17% for AlexNet while 5% for ResNet-34, where both models are pre-trained on ImageNet.
2309.04656
Jan Vondrak
Shahar Dobzinski, Wenzheng Li, Aviad Rubinstein and Jan Vondrak
A constant factor approximation for Nash social welfare with subadditive valuations
null
null
null
null
cs.GT cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by solving a configuration-type LP and using a rounding procedure for (utilitarian) social welfare as a blackbox, which could be applicable to other variants of the problem.
[ { "created": "Sat, 9 Sep 2023 01:31:14 GMT", "version": "v1" } ]
2023-09-12
[ [ "Dobzinski", "Shahar", "" ], [ "Li", "Wenzheng", "" ], [ "Rubinstein", "Aviad", "" ], [ "Vondrak", "Jan", "" ] ]
We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by solving a configuration-type LP and using a rounding procedure for (utilitarian) social welfare as a blackbox, which could be applicable to other variants of the problem.
2106.06617
Erik Nelson
Erik Nelson
Inexact Loops in Robotics Problems
Robotics: Science and Systems (RSS) 2021
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Loops are pervasive in robotics problems, appearing in mapping and localization, where one is interested in finding loop closure constraints to better approximate robot poses or other estimated quantities, as well as planning and prediction, where one is interested in the homotopy classes of the space through which a robot is moving. We generalize the standard topological definition of a loop to cases where a trajectory passes close to itself, but doesn't necessarily touch, giving a definition that is more practical for real robotics problems. This relaxation leads to new and useful properties of inexact loops, such as their ability to be partitioned into topologically connected sets closely matching the concept of a "loop closure", and the existence of simple and nonsimple loops. Building from these ideas, we introduce several ways to measure properties and quantities of inexact loops on a trajectory, such as the trajectory's "loop area" and "loop density", and use them to compare strategies for sampling representative inexact loops to build constraints in mapping and localization problems.
[ { "created": "Fri, 11 Jun 2021 21:33:26 GMT", "version": "v1" } ]
2021-06-15
[ [ "Nelson", "Erik", "" ] ]
Loops are pervasive in robotics problems, appearing in mapping and localization, where one is interested in finding loop closure constraints to better approximate robot poses or other estimated quantities, as well as planning and prediction, where one is interested in the homotopy classes of the space through which a robot is moving. We generalize the standard topological definition of a loop to cases where a trajectory passes close to itself, but doesn't necessarily touch, giving a definition that is more practical for real robotics problems. This relaxation leads to new and useful properties of inexact loops, such as their ability to be partitioned into topologically connected sets closely matching the concept of a "loop closure", and the existence of simple and nonsimple loops. Building from these ideas, we introduce several ways to measure properties and quantities of inexact loops on a trajectory, such as the trajectory's "loop area" and "loop density", and use them to compare strategies for sampling representative inexact loops to build constraints in mapping and localization problems.
2107.11792
Haide Wang
Haide Wang, Ji Zhou, Jinlong Wei, Dong Guo, Yuanhua Feng, Weiping Liu, Changyuan Yu, Dawei Wang, and Zhaohui Li
Multi-Rate Nyquist-SCM for C-Band 100Gbit/s Signal over 50km Dispersion-Uncompensated Link
This paper has been accepted by the Journal of Lightwave Technology
null
10.1109/JLT.2021.3131603
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, to the best of our knowledge, we propose the first multi-rate Nyquist-subcarrier modulation (SCM) for C-band 100Gbit/s signal transmission over a 50km dispersion-uncompensated link. Chromatic dispersion (CD) introduces severe spectral nulls on optical double-sideband signals, which greatly degrades the performance of intensity-modulation and direct-detection systems. Based on prior knowledge of the dispersive channel, Nyquist-SCM with multi-rate subcarriers is proposed to flexibly avoid the CD-caused spectral nulls. The signal on each subcarrier can be individually recovered by digital signal processing, including a feed-forward equalizer with no more than 31 taps, a two-tap post filter, and maximum likelihood sequence estimation with one memory length. Combined with entropy loading based on probabilistic constellation shaping to maximize the capacity-reach, the C-band 100Gbit/s multi-rate Nyquist-SCM signal over a 50km dispersion-uncompensated link can achieve the 7% hard-decision forward error correction limit and an average normalized generalized mutual information of 0.967 at a received optical power of -4dBm and an optical signal-to-noise ratio of 47.67dB. In conclusion, multi-rate Nyquist-SCM shows great potential for overcoming CD-caused spectral distortions.
[ { "created": "Sun, 25 Jul 2021 12:22:09 GMT", "version": "v1" }, { "created": "Sun, 28 Nov 2021 08:03:03 GMT", "version": "v2" } ]
2022-04-06
[ [ "Wang", "Haide", "" ], [ "Zhou", "Ji", "" ], [ "Wei", "Jinlong", "" ], [ "Guo", "Dong", "" ], [ "Feng", "Yuanhua", "" ], [ "Liu", "Weiping", "" ], [ "Yu", "Changyuan", "" ], [ "Wang", "Dawei", "" ], [ "Li", "Zhaohui", "" ] ]
In this paper, to the best of our knowledge, we propose the first multi-rate Nyquist-subcarrier modulation (SCM) for C-band 100Gbit/s signal transmission over a 50km dispersion-uncompensated link. Chromatic dispersion (CD) introduces severe spectral nulls on optical double-sideband signals, which greatly degrades the performance of intensity-modulation and direct-detection systems. Based on prior knowledge of the dispersive channel, Nyquist-SCM with multi-rate subcarriers is proposed to flexibly avoid the CD-caused spectral nulls. The signal on each subcarrier can be individually recovered by digital signal processing, including a feed-forward equalizer with no more than 31 taps, a two-tap post filter, and maximum likelihood sequence estimation with one memory length. Combined with entropy loading based on probabilistic constellation shaping to maximize the capacity-reach, the C-band 100Gbit/s multi-rate Nyquist-SCM signal over a 50km dispersion-uncompensated link can achieve the 7% hard-decision forward error correction limit and an average normalized generalized mutual information of 0.967 at a received optical power of -4dBm and an optical signal-to-noise ratio of 47.67dB. In conclusion, multi-rate Nyquist-SCM shows great potential for overcoming CD-caused spectral distortions.
2005.01777
Ngoc Thang Vu
Chia-Yu Li, Daniel Ortega, Dirk V\"ath, Florian Lux, Lindsey Vanderlyn, Maximilian Schmidt, Michael Neumann, Moritz V\"olkel, Pavel Denisov, Sabrina Jenne, Zorica Kacarevic and Ngoc Thang Vu
ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents
All authors contributed equally. Accepted to be presented at ACL - System demonstrations - 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ADVISER - an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), socially-engaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents. The final Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research. Link to open-source code: https://github.com/DigitalPhonetics/adviser
[ { "created": "Mon, 4 May 2020 18:27:58 GMT", "version": "v1" } ]
2020-05-06
[ [ "Li", "Chia-Yu", "" ], [ "Ortega", "Daniel", "" ], [ "Väth", "Dirk", "" ], [ "Lux", "Florian", "" ], [ "Vanderlyn", "Lindsey", "" ], [ "Schmidt", "Maximilian", "" ], [ "Neumann", "Michael", "" ], [ "Völkel", "Moritz", "" ], [ "Denisov", "Pavel", "" ], [ "Jenne", "Sabrina", "" ], [ "Kacarevic", "Zorica", "" ], [ "Vu", "Ngoc Thang", "" ] ]
We present ADVISER - an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), socially-engaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents. The final Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research. Link to open-source code: https://github.com/DigitalPhonetics/adviser
2306.01891
Abanob Soliman
Abanob Soliman, Fabien Bonardi, D\'esir\'e Sidib\'e, Samia Bouchafa
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
Accepted for publication in the IEEE Transactions on Intelligent Vehicles
Vol.0, 2024
10.1109/TIV.2024.3412595
null
cs.CV cs.RO eess.IV eess.SP
http://creativecommons.org/licenses/by/4.0/
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance in terms of robustness and accuracy in adverse conditions, especially in large-scale HDR scenarios. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM.
[ { "created": "Fri, 2 Jun 2023 19:52:13 GMT", "version": "v1" }, { "created": "Wed, 23 Aug 2023 21:29:03 GMT", "version": "v2" }, { "created": "Sun, 9 Jun 2024 13:56:51 GMT", "version": "v3" } ]
2024-06-11
[ [ "Soliman", "Abanob", "" ], [ "Bonardi", "Fabien", "" ], [ "Sidibé", "Désiré", "" ], [ "Bouchafa", "Samia", "" ] ]
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance in terms of robustness and accuracy in adverse conditions, especially in large-scale HDR scenarios. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM.
2101.07927
Yuanhao Gong
Yuanhao Gong, Wenming Tang, Lebin Zhou, Lantao Yu, Guoping Qiu
A Discrete Scheme for Computing Image's Weighted Gaussian Curvature
null
null
null
null
cs.CV eess.IV eess.SP math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weighted Gaussian curvature is an important measurement for images. However, its conventional computation scheme has low performance and low accuracy, and requires the input image to be second-order differentiable. To tackle these three issues, we propose a novel discrete computation scheme for the weighted Gaussian curvature. Our scheme does not require second-order differentiability. Moreover, our scheme is more accurate, has a smaller support region, and is computationally more efficient than the conventional schemes. Therefore, our scheme holds promise for a large range of applications where the weighted Gaussian curvature is needed, for example, image smoothing, cartoon-texture decomposition, and optical flow estimation.
[ { "created": "Wed, 20 Jan 2021 02:15:51 GMT", "version": "v1" } ]
2021-01-21
[ [ "Gong", "Yuanhao", "" ], [ "Tang", "Wenming", "" ], [ "Zhou", "Lebin", "" ], [ "Yu", "Lantao", "" ], [ "Qiu", "Guoping", "" ] ]
Weighted Gaussian curvature is an important measurement for images. However, its conventional computation scheme has low performance and low accuracy, and requires the input image to be second-order differentiable. To tackle these three issues, we propose a novel discrete computation scheme for the weighted Gaussian curvature. Our scheme does not require second-order differentiability. Moreover, our scheme is more accurate, has a smaller support region, and is computationally more efficient than the conventional schemes. Therefore, our scheme holds promise for a large range of applications where the weighted Gaussian curvature is needed, for example, image smoothing, cartoon-texture decomposition, and optical flow estimation.
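The conventional second-order scheme that this abstract argues against can be sketched in a few lines of NumPy. This is a minimal illustration of the baseline (Gaussian curvature of the image surface z = f(x, y) via finite differences), not the paper's proposed discrete scheme; note how it implicitly assumes the input is twice differentiable:

```python
import numpy as np

def gaussian_curvature(img):
    """Conventional Gaussian curvature of the image surface z = f(x, y):
    K = (f_xx * f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2,
    computed with finite differences, so the input must be (numerically)
    twice differentiable -- the requirement the discrete scheme drops."""
    fy, fx = np.gradient(img.astype(float))   # first derivatives (rows, cols)
    fyy, fyx = np.gradient(fy)                # second derivatives of fy
    fxy, fxx = np.gradient(fx)                # second derivatives of fx
    return (fxx * fyy - fxy * fyx) / (1.0 + fx**2 + fy**2) ** 2

# Sanity checks: a flat patch has zero curvature everywhere;
# a paraboloid x^2 + y^2 has K = 4 at its apex (unit grid spacing).
flat = np.zeros((7, 7))
xx, yy = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
bowl = (xx**2 + yy**2).astype(float)
print(np.allclose(gaussian_curvature(flat), 0.0))   # True
print(gaussian_curvature(bowl)[3, 3])               # 4.0
```

Central differences are exact for quadratics in the interior, which is why the paraboloid check is exact; on noisy images, however, the repeated differentiation amplifies noise, which is one of the accuracy issues the abstract raises.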
2105.08597
Mohammed Ibrahim
Mohammed Ibrahim, Susan Gauch, Tyler Gerth, Brandon Cox
WOVe: Incorporating Word Order in GloVe Word Embeddings
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Word vector representations open up new opportunities to extract useful information from unstructured text. Defining a word as a vector makes it easy for machine learning algorithms to understand a text and extract information from it. Word vector representations have been used in many applications such as word synonyms, word analogy, syntactic parsing, and many others. GloVe, based on word contexts and matrix vectorization, is an effective vector-learning algorithm that improves on previous vector-learning algorithms. However, the GloVe model fails to explicitly consider the order in which words appear within their contexts. In this paper, multiple methods of incorporating word order in GloVe word embeddings are proposed. Experimental results show that our Word Order Vector (WOVe) word embeddings approach outperforms unmodified GloVe on the natural language tasks of analogy completion and word similarity. WOVe with direct concatenation slightly outperformed GloVe on the word similarity task, increasing average rank by 2%. However, it greatly improved on the GloVe baseline on a word analogy task, achieving an average 36.34% improvement in accuracy.
[ { "created": "Tue, 18 May 2021 15:28:20 GMT", "version": "v1" } ]
2021-05-19
[ [ "Ibrahim", "Mohammed", "" ], [ "Gauch", "Susan", "" ], [ "Gerth", "Tyler", "" ], [ "Cox", "Brandon", "" ] ]
Word vector representations open up new opportunities to extract useful information from unstructured text. Defining a word as a vector makes it easy for machine learning algorithms to understand a text and extract information from it. Word vector representations have been used in many applications such as word synonyms, word analogy, syntactic parsing, and many others. GloVe, based on word contexts and matrix vectorization, is an effective vector-learning algorithm that improves on previous vector-learning algorithms. However, the GloVe model fails to explicitly consider the order in which words appear within their contexts. In this paper, multiple methods of incorporating word order in GloVe word embeddings are proposed. Experimental results show that our Word Order Vector (WOVe) word embeddings approach outperforms unmodified GloVe on the natural language tasks of analogy completion and word similarity. WOVe with direct concatenation slightly outperformed GloVe on the word similarity task, increasing average rank by 2%. However, it greatly improved on the GloVe baseline on a word analogy task, achieving an average 36.34% improvement in accuracy.
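The GloVe limitation the abstract points at is visible in how co-occurrence statistics are counted. The sketch below is an illustrative toy, not the WOVe implementation: with `keep_order=True` the relative offset becomes part of the co-occurrence key, so order-sensitive statistics survive, whereas the plain keys collapse "dog bites man" and "man bites dog" onto the same counts:

```python
from collections import Counter

def cooccurrences(tokens, window=2, keep_order=True):
    """Count co-occurrences in a symmetric window. With keep_order=True the
    relative offset is part of the key, preserving word-order information
    that plain GloVe-style (word, context) counts discard."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for off in range(-window, window + 1):
            j = i + off
            if off == 0 or j < 0 or j >= len(tokens):
                continue  # skip the word itself and out-of-range positions
            key = (w, tokens[j], off) if keep_order else (w, tokens[j])
            counts[key] += 1
    return counts

toks = "the dog bites the man".split()
ordered = cooccurrences(toks)                      # offset-aware counts
unordered = cooccurrences(toks, keep_order=False)  # plain GloVe-style counts
```

In the ordered counts, "bites" appears to the right of "dog" (`("dog", "bites", 1)`) but never to its left, a distinction the unordered counts cannot express.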
1910.10176
Rene Haberland
Ren\'e Haberland
Review of Recent Heap Specification and Verification Techniques
fully translated preprint (English); final journal paper 26 pages (in Russian)
Computer Tools in Education Journal, ISSN 2071-2359, ISSN 2071-2340
10.32603/2071-2340-2019-2-5-30
null
cs.LO cs.SC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The article provides an overview of the existing methods of dynamic memory verification; a comparative analysis is carried out; the applicability for solving problems of control, monitoring, and verification of dynamic memory is evaluated. This article is divided into eight sections. The first section introduces formal verification, followed by a section that discusses dynamic memory management problems. The third section discusses Hoare's calculus resumed by heap transformations to the stack. The fifth and sixth sections introduce the concept of dynamic memory shape analysis and the rotation of pointers. The seventh is on separation logic. The last section discusses possible areas of further research, particularly the recognition at recording level of various instances of objects; automation of proofs; "hot" code, that is, software code that updates itself when the program runs; expanding intuitiveness, for instance, on proof explanations.
[ { "created": "Tue, 22 Oct 2019 18:01:33 GMT", "version": "v1" }, { "created": "Thu, 24 Mar 2022 11:43:05 GMT", "version": "v2" } ]
2022-03-25
[ [ "Haberland", "René", "" ] ]
The article provides an overview of the existing methods of dynamic memory verification; a comparative analysis is carried out; the applicability for solving problems of control, monitoring, and verification of dynamic memory is evaluated. This article is divided into eight sections. The first section introduces formal verification, followed by a section that discusses dynamic memory management problems. The third section discusses Hoare's calculus resumed by heap transformations to the stack. The fifth and sixth sections introduce the concept of dynamic memory shape analysis and the rotation of pointers. The seventh is on separation logic. The last section discusses possible areas of further research, particularly the recognition at recording level of various instances of objects; automation of proofs; "hot" code, that is, software code that updates itself when the program runs; expanding intuitiveness, for instance, on proof explanations.
1303.1517
Pei Wang
Pei Wang
Belief Revision in Probability Theory
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-519-526
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.
[ { "created": "Wed, 6 Mar 2013 14:24:24 GMT", "version": "v1" } ]
2013-03-08
[ [ "Wang", "Pei", "" ] ]
In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.
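Jeffrey's rule, one of the updating mechanisms the abstract discusses, has a compact closed form: when evidence only shifts P(E) to a new value q rather than making E certain, P_new(A) = P(A|E)·q + P(A|¬E)·(1−q). The numbers below are illustrative, not from the paper; ordinary Bayesian conditioning falls out as the special case q = 1:

```python
def jeffrey_update(p_a_given_e, p_a_given_not_e, new_p_e):
    """Jeffrey's rule: revise P(A) when an observation shifts P(E) to
    new_p_e without making E certain. Bayesian conditioning is the
    special case new_p_e = 1."""
    return p_a_given_e * new_p_e + p_a_given_not_e * (1.0 - new_p_e)

# Illustrative numbers: an observation raises P(E) to 0.7,
# with P(A|E) = 0.8 and P(A|not E) = 0.2.
print(round(jeffrey_update(0.8, 0.2, 0.7), 2))   # 0.62
print(jeffrey_update(0.8, 0.2, 1.0))             # 0.8 (ordinary conditioning)
```

Note that both rules take the conditional probabilities P(A|E) and P(A|¬E) as given; the paper's point is that a distribution alone does not carry the extra information that genuine revision (as opposed to updating) requires.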
1910.04493
Kevin Gomez
Kevin Gomez, Matthias T\"aschner, M. Ali Rostami, Christopher Rost, Erhard Rahm
Graph Sampling with Distributed In-Memory Dataflow Systems
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a large graph, a graph sample determines a subgraph with similar characteristics for certain metrics of the original graph. The samples are much smaller, thereby accelerating and simplifying the analysis and visualization of large graphs. We focus on the implementation of distributed graph sampling for Big Data frameworks and in-memory dataflow systems such as Apache Spark or Apache Flink. We evaluate the scalability of the new implementations and analyze to what degree the sampling approaches preserve certain graph metrics compared to the original graph. The latter analysis also uses comparative graph visualizations. The presented methods will be open source and integrated into Gradoop, a system for distributed graph analytics.
[ { "created": "Thu, 10 Oct 2019 11:44:59 GMT", "version": "v1" } ]
2019-10-11
[ [ "Gomez", "Kevin", "" ], [ "Täschner", "Matthias", "" ], [ "Rostami", "M. Ali", "" ], [ "Rost", "Christopher", "" ], [ "Rahm", "Erhard", "" ] ]
Given a large graph, a graph sample determines a subgraph with similar characteristics for certain metrics of the original graph. The samples are much smaller, thereby accelerating and simplifying the analysis and visualization of large graphs. We focus on the implementation of distributed graph sampling for Big Data frameworks and in-memory dataflow systems such as Apache Spark or Apache Flink. We evaluate the scalability of the new implementations and analyze to what degree the sampling approaches preserve certain graph metrics compared to the original graph. The latter analysis also uses comparative graph visualizations. The presented methods will be open source and integrated into Gradoop, a system for distributed graph analytics.
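The simplest member of the sampling family discussed here can be sketched in plain Python (the paper's implementations target distributed dataflow systems via Gradoop; this single-machine analogue only shows the operator's semantics): keep each vertex independently with a given probability, then induce the subgraph on the kept vertices.

```python
import random

def node_sample(nodes, edges, fraction, seed=0):
    """Uniform random node sampling: keep each vertex independently with
    probability `fraction`, then induce the subgraph on the kept vertices."""
    rng = random.Random(seed)
    kept = {v for v in nodes if rng.random() < fraction}
    sub_edges = [(u, v) for (u, v) in edges if u in kept and v in kept]
    return kept, sub_edges

nodes = list(range(100))
edges = [(i, (i + 1) % 100) for i in range(100)]   # a 100-cycle
kept, sub = node_sample(nodes, edges, fraction=0.5)
```

Because both the vertex filter and the edge-induction step are embarrassingly parallel per element, the same operator maps naturally onto Spark or Flink transformations (filter plus join), which is what makes it a good fit for in-memory dataflow systems.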
1909.11285
Ze Wang
Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, and Qiang Qiu
A Dictionary Approach to Domain-Invariant Learning in Deep Networks
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN). By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of dictionary atoms, we show for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a convolutional layer into a domain-specific atom layer and a domain-shared coefficient layer, while both remain convolutional. An input channel will now first convolve spatially only with each respective domain-specific dictionary atom to "absorb" domain variations, and then output channels are linearly combined using common decomposition coefficients trained to promote shared semantics across domains. We use toy examples, rigorous analysis, and real-world examples with diverse datasets and architectures, to show the proposed plug-in framework's effectiveness in cross and joint domain performance and domain adaptation. With the proposed architecture, we need only a small set of dictionary atoms to model each additional domain, which brings a negligible amount of additional parameters, typically a few hundred.
[ { "created": "Wed, 25 Sep 2019 04:35:04 GMT", "version": "v1" }, { "created": "Mon, 28 Sep 2020 23:31:44 GMT", "version": "v2" } ]
2020-09-30
[ [ "Wang", "Ze", "" ], [ "Cheng", "Xiuyuan", "" ], [ "Sapiro", "Guillermo", "" ], [ "Qiu", "Qiang", "" ] ]
In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN). By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of dictionary atoms, we show for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a convolutional layer into a domain-specific atom layer and a domain-shared coefficient layer, while both remain convolutional. An input channel will now first convolve spatially only with each respective domain-specific dictionary atom to "absorb" domain variations, and then output channels are linearly combined using common decomposition coefficients trained to promote shared semantics across domains. We use toy examples, rigorous analysis, and real-world examples with diverse datasets and architectures, to show the proposed plug-in framework's effectiveness in cross and joint domain performance and domain adaptation. With the proposed architecture, we need only a small set of dictionary atoms to model each additional domain, which brings a negligible amount of additional parameters, typically a few hundred.
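The core observation above, that a filter bank is well approximated by a small shared dictionary of atoms plus per-filter coefficients, can be demonstrated with a NumPy toy. This is not the paper's convolutional architecture; it only factorizes a flattened bank of 3x3 filters with a truncated SVD under the assumption that the bank is (nearly) low rank:

```python
import numpy as np

# Construct a bank of 64 3x3 filters that is exactly spanned by r atoms.
rng = np.random.default_rng(1)
r = 6
atoms_true = rng.normal(size=(r, 9))        # r ground-truth atoms, flattened 3x3
coeff_true = rng.normal(size=(64, r))       # per-filter combination weights
filters = coeff_true @ atoms_true           # the filter bank, shape (64, 9)

# Recover a rank-r factorization: shared atoms + per-filter coefficients.
U, S, Vt = np.linalg.svd(filters, full_matrices=False)
atoms = Vt[:r]                              # learned dictionary (orthonormal rows)
coeffs = filters @ atoms.T                  # projection coefficients
rel_err = np.linalg.norm(coeffs @ atoms - filters) / np.linalg.norm(filters)
```

Since the bank has rank r by construction, the rank-r reconstruction is exact up to floating point; in the paper's setting the atoms become the small domain-specific part while the coefficients are shared across domains.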
2202.10824
Yuyu Guo
Yuyu Guo, Jingkuan Song, Lianli Gao, Heng Tao Shen
One-shot Scene Graph Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a structured representation of the image content, the visual scene graph (visual relationship) acts as a bridge between computer vision and natural language processing. Existing models on the scene graph generation task notoriously require tens or hundreds of labeled samples. By contrast, human beings can learn visual relationships from a few or even one example. Inspired by this, we design a task named One-Shot Scene Graph Generation, where each relationship triplet (e.g., "dog-has-head") comes from only one labeled example. The key insight is that rather than learning from scratch, one can utilize rich prior knowledge. In this paper, we propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task. Specifically, the Relational Knowledge represents the prior knowledge of relationships between entities extracted from the visual content, e.g., the visual relationships "standing in", "sitting in", and "lying in" may exist between "dog" and "yard", while the Commonsense Knowledge encodes "sense-making" knowledge like "dog can guard yard". By organizing these two kinds of knowledge in a graph structure, Graph Convolution Networks (GCNs) are used to extract knowledge-embedded semantic features of the entities. Besides, instead of extracting isolated visual features from each entity generated by Faster R-CNN, we utilize an Instance Relation Transformer encoder to fully explore their context information. Based on a constructed one-shot dataset, the experimental results show that our method significantly outperforms existing state-of-the-art methods by a large margin. Ablation studies also verify the effectiveness of the Instance Relation Transformer encoder and the Multiple Structured Knowledge.
[ { "created": "Tue, 22 Feb 2022 11:32:59 GMT", "version": "v1" }, { "created": "Sat, 26 Feb 2022 03:01:37 GMT", "version": "v2" } ]
2022-03-01
[ [ "Guo", "Yuyu", "" ], [ "Song", "Jingkuan", "" ], [ "Gao", "Lianli", "" ], [ "Shen", "Heng Tao", "" ] ]
As a structured representation of the image content, the visual scene graph (visual relationship) acts as a bridge between computer vision and natural language processing. Existing models on the scene graph generation task notoriously require tens or hundreds of labeled samples. By contrast, human beings can learn visual relationships from a few or even one example. Inspired by this, we design a task named One-Shot Scene Graph Generation, where each relationship triplet (e.g., "dog-has-head") comes from only one labeled example. The key insight is that rather than learning from scratch, one can utilize rich prior knowledge. In this paper, we propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task. Specifically, the Relational Knowledge represents the prior knowledge of relationships between entities extracted from the visual content, e.g., the visual relationships "standing in", "sitting in", and "lying in" may exist between "dog" and "yard", while the Commonsense Knowledge encodes "sense-making" knowledge like "dog can guard yard". By organizing these two kinds of knowledge in a graph structure, Graph Convolution Networks (GCNs) are used to extract knowledge-embedded semantic features of the entities. Besides, instead of extracting isolated visual features from each entity generated by Faster R-CNN, we utilize an Instance Relation Transformer encoder to fully explore their context information. Based on a constructed one-shot dataset, the experimental results show that our method significantly outperforms existing state-of-the-art methods by a large margin. Ablation studies also verify the effectiveness of the Instance Relation Transformer encoder and the Multiple Structured Knowledge.
2006.10241
Aritra Guha
Aritra Guha, Rayleigh Lei, Jiacheng Zhu, XuanLong Nguyen and Ding Zhao
Robust Unsupervised Learning of Temporal Dynamic Interactions
null
null
null
null
cs.LG cs.MA cs.RO stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust representation learning of temporal dynamic interactions is an important problem in robotic learning in general and automated unsupervised learning in particular. Temporal dynamic interactions can be described by (multiple) geometric trajectories in a suitable space over which unsupervised learning techniques may be applied to extract useful features from raw and high-dimensional data measurements. Taking a geometric approach to robust representation learning for temporal dynamic interactions, it is necessary to develop suitable metrics and a systematic methodology for comparison and for assessing the stability of an unsupervised learning method with respect to its tuning parameters. Such metrics must account for the (geometric) constraints in the physical world as well as the uncertainty associated with the learned patterns. In this paper we introduce a model-free metric based on the Procrustes distance for robust representation learning of interactions, and an optimal transport based distance metric for comparing between distributions of interaction primitives. These distance metrics can serve as an objective for assessing the stability of an interaction learning algorithm. They are also used for comparing the outcomes produced by different algorithms. Moreover, they may also be adopted as an objective function to obtain clusters and representative interaction primitives. These concepts and techniques will be introduced, along with mathematical properties, while their usefulness will be demonstrated in unsupervised learning of vehicle-to-vehicle interactions extracted from the Safety Pilot database, the world's largest database for connected vehicles.
[ { "created": "Thu, 18 Jun 2020 02:39:45 GMT", "version": "v1" } ]
2020-06-19
[ [ "Guha", "Aritra", "" ], [ "Lei", "Rayleigh", "" ], [ "Zhu", "Jiacheng", "" ], [ "Nguyen", "XuanLong", "" ], [ "Zhao", "Ding", "" ] ]
Robust representation learning of temporal dynamic interactions is an important problem in robotic learning in general and automated unsupervised learning in particular. Temporal dynamic interactions can be described by (multiple) geometric trajectories in a suitable space over which unsupervised learning techniques may be applied to extract useful features from raw and high-dimensional data measurements. Taking a geometric approach to robust representation learning for temporal dynamic interactions, it is necessary to develop suitable metrics and a systematic methodology for comparison and for assessing the stability of an unsupervised learning method with respect to its tuning parameters. Such metrics must account for the (geometric) constraints in the physical world as well as the uncertainty associated with the learned patterns. In this paper we introduce a model-free metric based on the Procrustes distance for robust representation learning of interactions, and an optimal transport based distance metric for comparing between distributions of interaction primitives. These distance metrics can serve as an objective for assessing the stability of an interaction learning algorithm. They are also used for comparing the outcomes produced by different algorithms. Moreover, they may also be adopted as an objective function to obtain clusters and representative interaction primitives. These concepts and techniques will be introduced, along with mathematical properties, while their usefulness will be demonstrated in unsupervised learning of vehicle-to-vehicle interactions extracted from the Safety Pilot database, the world's largest database for connected vehicles.
0912.1015
Rdv Ijcsis
Mrs. J. P. Rothe, Dr. A. K. Wadhwani, Dr. Mrs. S. Wadhwani
Short Term Load Forecasting Using Multi Parameter Regression
4 pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS November 2009, ISSN 1947 5500, http://sites.google.com/site/ijcsis/
International Journal of Computer Science and Information Security, IJCSIS, Vol. 6, No. 2, pp. 303-306, November 2009, USA
null
ISSN 1947 5500
cs.NE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-term load forecasting in this paper uses input data that depend on parameters such as the load for the current hour and the previous two hours, the temperature for the current hour and the previous two hours, the wind for the current hour and the previous two hours, and the cloud cover for the current hour and the previous two hours. The forecast is of the load demand for the coming hour, based on the input parameters at that hour. In this paper we use a multi-parameter regression method for forecasting, whose error is within a tolerable range. Algorithms implementing these forecasting techniques have been programmed using MATLAB and applied to the case study. Other methodologies in this area are ANN, fuzzy, and evolutionary algorithms, for which investigations are in progress. Adaptive multi-parameter regression for load forecasting will be possible in the near future.
[ { "created": "Sat, 5 Dec 2009 13:18:35 GMT", "version": "v1" } ]
2009-12-08
[ [ "Rothe", "Mrs. J. P.", "" ], [ "Wadhwani", "Dr. A. K.", "" ], [ "Wadhwani", "Dr. Mrs. S.", "" ] ]
Short-term load forecasting in this paper uses input data that depend on parameters such as the load for the current hour and the previous two hours, the temperature for the current hour and the previous two hours, the wind for the current hour and the previous two hours, and the cloud cover for the current hour and the previous two hours. The forecast is of the load demand for the coming hour, based on the input parameters at that hour. In this paper we use a multi-parameter regression method for forecasting, whose error is within a tolerable range. Algorithms implementing these forecasting techniques have been programmed using MATLAB and applied to the case study. Other methodologies in this area are ANN, fuzzy, and evolutionary algorithms, for which investigations are in progress. Adaptive multi-parameter regression for load forecasting will be possible in the near future.
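The multi-parameter regression described above amounts to ordinary least squares over the lagged inputs. The sketch below uses NumPy rather than the paper's MATLAB, and the 12-feature layout (load, temperature, wind, and cloud cover, each for the current hour and the previous two) is an assumption read from the abstract; the data are synthetic stand-ins, not the paper's case study:

```python
import numpy as np

# Hypothetical feature layout: 4 quantities x 3 hours = 12 inputs,
# predicting the load demand of the coming hour.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                 # stand-in historical measurements
w_true = rng.normal(size=12)
y = X @ w_true + 0.01 * rng.normal(size=300)   # synthetic next-hour load

# Multi-parameter regression = ordinary least squares with an intercept column.
A = np.hstack([X, np.ones((300, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((A @ w - y) ** 2))
```

With near-linear synthetic data the fitted coefficients recover the generating weights and the residual RMSE stays on the order of the injected noise, which is the "error within a tolerable range" behavior the abstract claims for the real data.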
0803.1908
Massimiliano Laddomada Ph.D.
F. Daneshgaran, M. Laddomada, F. Mesiti, M. Mondin
On the Throughput Allocation for Proportional Fairness in Multirate IEEE 802.11 DCF under General Load Conditions
5 pages, 2 figures, submitted to IEEE Globecom 2008, March 2008
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a modified proportional fairness (PF) criterion suitable for mitigating the \textit{rate anomaly} problem of multirate IEEE 802.11 Wireless LANs employing the mandatory Distributed Coordination Function (DCF) option. Compared to the widely adopted assumption of saturated network, the proposed criterion can be applied to general networks whereby the contending stations are characterized by specific packet arrival rates, $\lambda_s$, and transmission rates $R_d^{s}$. The throughput allocation resulting from the proposed algorithm is able to greatly increase the aggregate throughput of the DCF while ensuring fairness levels among the stations of the same order of the ones available with the classical PF criterion. Put simply, each station is allocated a throughput that depends on a suitable normalization of its packet rate, which, to some extent, measures the frequency by which the station tries to gain access to the channel. Simulation results are presented for some sample scenarios, confirming the effectiveness of the proposed criterion.
[ { "created": "Thu, 13 Mar 2008 06:48:50 GMT", "version": "v1" } ]
2008-12-18
[ [ "Daneshgaran", "F.", "" ], [ "Laddomada", "M.", "" ], [ "Mesiti", "F.", "" ], [ "Mondin", "M.", "" ] ]
This paper presents a modified proportional fairness (PF) criterion suitable for mitigating the \textit{rate anomaly} problem of multirate IEEE 802.11 Wireless LANs employing the mandatory Distributed Coordination Function (DCF) option. Compared to the widely adopted assumption of saturated network, the proposed criterion can be applied to general networks whereby the contending stations are characterized by specific packet arrival rates, $\lambda_s$, and transmission rates $R_d^{s}$. The throughput allocation resulting from the proposed algorithm is able to greatly increase the aggregate throughput of the DCF while ensuring fairness levels among the stations of the same order of the ones available with the classical PF criterion. Put simply, each station is allocated a throughput that depends on a suitable normalization of its packet rate, which, to some extent, measures the frequency by which the station tries to gain access to the channel. Simulation results are presented for some sample scenarios, confirming the effectiveness of the proposed criterion.
2004.06871
Chien-Sheng Wu
Chien-Sheng Wu, Steven Hoi, Richard Socher, and Caiming Xiong
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue
EMNLP 2020 camera-ready
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The underlying difference of linguistic patterns between general text and task-oriented dialogue makes existing pre-trained language models less useful in practice. In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling. To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling. We propose a contrastive objective function to simulate the response selection task. Our pre-trained task-oriented dialogue BERT (TOD-BERT) outperforms strong baselines like BERT on four downstream task-oriented dialogue applications, including intention recognition, dialogue state tracking, dialogue act prediction, and response selection. We also show that TOD-BERT has a stronger few-shot ability that can mitigate the data scarcity problem for task-oriented dialogue.
[ { "created": "Wed, 15 Apr 2020 04:09:05 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 18:10:32 GMT", "version": "v2" }, { "created": "Thu, 1 Oct 2020 16:34:52 GMT", "version": "v3" } ]
2020-10-02
[ [ "Wu", "Chien-Sheng", "" ], [ "Hoi", "Steven", "" ], [ "Socher", "Richard", "" ], [ "Xiong", "Caiming", "" ] ]
The underlying difference of linguistic patterns between general text and task-oriented dialogue makes existing pre-trained language models less useful in practice. In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling. To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling. We propose a contrastive objective function to simulate the response selection task. Our pre-trained task-oriented dialogue BERT (TOD-BERT) outperforms strong baselines like BERT on four downstream task-oriented dialogue applications, including intention recognition, dialogue state tracking, dialogue act prediction, and response selection. We also show that TOD-BERT has a stronger few-shot ability that can mitigate the data scarcity problem for task-oriented dialogue.
2404.04869
Yiqun Duan
Yiqun Duan, Qiang Zhang, Renjing Xu
Prompting Multi-Modal Tokens to Enhance End-to-End Autonomous Driving Imitation Learning with LLMs
null
Published as an oral presentation paper at the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The utilization of Large Language Models (LLMs) within the realm of reinforcement learning, particularly as planners, has garnered a significant degree of attention in recent scholarly literature. However, a substantial proportion of existing research predominantly focuses on planning models for robotics that transmute the outputs derived from perception models into linguistic forms, thus adopting a `pure-language' strategy. In this research, we propose a hybrid End-to-End learning framework for autonomous driving by combining basic driving imitation learning with LLMs based on multi-modality prompt tokens. Instead of simply converting perception results from a separately trained model into pure language input, our novelty lies in two aspects. 1) The end-to-end integration of visual and LiDAR sensory input into learnable multi-modality tokens, thereby intrinsically alleviating the description bias of separately pre-trained perception models. 2) Instead of directly letting LLMs drive, this paper explores a hybrid setting of letting LLMs help the driving model correct mistakes and handle complicated scenarios. The results of our experiments suggest that the proposed methodology can attain driving scores of 49.21%, coupled with an impressive route completion rate of 91.34% in the offline evaluation conducted via CARLA. These performance metrics are comparable to the most advanced driving models.
[ { "created": "Sun, 7 Apr 2024 08:31:12 GMT", "version": "v1" }, { "created": "Mon, 29 Jul 2024 11:43:31 GMT", "version": "v2" } ]
2024-07-30
[ [ "Duan", "Yiqun", "" ], [ "Zhang", "Qiang", "" ], [ "Xu", "Renjing", "" ] ]
The utilization of Large Language Models (LLMs) within the realm of reinforcement learning, particularly as planners, has garnered a significant degree of attention in recent scholarly literature. However, a substantial proportion of existing research predominantly focuses on planning models for robotics that transmute the outputs derived from perception models into linguistic forms, thus adopting a `pure-language' strategy. In this research, we propose a hybrid End-to-End learning framework for autonomous driving by combining basic driving imitation learning with LLMs based on multi-modality prompt tokens. Instead of simply converting perception results from a separately trained model into pure language input, our novelty lies in two aspects. 1) The end-to-end integration of visual and LiDAR sensory input into learnable multi-modality tokens, thereby intrinsically alleviating the description bias of separately pre-trained perception models. 2) Instead of directly letting LLMs drive, this paper explores a hybrid setting of letting LLMs help the driving model correct mistakes and handle complicated scenarios. The results of our experiments suggest that the proposed methodology can attain driving scores of 49.21%, coupled with an impressive route completion rate of 91.34% in the offline evaluation conducted via CARLA. These performance metrics are comparable to the most advanced driving models.
1806.01357
Jian Ren
Jian Ren, Ilker Hacihaliloglu, Eric A. Singer, David J. Foran, Xin Qi
Adversarial Domain Adaptation for Classification of Prostate Histopathology Whole-Slide Images
Accepted to MICCAI 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic and accurate Gleason grading of histopathology tissue slides is crucial for prostate cancer diagnosis, treatment, and prognosis. Usually, histopathology tissue slides from different institutions show heterogeneous appearances because of different tissue preparation and staining procedures, thus the predictable model learned from one domain may not be applicable to a new domain directly. Here we propose to adopt unsupervised domain adaptation to transfer the discriminative knowledge obtained from the source domain to the target domain without requiring labeling of images at the target domain. The adaptation is achieved through adversarial training to find an invariant feature space along with the proposed Siamese architecture on the target domain to add a regularization that is appropriate for the whole-slide images. We validate the method on two prostate cancer datasets and obtain significant classification improvement of Gleason scores as compared with the baseline models.
[ { "created": "Mon, 4 Jun 2018 20:01:09 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2018 23:49:17 GMT", "version": "v2" } ]
2018-06-08
[ [ "Ren", "Jian", "" ], [ "Hacihaliloglu", "Ilker", "" ], [ "Singer", "Eric A.", "" ], [ "Foran", "David J.", "" ], [ "Qi", "Xin", "" ] ]
Automatic and accurate Gleason grading of histopathology tissue slides is crucial for prostate cancer diagnosis, treatment, and prognosis. Usually, histopathology tissue slides from different institutions show heterogeneous appearances because of different tissue preparation and staining procedures, thus the predictable model learned from one domain may not be applicable to a new domain directly. Here we propose to adopt unsupervised domain adaptation to transfer the discriminative knowledge obtained from the source domain to the target domain without requiring labeling of images at the target domain. The adaptation is achieved through adversarial training to find an invariant feature space along with the proposed Siamese architecture on the target domain to add a regularization that is appropriate for the whole-slide images. We validate the method on two prostate cancer datasets and obtain significant classification improvement of Gleason scores as compared with the baseline models.
2309.02672
Hanshen Xiao
Hanshen Xiao, Jun Wan and Srinivas Devadas
Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning
null
ACM CCS 2023
null
null
cs.CR cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We study the fundamental problem of the construction of optimal randomization in Differential Privacy. Depending on the clipping strategy or additional properties of the processing function, the corresponding sensitivity set theoretically determines the necessary randomization to produce the required security parameters. Towards the optimal utility-privacy tradeoff, finding the minimal perturbation for properly-selected sensitivity sets stands as a central problem in DP research. In practice, l_2/l_1-norm clippings with Gaussian/Laplace noise mechanisms are among the most common setups. However, they also suffer from the curse of dimensionality. For more generic clipping strategies, the understanding of the optimal noise for a high-dimensional sensitivity set remains limited. In this paper, we revisit the geometry of high-dimensional sensitivity sets and present a series of results to characterize the non-asymptotically optimal Gaussian noise for R\'enyi DP (RDP). Our results are both negative and positive: on one hand, we show the curse of dimensionality is tight for a broad class of sensitivity sets satisfying certain symmetry properties; but if, fortunately, the representation of the sensitivity set is asymmetric on some group of orthogonal bases, we show the optimal noise bounds need not be explicitly dependent on either dimension or rank. We also revisit sampling in the high-dimensional scenario, which is the key for both privacy amplification and computation efficiency in large-scale data processing. We propose a novel method, termed twice sampling, which implements both sample-wise and coordinate-wise sampling, to enable Gaussian noises to fit the sensitivity geometry more closely. With closed-form RDP analysis, we prove twice sampling produces asymptotic improvement of the privacy amplification given an additional infinity-norm restriction, especially for small sampling rate.
[ { "created": "Wed, 6 Sep 2023 02:45:08 GMT", "version": "v1" }, { "created": "Thu, 28 Sep 2023 12:49:24 GMT", "version": "v2" } ]
2023-09-29
[ [ "Xiao", "Hanshen", "" ], [ "Wan", "Jun", "" ], [ "Devadas", "Srinivas", "" ] ]
We study the fundamental problem of the construction of optimal randomization in Differential Privacy. Depending on the clipping strategy or additional properties of the processing function, the corresponding sensitivity set theoretically determines the necessary randomization to produce the required security parameters. Towards the optimal utility-privacy tradeoff, finding the minimal perturbation for properly-selected sensitivity sets stands as a central problem in DP research. In practice, l_2/l_1-norm clippings with Gaussian/Laplace noise mechanisms are among the most common setups. However, they also suffer from the curse of dimensionality. For more generic clipping strategies, the understanding of the optimal noise for a high-dimensional sensitivity set remains limited. In this paper, we revisit the geometry of high-dimensional sensitivity sets and present a series of results to characterize the non-asymptotically optimal Gaussian noise for R\'enyi DP (RDP). Our results are both negative and positive: on one hand, we show the curse of dimensionality is tight for a broad class of sensitivity sets satisfying certain symmetry properties; but if, fortunately, the representation of the sensitivity set is asymmetric on some group of orthogonal bases, we show the optimal noise bounds need not be explicitly dependent on either dimension or rank. We also revisit sampling in the high-dimensional scenario, which is the key for both privacy amplification and computation efficiency in large-scale data processing. We propose a novel method, termed twice sampling, which implements both sample-wise and coordinate-wise sampling, to enable Gaussian noises to fit the sensitivity geometry more closely. With closed-form RDP analysis, we prove twice sampling produces asymptotic improvement of the privacy amplification given an additional infinity-norm restriction, especially for small sampling rate.
2306.12107
Jose Ignacio Farran
Jos\'e Ignacio Farr\'an and David Cerezo
A new color image secret sharing protocol
6 pages, 5 figures, preprint
null
null
null
cs.CR cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visual cryptography aims to protect images against their possible illegitimate use. Thus, one can cipher, hash, or add watermarks for protecting copyright, among others. In this paper we provide a new solution to the problem of secret sharing for the case when the secret is an image. Our method combines the Shamir scheme for secret sharing using finite fields of characteristic 2 with the CBC mode of operation of a secure symmetric cryptographic scheme like AES, so that the security relies on that of the mentioned techniques. The resulting shares have the same resolution as that of the original image. The idea of the method could be generalized to other multimedia formats like audio or video, adapting the method to the corresponding encoded information.
[ { "created": "Wed, 21 Jun 2023 08:47:31 GMT", "version": "v1" } ]
2023-06-22
[ [ "Farrán", "José Ignacio", "" ], [ "Cerezo", "David", "" ] ]
Visual cryptography aims to protect images against their possible illegitimate use. Thus, one can cipher, hash, or add watermarks for protecting copyright, among others. In this paper we provide a new solution to the problem of secret sharing for the case when the secret is an image. Our method combines the Shamir scheme for secret sharing using finite fields of characteristic 2 with the CBC mode of operation of a secure symmetric cryptographic scheme like AES, so that the security relies on that of the mentioned techniques. The resulting shares have the same resolution as that of the original image. The idea of the method could be generalized to other multimedia formats like audio or video, adapting the method to the corresponding encoded information.
2402.05978
Rocio Alaiz-Rodriguez
M. T. Garc\'ia-Ord\'as, E. Alegre-Guti\'errez, V. Gonz\'alez-Castro, R. Alaiz-Rodr\'iguez
Combining shape and contour features to improve tool wear monitoring in milling processes
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, a new system based on combinations of a shape descriptor and a contour descriptor has been proposed for classifying inserts in milling processes according to their wear level following a computer vision based approach. To describe the wear region shape we have proposed a new descriptor called ShapeFeat and its contour has been characterized using the method BORCHIZ that, to the best of our knowledge, achieves the best performance for tool wear monitoring following a computer vision-based approach. Results show that the combination of BORCHIZ with ShapeFeat using a late fusion method improves the classification performance significantly, obtaining an accuracy of 91.44% in the binary classification (i.e. the classification of the wear as high or low) and 82.90% using three target classes (i.e. classification of the wear as high, medium or low). These results outperform the ones obtained by both descriptors used on their own, which achieve accuracies of 88.70 and 80.67% for two and three classes, respectively, using ShapeFeat and 87.06 and 80.24% with B-ORCHIZ. This study yielded encouraging results for the manufacturing community in order to classify automatically the inserts in terms of their wear for milling processes.
[ { "created": "Wed, 7 Feb 2024 22:27:16 GMT", "version": "v1" } ]
2024-02-12
[ [ "García-Ordás", "M. T.", "" ], [ "Alegre-Gutiérrez", "E.", "" ], [ "González-Castro", "V.", "" ], [ "Alaiz-Rodríguez", "R.", "" ] ]
In this paper, a new system based on combinations of a shape descriptor and a contour descriptor has been proposed for classifying inserts in milling processes according to their wear level following a computer vision based approach. To describe the wear region shape we have proposed a new descriptor called ShapeFeat and its contour has been characterized using the method BORCHIZ that, to the best of our knowledge, achieves the best performance for tool wear monitoring following a computer vision-based approach. Results show that the combination of BORCHIZ with ShapeFeat using a late fusion method improves the classification performance significantly, obtaining an accuracy of 91.44% in the binary classification (i.e. the classification of the wear as high or low) and 82.90% using three target classes (i.e. classification of the wear as high, medium or low). These results outperform the ones obtained by both descriptors used on their own, which achieve accuracies of 88.70 and 80.67% for two and three classes, respectively, using ShapeFeat and 87.06 and 80.24% with B-ORCHIZ. This study yielded encouraging results for the manufacturing community in order to classify automatically the inserts in terms of their wear for milling processes.
2202.03807
Maximilian Geisslinger
Alexander Wischnewski, Maximilian Geisslinger, Johannes Betz, Tobias Betz, Felix Fent, Alexander Heilmeier, Leonhard Hermansdorfer, Thomas Herrmann, Sebastian Huch, Phillip Karle, Felix Nobis, Levent \"Ogretmen, Matthias Rowold, Florian Sauerbeck, Tim Stahl, Rainer Trauth, Markus Lienkamp, Boris Lohmann
Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motorsport has always been an enabler for technological advancement, and the same applies to the autonomous driving industry. The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021 to benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway. The first part of this paper explains the reasons for entering an autonomous vehicle race from an academic perspective: It allows focusing on several edge cases encountered by autonomous vehicles, such as challenging evasion maneuvers and unstructured scenarios. At the same time, it is inherently safe due to the motorsport related track safety precautions. It is therefore an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations. In addition, we provide insight into our software development workflow and present our Hardware-in-the-Loop simulation setup. It is capable of running simulations of up to eight autonomous vehicles in real time. The second part of the paper gives a high-level overview of the software architecture and covers our development priorities in building a high-performance autonomous racing software: maximum sensor detection range, reliable handling of multi-vehicle situations, as well as reliable motion control under uncertainty.
[ { "created": "Tue, 8 Feb 2022 11:55:05 GMT", "version": "v1" } ]
2022-02-09
[ [ "Wischnewski", "Alexander", "" ], [ "Geisslinger", "Maximilian", "" ], [ "Betz", "Johannes", "" ], [ "Betz", "Tobias", "" ], [ "Fent", "Felix", "" ], [ "Heilmeier", "Alexander", "" ], [ "Hermansdorfer", "Leonhard", "" ], [ "Herrmann", "Thomas", "" ], [ "Huch", "Sebastian", "" ], [ "Karle", "Phillip", "" ], [ "Nobis", "Felix", "" ], [ "Ögretmen", "Levent", "" ], [ "Rowold", "Matthias", "" ], [ "Sauerbeck", "Florian", "" ], [ "Stahl", "Tim", "" ], [ "Trauth", "Rainer", "" ], [ "Lienkamp", "Markus", "" ], [ "Lohmann", "Boris", "" ] ]
Motorsport has always been an enabler for technological advancement, and the same applies to the autonomous driving industry. The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021 to benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway. The first part of this paper explains the reasons for entering an autonomous vehicle race from an academic perspective: It allows focusing on several edge cases encountered by autonomous vehicles, such as challenging evasion maneuvers and unstructured scenarios. At the same time, it is inherently safe due to the motorsport related track safety precautions. It is therefore an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations. In addition, we provide insight into our software development workflow and present our Hardware-in-the-Loop simulation setup. It is capable of running simulations of up to eight autonomous vehicles in real time. The second part of the paper gives a high-level overview of the software architecture and covers our development priorities in building a high-performance autonomous racing software: maximum sensor detection range, reliable handling of multi-vehicle situations, as well as reliable motion control under uncertainty.
2407.20856
Sarthak Anand
Sarthak Anand, Yutong Jiang, Giorgi Kokaia
Learn by Selling: Equipping Large Language Models with Product Knowledge for Context-Driven Recommendations
null
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid evolution of large language models (LLMs) has opened up new possibilities for applications such as context-driven product recommendations. However, the effectiveness of these models in this context is heavily reliant on their comprehensive understanding of the product inventory. This paper presents a novel approach to equipping LLMs with product knowledge by training them to respond contextually to synthetic search queries that include product IDs. We delve into an extensive analysis of this method, evaluating its effectiveness, outlining its benefits, and highlighting its constraints. The paper also discusses the potential improvements and future directions for this approach, providing a comprehensive understanding of the role of LLMs in product recommendations.
[ { "created": "Tue, 30 Jul 2024 14:31:53 GMT", "version": "v1" } ]
2024-07-31
[ [ "Anand", "Sarthak", "" ], [ "Jiang", "Yutong", "" ], [ "Kokaia", "Giorgi", "" ] ]
The rapid evolution of large language models (LLMs) has opened up new possibilities for applications such as context-driven product recommendations. However, the effectiveness of these models in this context is heavily reliant on their comprehensive understanding of the product inventory. This paper presents a novel approach to equipping LLMs with product knowledge by training them to respond contextually to synthetic search queries that include product IDs. We delve into an extensive analysis of this method, evaluating its effectiveness, outlining its benefits, and highlighting its constraints. The paper also discusses the potential improvements and future directions for this approach, providing a comprehensive understanding of the role of LLMs in product recommendations.
2407.15660
Claudius Kienle
Claudius Kienle, Benjamin Alt, Onur Celik, Philipp Becker, Darko Katic, Rainer J\"akel and Gerhard Neumann
MuTT: A Multimodal Trajectory Transformer for Robot Skills
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
High-level robot skills represent an increasingly popular paradigm in robot programming. However, configuring the skills' parameters for a specific task remains a manual and time-consuming endeavor. Existing approaches for learning or optimizing these parameters often require numerous real-world executions or do not work in dynamic environments. To address these challenges, we propose MuTT, a novel encoder-decoder transformer architecture designed to predict environment-aware executions of robot skills by integrating vision, trajectory, and robot skill parameters. Notably, we pioneer the fusion of vision and trajectory, introducing a novel trajectory projection. Furthermore, we illustrate MuTT's efficacy as a predictor when combined with a model-based robot skill optimizer. This approach facilitates the optimization of robot skill parameters for the current environment, without the need for real-world executions during optimization. Designed for compatibility with any representation of robot skills, MuTT demonstrates its versatility across three comprehensive experiments, showcasing superior performance across two different skill representations.
[ { "created": "Mon, 22 Jul 2024 14:18:52 GMT", "version": "v1" } ]
2024-07-23
[ [ "Kienle", "Claudius", "" ], [ "Alt", "Benjamin", "" ], [ "Celik", "Onur", "" ], [ "Becker", "Philipp", "" ], [ "Katic", "Darko", "" ], [ "Jäkel", "Rainer", "" ], [ "Neumann", "Gerhard", "" ] ]
High-level robot skills represent an increasingly popular paradigm in robot programming. However, configuring the skills' parameters for a specific task remains a manual and time-consuming endeavor. Existing approaches for learning or optimizing these parameters often require numerous real-world executions or do not work in dynamic environments. To address these challenges, we propose MuTT, a novel encoder-decoder transformer architecture designed to predict environment-aware executions of robot skills by integrating vision, trajectory, and robot skill parameters. Notably, we pioneer the fusion of vision and trajectory, introducing a novel trajectory projection. Furthermore, we illustrate MuTT's efficacy as a predictor when combined with a model-based robot skill optimizer. This approach facilitates the optimization of robot skill parameters for the current environment, without the need for real-world executions during optimization. Designed for compatibility with any representation of robot skills, MuTT demonstrates its versatility across three comprehensive experiments, showcasing superior performance across two different skill representations.
2306.12014
Nigel Fernandez
Sneha Singhania, Nigel Fernandez, Shrisha Rao
3HAN: A Deep Neural Network for Fake News Detection
Published as a conference paper at ICONIP 2017
null
10.1007/978-3-319-70096-0_59
null
cs.LG cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
The rapid spread of fake news is a serious problem calling for AI solutions. We employ a deep learning based automated detector through a three level hierarchical attention network (3HAN) for fast, accurate detection of fake news. 3HAN has three levels, one each for words, sentences, and the headline, and constructs a news vector: an effective representation of an input news article, by processing an article in an hierarchical bottom-up manner. The headline is known to be a distinguishing feature of fake news, and furthermore, relatively few words and sentences in an article are more important than the rest. 3HAN gives a differential importance to parts of an article, on account of its three layers of attention. By experiments on a large real-world data set, we observe the effectiveness of 3HAN with an accuracy of 96.77%. Unlike some other deep learning models, 3HAN provides an understandable output through the attention weights given to different parts of an article, which can be visualized through a heatmap to enable further manual fact checking.
[ { "created": "Wed, 21 Jun 2023 04:34:27 GMT", "version": "v1" } ]
2023-06-22
[ [ "Singhania", "Sneha", "" ], [ "Fernandez", "Nigel", "" ], [ "Rao", "Shrisha", "" ] ]
The rapid spread of fake news is a serious problem calling for AI solutions. We employ a deep learning based automated detector through a three level hierarchical attention network (3HAN) for fast, accurate detection of fake news. 3HAN has three levels, one each for words, sentences, and the headline, and constructs a news vector: an effective representation of an input news article, by processing an article in an hierarchical bottom-up manner. The headline is known to be a distinguishing feature of fake news, and furthermore, relatively few words and sentences in an article are more important than the rest. 3HAN gives a differential importance to parts of an article, on account of its three layers of attention. By experiments on a large real-world data set, we observe the effectiveness of 3HAN with an accuracy of 96.77%. Unlike some other deep learning models, 3HAN provides an understandable output through the attention weights given to different parts of an article, which can be visualized through a heatmap to enable further manual fact checking.
2407.10775
Marco Mussi
Alessandro Montenegro and Marco Mussi and Matteo Papini and Alberto Maria Metelli
Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constrained Reinforcement Learning (CRL) tackles sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints, which are often formulated as expected costs. In this setting, policy-based methods are widely used since they come with several advantages when dealing with continuous-control problems. These methods search in the policy space with an action-based or parameter-based exploration strategy, depending on whether they learn directly the parameters of a stochastic policy or those of a stochastic hyperpolicy. In this paper, we propose a general framework for addressing CRL problems via gradient-based primal-dual algorithms, relying on an alternate ascent/descent scheme with dual-variable regularization. We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions, improving and generalizing existing results. Then, we design C-PGAE and C-PGPE, the action-based and the parameter-based versions of C-PG, respectively, and we illustrate how they naturally extend to constraints defined in terms of risk measures over the costs, as it is often requested in safety-critical scenarios. Finally, we numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines, demonstrating their effectiveness.
[ { "created": "Mon, 15 Jul 2024 14:54:57 GMT", "version": "v1" } ]
2024-07-16
[ [ "Montenegro", "Alessandro", "" ], [ "Mussi", "Marco", "" ], [ "Papini", "Matteo", "" ], [ "Metelli", "Alberto Maria", "" ] ]
Constrained Reinforcement Learning (CRL) tackles sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints, which are often formulated as expected costs. In this setting, policy-based methods are widely used since they come with several advantages when dealing with continuous-control problems. These methods search in the policy space with an action-based or parameter-based exploration strategy, depending on whether they learn directly the parameters of a stochastic policy or those of a stochastic hyperpolicy. In this paper, we propose a general framework for addressing CRL problems via gradient-based primal-dual algorithms, relying on an alternate ascent/descent scheme with dual-variable regularization. We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions, improving and generalizing existing results. Then, we design C-PGAE and C-PGPE, the action-based and the parameter-based versions of C-PG, respectively, and we illustrate how they naturally extend to constraints defined in terms of risk measures over the costs, as it is often requested in safety-critical scenarios. Finally, we numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines, demonstrating their effectiveness.
1806.10447
Alexey Gruzdev
Sergey Zherzdev and Alexey Gruzdev
LPRNet: License Plate Recognition via Deep Neural Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes LPRNet - end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA GeForce GTX 1080 and 1.3 ms/plate on Intel Core i7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.
[ { "created": "Wed, 27 Jun 2018 12:57:17 GMT", "version": "v1" } ]
2018-06-28
[ [ "Zherzdev", "Sergey", "" ], [ "Gruzdev", "Alexey", "" ] ]
This paper proposes LPRNet - end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA GeForce GTX 1080 and 1.3 ms/plate on Intel Core i7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.
2211.14646
Sriram Balasubramanian
Sriram Balasubramanian and Soheil Feizi
Towards Improved Input Masking for Convolutional Neural Networks
29 pages, 19 figures. Accepted at ICCV 2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The ability to remove features from the input of machine learning models is very important to understand and interpret model predictions. However, this is non-trivial for vision models since masking out parts of the input image typically causes large distribution shifts. This is because the baseline color used for masking (typically grey or black) is out of distribution. Furthermore, the shape of the mask itself can contain unwanted signals which can be used by the model for its predictions. Recently, there has been some progress in mitigating this issue (called missingness bias) in image masking for vision transformers. In this work, we propose a new masking method for CNNs we call layer masking in which the missingness bias caused by masking is reduced to a large extent. Intuitively, layer masking applies a mask to intermediate activation maps so that the model only processes the unmasked input. We show that our method (i) is able to eliminate or minimize the influence of the mask shape or color on the output of the model, and (ii) is much better than replacing the masked region by black or grey for input perturbation based interpretability techniques like LIME. Thus, layer masking is much less affected by missingness bias than other masking strategies. We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features derived from input masking. Furthermore, we discuss the role of data augmentation techniques for tackling this problem, and argue that they are not sufficient for preventing model reliance on mask shape. The code for this project is publicly available at https://github.com/SriramB-98/layer_masking
[ { "created": "Sat, 26 Nov 2022 19:31:49 GMT", "version": "v1" }, { "created": "Sun, 16 Jul 2023 05:40:44 GMT", "version": "v2" }, { "created": "Sun, 29 Oct 2023 22:11:33 GMT", "version": "v3" } ]
2023-10-31
[ [ "Balasubramanian", "Sriram", "" ], [ "Feizi", "Soheil", "" ] ]
The ability to remove features from the input of machine learning models is very important to understand and interpret model predictions. However, this is non-trivial for vision models since masking out parts of the input image typically causes large distribution shifts. This is because the baseline color used for masking (typically grey or black) is out of distribution. Furthermore, the shape of the mask itself can contain unwanted signals which can be used by the model for its predictions. Recently, there has been some progress in mitigating this issue (called missingness bias) in image masking for vision transformers. In this work, we propose a new masking method for CNNs we call layer masking in which the missingness bias caused by masking is reduced to a large extent. Intuitively, layer masking applies a mask to intermediate activation maps so that the model only processes the unmasked input. We show that our method (i) is able to eliminate or minimize the influence of the mask shape or color on the output of the model, and (ii) is much better than replacing the masked region by black or grey for input perturbation based interpretability techniques like LIME. Thus, layer masking is much less affected by missingness bias than other masking strategies. We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features derived from input masking. Furthermore, we discuss the role of data augmentation techniques for tackling this problem, and argue that they are not sufficient for preventing model reliance on mask shape. The code for this project is publicly available at https://github.com/SriramB-98/layer_masking
2111.01364
Juncheng Liu Dr
Juncheng Liu, Brendan McCane, Steven Mills
Learning to Explore by Reinforcement over High-Level Options
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Autonomous 3D environment exploration is a fundamental task for various applications such as navigation. The goal of exploration is to investigate a new environment and build its occupancy map efficiently. In this paper, we propose a new method which grants an agent two intertwined options of behaviors: "look-around" and "frontier navigation". This is implemented by an option-critic architecture and trained by reinforcement learning algorithms. In each timestep, an agent produces an option and a corresponding action according to the policy. We also take advantage of macro-actions by incorporating classic path-planning techniques to increase training efficiency. We demonstrate the effectiveness of the proposed method on two publicly available 3D environment datasets and the results show our method achieves higher coverage than competing techniques with better efficiency.
[ { "created": "Tue, 2 Nov 2021 04:21:34 GMT", "version": "v1" } ]
2021-11-03
[ [ "Liu", "Juncheng", "" ], [ "McCane", "Brendan", "" ], [ "Mills", "Steven", "" ] ]
Autonomous 3D environment exploration is a fundamental task for various applications such as navigation. The goal of exploration is to investigate a new environment and build its occupancy map efficiently. In this paper, we propose a new method which grants an agent two intertwined options of behaviors: "look-around" and "frontier navigation". This is implemented by an option-critic architecture and trained by reinforcement learning algorithms. In each timestep, an agent produces an option and a corresponding action according to the policy. We also take advantage of macro-actions by incorporating classic path-planning techniques to increase training efficiency. We demonstrate the effectiveness of the proposed method on two publicly available 3D environment datasets and the results show our method achieves higher coverage than competing techniques with better efficiency.
2210.03516
Felix Chalumeau
Felix Chalumeau, Raphael Boige, Bryan Lim, Valentin Mac\'e, Maxime Allard, Arthur Flajolet, Antoine Cully, Thomas Pierrot
Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery
Camera ready version for ICLR2023 (spotlight)
null
null
null
cs.NE cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Reinforcement Learning (RL) has emerged as a powerful paradigm for training neural policies to solve complex control tasks. However, these policies tend to be overfit to the exact specifications of the task and environment they were trained on, and thus do not perform well when conditions deviate slightly or when composed hierarchically to solve even more complex tasks. Recent work has shown that training a mixture of policies, as opposed to a single one, that are driven to explore different regions of the state-action space can address this shortcoming by generating a diverse set of behaviors, referred to as skills, that can be collectively used to great effect in adaptation tasks or for hierarchical planning. This is typically realized by including a diversity term - often derived from information theory - in the objective function optimized by RL. However these approaches often require careful hyperparameter tuning to be effective. In this work, we demonstrate that less widely-used neuroevolution methods, specifically Quality Diversity (QD), are a competitive alternative to information-theory-augmented RL for skill discovery. Through an extensive empirical evaluation comparing eight state-of-the-art algorithms (four flagship algorithms from each line of work) on the basis of (i) metrics directly evaluating the skills' diversity, (ii) the skills' performance on adaptation tasks, and (iii) the skills' performance when used as primitives for hierarchical planning; QD methods are found to provide equal, and sometimes improved, performance whilst being less sensitive to hyperparameters and more scalable. As no single method is found to provide near-optimal performance across all environments, there is a rich scope for further research which we support by proposing future directions and providing optimized open-source implementations.
[ { "created": "Thu, 6 Oct 2022 11:06:39 GMT", "version": "v1" }, { "created": "Fri, 31 Mar 2023 08:55:33 GMT", "version": "v2" }, { "created": "Thu, 15 Jun 2023 12:08:19 GMT", "version": "v3" }, { "created": "Fri, 8 Sep 2023 09:33:41 GMT", "version": "v4" } ]
2023-09-11
[ [ "Chalumeau", "Felix", "" ], [ "Boige", "Raphael", "" ], [ "Lim", "Bryan", "" ], [ "Macé", "Valentin", "" ], [ "Allard", "Maxime", "" ], [ "Flajolet", "Arthur", "" ], [ "Cully", "Antoine", "" ], [ "Pierrot", "Thomas", "" ] ]
Deep Reinforcement Learning (RL) has emerged as a powerful paradigm for training neural policies to solve complex control tasks. However, these policies tend to be overfit to the exact specifications of the task and environment they were trained on, and thus do not perform well when conditions deviate slightly or when composed hierarchically to solve even more complex tasks. Recent work has shown that training a mixture of policies, as opposed to a single one, that are driven to explore different regions of the state-action space can address this shortcoming by generating a diverse set of behaviors, referred to as skills, that can be collectively used to great effect in adaptation tasks or for hierarchical planning. This is typically realized by including a diversity term - often derived from information theory - in the objective function optimized by RL. However these approaches often require careful hyperparameter tuning to be effective. In this work, we demonstrate that less widely-used neuroevolution methods, specifically Quality Diversity (QD), are a competitive alternative to information-theory-augmented RL for skill discovery. Through an extensive empirical evaluation comparing eight state-of-the-art algorithms (four flagship algorithms from each line of work) on the basis of (i) metrics directly evaluating the skills' diversity, (ii) the skills' performance on adaptation tasks, and (iii) the skills' performance when used as primitives for hierarchical planning; QD methods are found to provide equal, and sometimes improved, performance whilst being less sensitive to hyperparameters and more scalable. As no single method is found to provide near-optimal performance across all environments, there is a rich scope for further research which we support by proposing future directions and providing optimized open-source implementations.
2111.08794
\.Inci M. Bayta\c{s}
Deniz Sezin Ayvaz and Inci M. Baytas
Investigating Conversion from Mild Cognitive Impairment to Alzheimer's Disease using Latent Space Manipulation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's disease is the most common cause of dementia, affecting millions of lives worldwide. Investigating the underlying causes and risk factors of Alzheimer's disease is essential to prevent its progression. Mild Cognitive Impairment (MCI) is considered an intermediate stage before Alzheimer's disease. Early prediction of the conversion from MCI to Alzheimer's is crucial to take necessary precautions for decelerating the progression and developing suitable treatments. In this study, we propose a deep learning framework to discover the variables which are identifiers of the conversion from MCI to Alzheimer's disease. In particular, the latent space of a variational auto-encoder network trained with MCI and Alzheimer's patients is manipulated to obtain the significant attributes and decipher their behavior that leads to the conversion from MCI to Alzheimer's disease. By utilizing a generative decoder and the dimensions that lead to the Alzheimer's diagnosis, we generate synthetic dementia patients from MCI patients in the dataset. Experimental results show promising quantitative and qualitative results on one of the most extensive and commonly used Alzheimer's disease neuroimaging datasets in the literature.
[ { "created": "Tue, 16 Nov 2021 21:48:09 GMT", "version": "v1" }, { "created": "Sun, 20 Aug 2023 19:40:45 GMT", "version": "v2" } ]
2023-08-22
[ [ "Ayvaz", "Deniz Sezin", "" ], [ "Baytas", "Inci M.", "" ] ]
Alzheimer's disease is the most common cause of dementia, affecting millions of lives worldwide. Investigating the underlying causes and risk factors of Alzheimer's disease is essential to prevent its progression. Mild Cognitive Impairment (MCI) is considered an intermediate stage before Alzheimer's disease. Early prediction of the conversion from MCI to Alzheimer's is crucial to take necessary precautions for decelerating the progression and developing suitable treatments. In this study, we propose a deep learning framework to discover the variables which are identifiers of the conversion from MCI to Alzheimer's disease. In particular, the latent space of a variational auto-encoder network trained with MCI and Alzheimer's patients is manipulated to obtain the significant attributes and decipher their behavior that leads to the conversion from MCI to Alzheimer's disease. By utilizing a generative decoder and the dimensions that lead to the Alzheimer's diagnosis, we generate synthetic dementia patients from MCI patients in the dataset. Experimental results show promising quantitative and qualitative results on one of the most extensive and commonly used Alzheimer's disease neuroimaging datasets in the literature.
1605.06245
Christophe Limbree LCh
Christophe Limbree and Quentin Cappart and Charles Pecheur and Stefano Tonetta
Verification of railway interlocking - Compositional approach with OCRA
16 pages
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the railway domain, an electronic interlocking is a computerised system that controls the railway signalling components (e.g. switches or signals) in order to allow safe operation of the train traffic. Interlockings are controlled by a software logic that relies on a generic software and a set of application data particular to the station under control. The verification of the application data is time consuming and error prone as it is mostly performed by human testers. In the first stage of our research, we built a model of a small Belgian railway station and we performed the verification of the application data with the nusmv model checker. However, the verification of larger stations fails due to the state space explosion problem. The intuition is that large stations can be split into smaller components that can be verified separately. This concept is known as compositional verification. This article explains how we used the ocra tool in order to model a medium size station and how we verified safety properties by means of contracts. We also took advantage of new algorithms (k-liveness and ic3) recently implemented in nuxmv in order to verify LTL properties on our model.
[ { "created": "Fri, 20 May 2016 08:46:43 GMT", "version": "v1" } ]
2016-05-23
[ [ "Limbree", "Christophe", "" ], [ "Cappart", "Quentin", "" ], [ "Pecheur", "Charles", "" ], [ "Tonetta", "Stefano", "" ] ]
In the railway domain, an electronic interlocking is a computerised system that controls the railway signalling components (e.g. switches or signals) in order to allow safe operation of the train traffic. Interlockings are controlled by a software logic that relies on a generic software and a set of application data particular to the station under control. The verification of the application data is time consuming and error prone as it is mostly performed by human testers. In the first stage of our research, we built a model of a small Belgian railway station and we performed the verification of the application data with the nusmv model checker. However, the verification of larger stations fails due to the state space explosion problem. The intuition is that large stations can be split into smaller components that can be verified separately. This concept is known as compositional verification. This article explains how we used the ocra tool in order to model a medium size station and how we verified safety properties by means of contracts. We also took advantage of new algorithms (k-liveness and ic3) recently implemented in nuxmv in order to verify LTL properties on our model.
2211.13203
Yuxin Zhang
Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, Weiming Dong, Changsheng Xu
Inversion-Based Style Transfer with Diffusion Models
accepted by CVPR 2023
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The artistic style within a painting is the means of expression, which includes not only the painting material, colors, and brushstrokes, but also the high-level attributes including semantic elements, object shapes, etc. Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements. Pre-trained text-to-image synthesis diffusion probabilistic models have achieved remarkable quality, but they often require extensive textual descriptions to accurately portray attributes of a particular painting. We believe that the uniqueness of an artwork lies precisely in the fact that it cannot be adequately explained with normal language. Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions. Specifically, we assume style as a learnable textual description of a painting. We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image, thus capturing and transferring the artistic style of a painting. We demonstrate the quality and efficiency of our method on numerous paintings of various artists and styles. Code and models are available at https://github.com/zyxElsa/InST.
[ { "created": "Wed, 23 Nov 2022 18:44:25 GMT", "version": "v1" }, { "created": "Thu, 9 Mar 2023 13:44:11 GMT", "version": "v2" }, { "created": "Mon, 20 Mar 2023 14:32:01 GMT", "version": "v3" } ]
2023-03-21
[ [ "Zhang", "Yuxin", "" ], [ "Huang", "Nisha", "" ], [ "Tang", "Fan", "" ], [ "Huang", "Haibin", "" ], [ "Ma", "Chongyang", "" ], [ "Dong", "Weiming", "" ], [ "Xu", "Changsheng", "" ] ]
The artistic style within a painting is the means of expression, which includes not only the painting material, colors, and brushstrokes, but also the high-level attributes including semantic elements, object shapes, etc. Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements. Pre-trained text-to-image synthesis diffusion probabilistic models have achieved remarkable quality, but they often require extensive textual descriptions to accurately portray attributes of a particular painting. We believe that the uniqueness of an artwork lies precisely in the fact that it cannot be adequately explained with normal language. Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions. Specifically, we assume style as a learnable textual description of a painting. We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image, thus capturing and transferring the artistic style of a painting. We demonstrate the quality and efficiency of our method on numerous paintings of various artists and styles. Code and models are available at https://github.com/zyxElsa/InST.
2312.16415
Matthew Ding
Matthew Ding and Jason Li
Deterministic Minimum Steiner Cut in Maximum Flow Time
18 pages, 1 figure, to appear at ESA 2024
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We devise a deterministic algorithm for minimum Steiner cut, which uses $(\log n)^{O(1)}$ maximum flow calls and additional near-linear time. This algorithm improves on Li and Panigrahi's (FOCS 2020) algorithm, which uses $(\log n)^{O(1/\epsilon^4)}$ maximum flow calls and additional $O(m^{1+\epsilon})$ time, for $\epsilon > 0$. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum flow time up to polylogarithmic factors, given any black-box deterministic maximum flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices that generalizes all existing $s$-strong partitioning methods, which we believe may have future applications.
[ { "created": "Wed, 27 Dec 2023 05:27:22 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 21:22:13 GMT", "version": "v2" } ]
2024-07-03
[ [ "Ding", "Matthew", "" ], [ "Li", "Jason", "" ] ]
We devise a deterministic algorithm for minimum Steiner cut, which uses $(\log n)^{O(1)}$ maximum flow calls and additional near-linear time. This algorithm improves on Li and Panigrahi's (FOCS 2020) algorithm, which uses $(\log n)^{O(1/\epsilon^4)}$ maximum flow calls and additional $O(m^{1+\epsilon})$ time, for $\epsilon > 0$. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum flow time up to polylogarithmic factors, given any black-box deterministic maximum flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices that generalizes all existing $s$-strong partitioning methods, which we believe may have future applications.
2009.03260
Tarun Kathuria
Tarun Kathuria
A Potential Reduction Inspired Algorithm for Exact Max Flow in Almost $\widetilde{O}(m^{4/3})$ Time
null
null
null
null
cs.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm for computing $s$-$t$ maximum flows in directed graphs in $\widetilde{O}(m^{4/3+o(1)}U^{1/3})$ time. Our algorithm is inspired by potential reduction interior point methods for linear programming. Instead of using scaled gradient/Newton steps of a potential function, we take the step which maximizes the decrease in the potential value subject to advancing a certain amount on the central path, which can be efficiently computed. This allows us to trace the central path with our progress depending only on $\ell_\infty$ norm bounds on the congestion vector (as opposed to the $\ell_4$ norm required by previous works) and runs in $O(\sqrt{m})$ iterations. To improve the number of iterations by establishing tighter bounds on the $\ell_\infty$ norm, we then consider the weighted central path framework of Madry \cite{M13,M16,CMSV17} and Liu-Sidford \cite{LS20}. Instead of changing weights to maximize energy, we consider finding weights which maximize the maximum decrease in potential value. Finally, similar to finding weights which maximize energy as done in \cite{LS20}, this problem can be solved by the iterative refinement framework for smoothed $\ell_2$-$\ell_p$ norm flow problems \cite{KPSW19}, completing our algorithm. We believe our potential reduction based viewpoint provides a versatile framework which may lead to faster algorithms for max flow.
[ { "created": "Mon, 7 Sep 2020 17:31:24 GMT", "version": "v1" } ]
2020-09-08
[ [ "Kathuria", "Tarun", "" ] ]
We present an algorithm for computing $s$-$t$ maximum flows in directed graphs in $\widetilde{O}(m^{4/3+o(1)}U^{1/3})$ time. Our algorithm is inspired by potential reduction interior point methods for linear programming. Instead of using scaled gradient/Newton steps of a potential function, we take the step which maximizes the decrease in the potential value subject to advancing a certain amount on the central path, which can be efficiently computed. This allows us to trace the central path with our progress depending only on $\ell_\infty$ norm bounds on the congestion vector (as opposed to the $\ell_4$ norm required by previous works) and runs in $O(\sqrt{m})$ iterations. To improve the number of iterations by establishing tighter bounds on the $\ell_\infty$ norm, we then consider the weighted central path framework of Madry \cite{M13,M16,CMSV17} and Liu-Sidford \cite{LS20}. Instead of changing weights to maximize energy, we consider finding weights which maximize the maximum decrease in potential value. Finally, similar to finding weights which maximize energy as done in \cite{LS20}, this problem can be solved by the iterative refinement framework for smoothed $\ell_2$-$\ell_p$ norm flow problems \cite{KPSW19}, completing our algorithm. We believe our potential reduction based viewpoint provides a versatile framework which may lead to faster algorithms for max flow.
2109.14133
Sarkhan Badirli
Sarkhan Badirli, Zeynep Akata, George Mohler, Christine Picard, Murat Dundar
Fine-Grained Zero-Shot Learning with DNA as Side Information
Accepted to NeurIPS 2021
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The fine-grained zero-shot learning task requires some form of side information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near-perfect accuracy in the species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset, we show that DNA is an equally promising, and in general more accessible, alternative to word vectors as side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species, we show that the Bayesian approach outperforms the state of the art by a wide margin.
[ { "created": "Wed, 29 Sep 2021 01:45:22 GMT", "version": "v1" } ]
2021-09-30
[ [ "Badirli", "Sarkhan", "" ], [ "Akata", "Zeynep", "" ], [ "Mohler", "George", "" ], [ "Picard", "Christine", "" ], [ "Dundar", "Murat", "" ] ]
The fine-grained zero-shot learning task requires some form of side information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near-perfect accuracy in the species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset, we show that DNA is an equally promising, and in general more accessible, alternative to word vectors as side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species, we show that the Bayesian approach outperforms the state of the art by a wide margin.
2306.01798
David Woo
David James Woo, Kai Guo, Hengky Susanto
Exploring EFL students' prompt engineering in human-AI story writing: an Activity Theory perspective
44 pages, 9 figures
Interactive_Learning_Environments (2024) 1_20
10.1080/10494820.2024.2361381
null
cs.CY cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing. Sixty-seven Hong Kong secondary school students created generative-AI tools using open-source language models and wrote short stories with them. The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting. The research identified three main themes regarding the purposes for which students prompt generative-AI tools during short story writing: a lack of awareness of purposes, overcoming writer's block, and developing, expanding, and improving the story. The study also identified common characteristics of students' activity systems, including the sophistication of their generative-AI tools, the quality of their stories, and their school's overall academic achievement level, for their prompting of generative-AI tools for the three purposes during short story writing. The study's findings suggest that teachers should be aware of students' purposes for prompting generative-AI tools to provide tailored instructions and scaffolded guidance. The findings may also help designers provide differentiated instructions for users at various levels of story development when using a generative-AI tool.
[ { "created": "Thu, 1 Jun 2023 14:52:28 GMT", "version": "v1" }, { "created": "Sat, 10 Feb 2024 14:13:43 GMT", "version": "v2" } ]
2024-07-03
[ [ "Woo", "David James", "" ], [ "Guo", "Kai", "" ], [ "Susanto", "Hengky", "" ] ]
This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing. Sixty-seven Hong Kong secondary school students created generative-AI tools using open-source language models and wrote short stories with them. The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting. The research identified three main themes regarding the purposes for which students prompt generative-AI tools during short story writing: a lack of awareness of purposes, overcoming writer's block, and developing, expanding, and improving the story. The study also identified common characteristics of students' activity systems, including the sophistication of their generative-AI tools, the quality of their stories, and their school's overall academic achievement level, for their prompting of generative-AI tools for the three purposes during short story writing. The study's findings suggest that teachers should be aware of students' purposes for prompting generative-AI tools to provide tailored instructions and scaffolded guidance. The findings may also help designers provide differentiated instructions for users at various levels of story development when using a generative-AI tool.
cs/0404057
Marcus Hutter
Jan Poland and Marcus Hutter
Convergence of Discrete MDL for Sequential Prediction
17 pages
Proc. 17th Annual Conf. on Learning Theory (COLT-2004), pages 300--314
null
IDSIA-03-04
cs.LG cs.AI math.ST stat.TH
null
We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to all algorithms for some fixed universal Turing machine (this correspondence is by enumerable semimeasures, hence the resulting models are stochastic). We prove convergence theorems similar to Solomonoff's theorem of universal induction, which also holds for general Bayes mixtures. The bound characterizing the convergence speed for MDL predictions is exponentially larger as compared to Bayes mixtures. We observe that there are at least three different ways of using MDL for prediction. One of these has worse prediction properties, for which predictions only converge if the MDL estimator stabilizes. We establish sufficient conditions for this to occur. Finally, some immediate consequences for complexity relations and randomness criteria are proven.
[ { "created": "Wed, 28 Apr 2004 15:58:35 GMT", "version": "v1" } ]
2011-11-09
[ [ "Poland", "Jan", "" ], [ "Hutter", "Marcus", "" ] ]
We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to all algorithms for some fixed universal Turing machine (this correspondence is by enumerable semimeasures, hence the resulting models are stochastic). We prove convergence theorems similar to Solomonoff's theorem of universal induction, which also holds for general Bayes mixtures. The bound characterizing the convergence speed for MDL predictions is exponentially larger as compared to Bayes mixtures. We observe that there are at least three different ways of using MDL for prediction. One of these has worse prediction properties, for which predictions only converge if the MDL estimator stabilizes. We establish sufficient conditions for this to occur. Finally, some immediate consequences for complexity relations and randomness criteria are proven.
1209.4922
Mazen Alamir Prof
Mazen Alamir
Monitoring Control Updating Period In Fast Gradient Based NMPC
6 pages, 8 Figures
Proceedings of the European Control Conference, Zurich, 2013
null
null
cs.SY cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a method is proposed for on-line monitoring of the control updating period in fast-gradient-based Model Predictive Control (MPC) schemes. Such schemes are currently under intense investigation as a way to accommodate real-time requirements when dealing with systems showing fast dynamics. The method needs cheap computations that use the algorithm's on-line behavior in order to recover the optimal updating period in terms of cost function decrease. A simple example of a constrained triple integrator is used to illustrate the proposed method and to assess its efficiency.
[ { "created": "Fri, 21 Sep 2012 21:22:56 GMT", "version": "v1" } ]
2013-09-20
[ [ "Alamir", "Mazen", "" ] ]
In this paper, a method is proposed for on-line monitoring of the control updating period in fast-gradient-based Model Predictive Control (MPC) schemes. Such schemes are currently under intense investigation as a way to accommodate real-time requirements when dealing with systems showing fast dynamics. The method needs cheap computations that use the algorithm's on-line behavior in order to recover the optimal updating period in terms of cost function decrease. A simple example of a constrained triple integrator is used to illustrate the proposed method and to assess its efficiency.
2301.11569
Dmitry Tanana
Dmitry Tanana
Vulnerability analysis of Azure Blockchain Workbench key management system
4 pages, 2 figures, 1 table
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the rise of blockchain popularity, more and more people seek to implement blockchain technology into their projects. The most common way is to take an existing blockchain stack, such as Azure Blockchain Workbench or Oracle Blockchain Platform. While the blockchain technology is well-protected by its algorithms, it is still vulnerable because its privacy relies on regular cryptography, and mistakes or vulnerabilities in key management protocols can affect even the most secure blockchain projects. This article considers the question of vulnerabilities within the Azure Blockchain Workbench key management system. We describe potential threats for each stage of the key management lifecycle based on public reports and then assess how likely those threats are to materialize within the Azure Blockchain Workbench environment based on the technical documentation for Azure Blockchain Workbench and Azure Key Vault. Finally, we compile the results of our assessment into a key management threat table with three distinct degrees of protection: fully protected, partially protected and not protected.
[ { "created": "Fri, 27 Jan 2023 07:29:37 GMT", "version": "v1" } ]
2023-01-30
[ [ "Tanana", "Dmitry", "" ] ]
With the rise of blockchain popularity, more and more people seek to implement blockchain technology in their projects. The most common way is to take an existing blockchain stack, such as Azure Blockchain Workbench or Oracle Blockchain Platform. While blockchain technology is well protected by its algorithms, it is still vulnerable because its privacy relies on regular cryptography, and mistakes or vulnerabilities in key management protocols can affect even the most secure blockchain projects. This article considers vulnerabilities within the Azure Blockchain Workbench key management system. We describe potential threats for each stage of the key management lifecycle based on public reports and then assess how likely those threats are to materialize within the Azure Blockchain Workbench environment, based on the technical documentation for Azure Blockchain Workbench and Azure Key Vault. Finally, we compile the results of our assessment into a key management threat table with three distinct degrees of protection: fully protected, partially protected, and not protected.
2303.02657
Sicong Liu
Xiao Tang, Sicong Liu, Xiaojiang Du, Mohsen Guizani
Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach
This paper has been submitted to IEEE Journal on Selected Areas in Communications
null
null
null
cs.LG cs.AI cs.IT cs.NI eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenges to access control and management. Exploiting the bursty nature of the access requests, sparse active user detection (SAUD) is an efficient enabler of access management, but the sparsity may deteriorate in the case of uncoordinated massive access requests. To dynamically preserve the sparsity of access requests, a reinforcement-learning (RL)-assisted scheme of closed-loop access control utilizing the access class barring technique is proposed, where the RL policy is determined through continuous interaction between the RL agent, i.e., a next generation NodeB (gNB), and the environment. The proposed scheme can be implemented by the near-real-time RAN intelligent controller (near-RT RIC) in O-RAN, supporting rapid switching between heterogeneous vertical applications, such as mMTC and uRLLC services. Moreover, a data-driven scheme of deep-RL-assisted SAUD is proposed to resolve highly complex environments with continuous and high-dimensional state and action spaces, where a replay buffer is applied for automatic large-scale data collection. An actor-critic framework is formulated to incorporate the strategy-learning modules into the near-RT RIC. Simulation results show that the proposed schemes can achieve superior performance in both access efficiency and user detection accuracy over the benchmark scheme for different heterogeneous services with massive access requests.
[ { "created": "Sun, 5 Mar 2023 12:25:49 GMT", "version": "v1" } ]
2023-03-07
[ [ "Tang", "Xiao", "" ], [ "Liu", "Sicong", "" ], [ "Du", "Xiaojiang", "" ], [ "Guizani", "Mohsen", "" ] ]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenges to access control and management. Exploiting the bursty nature of the access requests, sparse active user detection (SAUD) is an efficient enabler of access management, but the sparsity may deteriorate in the case of uncoordinated massive access requests. To dynamically preserve the sparsity of access requests, a reinforcement-learning (RL)-assisted scheme of closed-loop access control utilizing the access class barring technique is proposed, where the RL policy is determined through continuous interaction between the RL agent, i.e., a next generation NodeB (gNB), and the environment. The proposed scheme can be implemented by the near-real-time RAN intelligent controller (near-RT RIC) in O-RAN, supporting rapid switching between heterogeneous vertical applications, such as mMTC and uRLLC services. Moreover, a data-driven scheme of deep-RL-assisted SAUD is proposed to resolve highly complex environments with continuous and high-dimensional state and action spaces, where a replay buffer is applied for automatic large-scale data collection. An actor-critic framework is formulated to incorporate the strategy-learning modules into the near-RT RIC. Simulation results show that the proposed schemes can achieve superior performance in both access efficiency and user detection accuracy over the benchmark scheme for different heterogeneous services with massive access requests.
2212.12669
Ying Wen
Ying Wen, Ziyu Wan, Ming Zhou, Shufang Hou, Zhe Cao, Chenyang Le, Jingxiao Chen, Zheng Tian, Weinan Zhang, Jun Wang
On Realization of Intelligent Decision-Making in the Real World: A Foundation Decision Model Perspective
26 pages, 4 figures
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The pervasive uncertainty and dynamic nature of real-world environments present significant challenges for the widespread implementation of machine-driven Intelligent Decision-Making (IDM) systems. Consequently, IDM should possess the ability to continuously acquire new skills and effectively generalize across a broad range of applications. The advancement of Artificial General Intelligence (AGI) that transcends task and application boundaries is critical for enhancing IDM. Recent studies have extensively investigated the Transformer neural architecture as a foundational model for various tasks, including computer vision, natural language processing, and reinforcement learning. We propose that a Foundation Decision Model (FDM) can be developed by formulating diverse decision-making tasks as sequence decoding tasks using the Transformer architecture, offering a promising solution for expanding IDM applications in complex real-world situations. In this paper, we discuss the efficiency and generalization improvements offered by a foundation decision model for IDM and explore its potential applications in multi-agent game AI, production scheduling, and robotics tasks. Lastly, we present a case study demonstrating our FDM implementation, DigitalBrain (DB1) with 1.3 billion parameters, achieving human-level performance in 870 tasks, such as text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 represents an initial step toward more autonomous and efficient real-world IDM applications.
[ { "created": "Sat, 24 Dec 2022 06:16:45 GMT", "version": "v1" }, { "created": "Tue, 16 May 2023 07:03:19 GMT", "version": "v2" } ]
2023-05-17
[ [ "Wen", "Ying", "" ], [ "Wan", "Ziyu", "" ], [ "Zhou", "Ming", "" ], [ "Hou", "Shufang", "" ], [ "Cao", "Zhe", "" ], [ "Le", "Chenyang", "" ], [ "Chen", "Jingxiao", "" ], [ "Tian", "Zheng", "" ], [ "Zhang", "Weinan", "" ], [ "Wang", "Jun", "" ] ]
The pervasive uncertainty and dynamic nature of real-world environments present significant challenges for the widespread implementation of machine-driven Intelligent Decision-Making (IDM) systems. Consequently, IDM should possess the ability to continuously acquire new skills and effectively generalize across a broad range of applications. The advancement of Artificial General Intelligence (AGI) that transcends task and application boundaries is critical for enhancing IDM. Recent studies have extensively investigated the Transformer neural architecture as a foundational model for various tasks, including computer vision, natural language processing, and reinforcement learning. We propose that a Foundation Decision Model (FDM) can be developed by formulating diverse decision-making tasks as sequence decoding tasks using the Transformer architecture, offering a promising solution for expanding IDM applications in complex real-world situations. In this paper, we discuss the efficiency and generalization improvements offered by a foundation decision model for IDM and explore its potential applications in multi-agent game AI, production scheduling, and robotics tasks. Lastly, we present a case study demonstrating our FDM implementation, DigitalBrain (DB1) with 1.3 billion parameters, achieving human-level performance in 870 tasks, such as text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 represents an initial step toward more autonomous and efficient real-world IDM applications.
2109.03468
Florian Holzinger
Florian Holzinger, Michael Kommenda
Preprocessing and Modeling of Radial Fan Data for Health State Prediction
International Conference on Computer Aided Systems Theory, Eurocast 2019, pp 312-318
In: Moreno-D\'iaz R. et al (eds) Computer Aided Systems Theory, Eurocast 2019. Lecture Notes in Computer Science, vol 12013 (2020)
10.1007/978-3-030-45093-9_38
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monitoring critical components of systems is a crucial step towards failure safety. Affordable sensors are available and the industry is in the process of introducing and extending monitoring solutions to improve product quality. Often, no expertise exists regarding how much data is required for a certain task (e.g. monitoring). Especially in vital machinery, a trend toward excessive sensing may be noticed, both in quality and in quantity. This often results in an excessive generation of data, which must nonetheless be transferred, processed, and stored. In a previous case study, several sensors were mounted on a healthy radial fan, which was later artificially damaged. The gathered data was used for modeling (and therefore monitoring) a healthy state. The models were evaluated on a dataset created by using a faulty impeller. This paper focuses on the reduction of this data through downsampling and binning. Different models are created with linear regression and random forest regression and the resulting difference in quality is discussed.
[ { "created": "Wed, 8 Sep 2021 07:37:18 GMT", "version": "v1" } ]
2021-09-09
[ [ "Holzinger", "Florian", "" ], [ "Kommenda", "Michael", "" ] ]
Monitoring critical components of systems is a crucial step towards failure safety. Affordable sensors are available and the industry is in the process of introducing and extending monitoring solutions to improve product quality. Often, no expertise exists regarding how much data is required for a certain task (e.g. monitoring). Especially in vital machinery, a trend toward excessive sensing may be noticed, both in quality and in quantity. This often results in an excessive generation of data, which must nonetheless be transferred, processed, and stored. In a previous case study, several sensors were mounted on a healthy radial fan, which was later artificially damaged. The gathered data was used for modeling (and therefore monitoring) a healthy state. The models were evaluated on a dataset created by using a faulty impeller. This paper focuses on the reduction of this data through downsampling and binning. Different models are created with linear regression and random forest regression and the resulting difference in quality is discussed.
2106.15883
Jack Parker-Holder
Jack Parker-Holder and Vu Nguyen and Shaan Desai and Stephen Roberts
Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for continuous hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
[ { "created": "Wed, 30 Jun 2021 08:15:59 GMT", "version": "v1" } ]
2021-07-01
[ [ "Parker-Holder", "Jack", "" ], [ "Nguyen", "Vu", "" ], [ "Desai", "Shaan", "" ], [ "Roberts", "Stephen", "" ] ]
Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for continuous hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
2304.08085
Xiao Wang
Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, Chunsai Du
InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieved an F1 score of 18.22 on the OntoNotes dataset, which is significantly lower than the state-of-the-art performance. In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves comparable performance to BERT in supervised settings and significantly outperforms the state-of-the-art and GPT-3.5 in zero-shot settings.
[ { "created": "Mon, 17 Apr 2023 09:00:50 GMT", "version": "v1" } ]
2023-04-18
[ [ "Wang", "Xiao", "" ], [ "Zhou", "Weikang", "" ], [ "Zu", "Can", "" ], [ "Xia", "Han", "" ], [ "Chen", "Tianze", "" ], [ "Zhang", "Yuansen", "" ], [ "Zheng", "Rui", "" ], [ "Ye", "Junjie", "" ], [ "Zhang", "Qi", "" ], [ "Gui", "Tao", "" ], [ "Kang", "Jihua", "" ], [ "Yang", "Jingsheng", "" ], [ "Li", "Siyuan", "" ], [ "Du", "Chunsai", "" ] ]
Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieved an F1 score of 18.22 on the OntoNotes dataset, which is significantly lower than the state-of-the-art performance. In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves comparable performance to BERT in supervised settings and significantly outperforms the state-of-the-art and GPT-3.5 in zero-shot settings.
2407.02648
Yicheng Zeng
Yuhao Huang, Yicheng Zeng, Xiaobin Xiong
STRIDE: An Open-Source, Low-Cost, and Versatile Bipedal Robot Platform for Research and Education
8 pages, 8 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present STRIDE, a Simple, Terrestrial, Reconfigurable, Intelligent, Dynamic, and Educational bipedal platform. STRIDE aims to propel bipedal robotics research and education by providing a cost-effective implementation with step-by-step instructions for building a bipedal robotic platform while providing flexible customizations via a modular and durable design. Moreover, a versatile terrain setup and a quantitative disturbance injection system are augmented to the robot platform to replicate natural terrains and push forces that can be used to evaluate legged locomotion in practical and adversarial scenarios. We demonstrate the functionalities of this platform by realizing an adaptive step-to-step dynamics-based walking controller to achieve dynamic walking. Our work with the open-sourced implementation shows that STRIDE is a highly versatile and durable platform that can be used in research and education to evaluate locomotion algorithms, mechanical designs, and robust and adaptive controls.
[ { "created": "Tue, 2 Jul 2024 20:29:42 GMT", "version": "v1" } ]
2024-07-04
[ [ "Huang", "Yuhao", "" ], [ "Zeng", "Yicheng", "" ], [ "Xiong", "Xiaobin", "" ] ]
In this paper, we present STRIDE, a Simple, Terrestrial, Reconfigurable, Intelligent, Dynamic, and Educational bipedal platform. STRIDE aims to propel bipedal robotics research and education by providing a cost-effective implementation with step-by-step instructions for building a bipedal robotic platform while providing flexible customizations via a modular and durable design. Moreover, a versatile terrain setup and a quantitative disturbance injection system are augmented to the robot platform to replicate natural terrains and push forces that can be used to evaluate legged locomotion in practical and adversarial scenarios. We demonstrate the functionalities of this platform by realizing an adaptive step-to-step dynamics-based walking controller to achieve dynamic walking. Our work with the open-sourced implementation shows that STRIDE is a highly versatile and durable platform that can be used in research and education to evaluate locomotion algorithms, mechanical designs, and robust and adaptive controls.
2208.09910
Lihe Yang
Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi
Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation
Accepted by CVPR 2023. Code and logs: https://github.com/LiheYoung/UniMatch
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version. Intriguingly, we observe that such a simple pipeline already achieves competitive results against recent advanced works, when transferred to our segmentation scenario. Its success heavily relies on the manual design of strong data augmentations, however, which may be limited and inadequate to explore a broader perturbation space. Motivated by this, we propose an auxiliary feature perturbation stream as a supplement, leading to an expanded perturbation space. On the other hand, to sufficiently probe original image-level augmentations, we present a dual-stream perturbation technique, enabling two strong views to be simultaneously guided by a common weak view. Consequently, our overall Unified Dual-Stream Perturbations approach (UniMatch) surpasses all existing methods significantly across all evaluation protocols on the Pascal, Cityscapes, and COCO benchmarks. Its superiority is also demonstrated in remote sensing interpretation and medical image analysis. We hope our reproduced FixMatch and our results can inspire more future works. Code and logs are available at https://github.com/LiheYoung/UniMatch.
[ { "created": "Sun, 21 Aug 2022 15:32:43 GMT", "version": "v1" }, { "created": "Sun, 26 Mar 2023 07:10:13 GMT", "version": "v2" } ]
2023-03-28
[ [ "Yang", "Lihe", "" ], [ "Qi", "Lei", "" ], [ "Feng", "Litong", "" ], [ "Zhang", "Wayne", "" ], [ "Shi", "Yinghuan", "" ] ]
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version. Intriguingly, we observe that such a simple pipeline already achieves competitive results against recent advanced works, when transferred to our segmentation scenario. Its success heavily relies on the manual design of strong data augmentations, however, which may be limited and inadequate to explore a broader perturbation space. Motivated by this, we propose an auxiliary feature perturbation stream as a supplement, leading to an expanded perturbation space. On the other hand, to sufficiently probe original image-level augmentations, we present a dual-stream perturbation technique, enabling two strong views to be simultaneously guided by a common weak view. Consequently, our overall Unified Dual-Stream Perturbations approach (UniMatch) surpasses all existing methods significantly across all evaluation protocols on the Pascal, Cityscapes, and COCO benchmarks. Its superiority is also demonstrated in remote sensing interpretation and medical image analysis. We hope our reproduced FixMatch and our results can inspire more future works. Code and logs are available at https://github.com/LiheYoung/UniMatch.
1910.14520
Mo Yu
Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, Tian Gao
Do Multi-hop Readers Dream of Reasoning Chains?
Accepted by MRQA Workshop 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matching-based method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities.
[ { "created": "Thu, 31 Oct 2019 15:02:49 GMT", "version": "v1" } ]
2019-11-01
[ [ "Wang", "Haoyu", "" ], [ "Yu", "Mo", "" ], [ "Guo", "Xiaoxiao", "" ], [ "Das", "Rajarshi", "" ], [ "Xiong", "Wenhan", "" ], [ "Gao", "Tian", "" ] ]
General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matching-based method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities.
2107.14039
Joost Visser
Olivier Koster and Ruud Kosman and Joost Visser
A Checklist for Explainable AI in the Insurance Domain
Preprint of short paper for QUATIC 2021 conference
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) is a powerful tool to accomplish a great many tasks. This exciting branch of technology is being adopted increasingly across varying sectors, including the insurance domain. With that power arise several complications, one of which is a lack of transparency and explainability of an algorithm for experts and non-experts alike. This brings into question both the usefulness as well as the accuracy of the algorithm, coupled with an added difficulty to assess potential biases within the data or the model. In this paper, we investigate the current usage of AI algorithms in the Dutch insurance industry and the adoption of explainable artificial intelligence (XAI) techniques. Armed with this knowledge we design a checklist for insurance companies that should help assure quality standards regarding XAI and a solid foundation for cooperation between organisations. This checklist extends an existing checklist of SIVI, the standardisation institute for digital cooperation and innovation in Dutch insurance.
[ { "created": "Sun, 18 Jul 2021 10:19:04 GMT", "version": "v1" } ]
2021-07-30
[ [ "Koster", "Olivier", "" ], [ "Kosman", "Ruud", "" ], [ "Visser", "Joost", "" ] ]
Artificial intelligence (AI) is a powerful tool to accomplish a great many tasks. This exciting branch of technology is being adopted increasingly across varying sectors, including the insurance domain. With that power arise several complications, one of which is a lack of transparency and explainability of an algorithm for experts and non-experts alike. This brings into question both the usefulness as well as the accuracy of the algorithm, coupled with an added difficulty to assess potential biases within the data or the model. In this paper, we investigate the current usage of AI algorithms in the Dutch insurance industry and the adoption of explainable artificial intelligence (XAI) techniques. Armed with this knowledge we design a checklist for insurance companies that should help assure quality standards regarding XAI and a solid foundation for cooperation between organisations. This checklist extends an existing checklist of SIVI, the standardisation institute for digital cooperation and innovation in Dutch insurance.
2203.09227
Furong Ye
Furong Ye and Diederick L. Vermetten and Carola Doerr and Thomas B\"ack
Non-Elitist Selection Can Improve the Performance of Irace
PPSN 2022
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern optimization strategies such as evolutionary algorithms, ant colony algorithms, Bayesian optimization techniques, etc. come with several parameters that steer their behavior during the optimization process. To obtain high-performing algorithm instances, automated algorithm configuration techniques have been developed. One of the most popular tools is irace, which evaluates configurations in sequential races, making use of iterated statistical tests to discard poorly performing configurations. At the end of the race, a set of elite configurations are selected from those survivor configurations that were not discarded, using greedy truncation selection. We study two alternative selection methods: one keeps the best survivor and selects the remaining configurations uniformly at random from the set of survivors, while the other applies entropy to maximize the diversity of the elites. These methods are tested for tuning ant colony optimization algorithms for traveling salesperson problems and the quadratic assignment problem and tuning an exact tree search solver for satisfiability problems. The experimental results show improvement on the tested benchmarks compared to the default selection of irace. In addition, the obtained results indicate that non-elitist selection can obtain diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.
[ { "created": "Thu, 17 Mar 2022 10:34:30 GMT", "version": "v1" }, { "created": "Sun, 17 Apr 2022 20:29:41 GMT", "version": "v2" }, { "created": "Sat, 25 Jun 2022 23:45:24 GMT", "version": "v3" } ]
2022-06-28
[ [ "Ye", "Furong", "" ], [ "Vermetten", "Diederick L.", "" ], [ "Doerr", "Carola", "" ], [ "Bäck", "Thomas", "" ] ]
Modern optimization strategies such as evolutionary algorithms, ant colony algorithms, Bayesian optimization techniques, etc. come with several parameters that steer their behavior during the optimization process. To obtain high-performing algorithm instances, automated algorithm configuration techniques have been developed. One of the most popular tools is irace, which evaluates configurations in sequential races, making use of iterated statistical tests to discard poorly performing configurations. At the end of the race, a set of elite configurations are selected from those survivor configurations that were not discarded, using greedy truncation selection. We study two alternative selection methods: one keeps the best survivor and selects the remaining configurations uniformly at random from the set of survivors, while the other applies entropy to maximize the diversity of the elites. These methods are tested for tuning ant colony optimization algorithms for traveling salesperson problems and the quadratic assignment problem and tuning an exact tree search solver for satisfiability problems. The experimental results show improvement on the tested benchmarks compared to the default selection of irace. In addition, the obtained results indicate that non-elitist selection can obtain diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.
2107.14283
Kristina Sojakova
Kristina Sojakova
Syllepsis in Homotopy Type Theory
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well-known that in homotopy type theory (HoTT), one can prove the Eckmann-Hilton theorem: given two 2-loops p, q : 1 = 1 on the reflexivity path at an arbitrary point a : A, we have pq = qp. If we go one dimension higher, i.e., if p and q are 3-loops, we show that a property classically known as syllepsis also holds in HoTT: namely, the Eckmann-Hilton proof for q and p is the inverse of the Eckmann-Hilton proof for p and q.
[ { "created": "Thu, 29 Jul 2021 19:03:02 GMT", "version": "v1" } ]
2021-08-02
[ [ "Sojakova", "Kristina", "" ] ]
It is well-known that in homotopy type theory (HoTT), one can prove the Eckmann-Hilton theorem: given two 2-loops p, q : 1 = 1 on the reflexivity path at an arbitrary point a : A, we have pq = qp. If we go one dimension higher, i.e., if p and q are 3-loops, we show that a property classically known as syllepsis also holds in HoTT: namely, the Eckmann-Hilton proof for q and p is the inverse of the Eckmann-Hilton proof for p and q.
2211.07491
Yigit Baran Can
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, Luc Van Gool
Piecewise Planar Hulls for Semi-Supervised Learning of 3D Shape and Pose from 2D Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of estimating 3D shape and pose of an object in terms of keypoints, from a single 2D image. The shape and pose are learned directly from images collected by categories and their partial 2D keypoint annotations. In this work, we first propose an end-to-end training framework for intermediate 2D keypoints extraction and final 3D shape and pose estimation. The proposed framework is then trained using only the weak supervision of the intermediate 2D keypoints. Additionally, we devise a semi-supervised training framework that benefits from both labeled and unlabeled data. To leverage the unlabeled data, we introduce and exploit the \emph{piece-wise planar hull} prior of the canonical object shape. These planar hulls are defined manually once per object category, with the help of the keypoints. On the one hand, the proposed method learns to segment these planar hulls from the labeled data. On the other hand, it simultaneously enforces the consistency between predicted keypoints and the segmented hulls on the unlabeled data. The enforced consistency allows us to efficiently use the unlabeled data for the task at hand. The proposed method achieves comparable results with fully supervised state-of-the-art methods by using only half of the annotations. Our source code will be made publicly available.
[ { "created": "Mon, 14 Nov 2022 16:18:11 GMT", "version": "v1" } ]
2022-11-15
[ [ "Can", "Yigit Baran", "" ], [ "Liniger", "Alexander", "" ], [ "Paudel", "Danda Pani", "" ], [ "Van Gool", "Luc", "" ] ]
We study the problem of estimating 3D shape and pose of an object in terms of keypoints, from a single 2D image. The shape and pose are learned directly from images collected by categories and their partial 2D keypoint annotations. In this work, we first propose an end-to-end training framework for intermediate 2D keypoints extraction and final 3D shape and pose estimation. The proposed framework is then trained using only the weak supervision of the intermediate 2D keypoints. Additionally, we devise a semi-supervised training framework that benefits from both labeled and unlabeled data. To leverage the unlabeled data, we introduce and exploit the \emph{piece-wise planar hull} prior of the canonical object shape. These planar hulls are defined manually once per object category, with the help of the keypoints. On the one hand, the proposed method learns to segment these planar hulls from the labeled data. On the other hand, it simultaneously enforces the consistency between predicted keypoints and the segmented hulls on the unlabeled data. The enforced consistency allows us to efficiently use the unlabeled data for the task at hand. The proposed method achieves comparable results with fully supervised state-of-the-art methods by using only half of the annotations. Our source code will be made publicly available.
2312.08200
Yunchen Li
Yunchen Li, Zhou Yu, Gaoqi He, Yunhang Shen, Ke Li, Xing Sun, Shaohui Lin
SPD-DDPM: Denoising Diffusion Probabilistic Models in the Symmetric Positive Definite Space
AAAI2024
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Symmetric positive definite~(SPD) matrices have shown important value and applications in statistics and machine learning, such as FMRI analysis and traffic prediction. Previous works on SPD matrices mostly focus on discriminative models, where predictions are made directly on $E(X|y)$, where $y$ is a vector and $X$ is an SPD matrix. However, these methods are challenging to handle for large-scale data, as they need to access and process the whole data. In this paper, inspired by denoising diffusion probabilistic model~(DDPM), we propose a novel generative model, termed SPD-DDPM, by introducing Gaussian distribution in the SPD space to estimate $E(X|y)$. Moreover, our model is able to estimate $p(X)$ unconditionally and flexibly without giving $y$. On the one hand, the model conditionally learns $p(X|y)$ and utilizes the mean of samples to obtain $E(X|y)$ as a prediction. On the other hand, the model unconditionally learns the probability distribution of the data $p(X)$ and generates samples that conform to this distribution. Furthermore, we propose a new SPD net which is much deeper than the previous networks and allows for the inclusion of conditional factors. Experiment results on toy data and real taxi data demonstrate that our models effectively fit the data distribution both conditionally and unconditionally and provide accurate predictions.
[ { "created": "Wed, 13 Dec 2023 15:08:54 GMT", "version": "v1" } ]
2023-12-14
[ [ "Li", "Yunchen", "" ], [ "Yu", "Zhou", "" ], [ "He", "Gaoqi", "" ], [ "Shen", "Yunhang", "" ], [ "Li", "Ke", "" ], [ "Sun", "Xing", "" ], [ "Lin", "Shaohui", "" ] ]
Symmetric positive definite~(SPD) matrices have shown important value and applications in statistics and machine learning, such as FMRI analysis and traffic prediction. Previous works on SPD matrices mostly focus on discriminative models, where predictions are made directly on $E(X|y)$, where $y$ is a vector and $X$ is an SPD matrix. However, these methods are challenging to handle for large-scale data, as they need to access and process the whole data. In this paper, inspired by denoising diffusion probabilistic model~(DDPM), we propose a novel generative model, termed SPD-DDPM, by introducing Gaussian distribution in the SPD space to estimate $E(X|y)$. Moreover, our model is able to estimate $p(X)$ unconditionally and flexibly without giving $y$. On the one hand, the model conditionally learns $p(X|y)$ and utilizes the mean of samples to obtain $E(X|y)$ as a prediction. On the other hand, the model unconditionally learns the probability distribution of the data $p(X)$ and generates samples that conform to this distribution. Furthermore, we propose a new SPD net which is much deeper than the previous networks and allows for the inclusion of conditional factors. Experiment results on toy data and real taxi data demonstrate that our models effectively fit the data distribution both conditionally and unconditionally and provide accurate predictions.
2103.08469
Alexander Barbie
Alexander Barbie, Niklas Pech, Wilhelm Hasselbring, Sascha Fl\"ogel, Frank Wenzh\"ofer, Michael Walter, Elena Shchekinova, Marc Busse, Matthias T\"urk, Michael Hofbauer, Stefan Sommer
Developing an Underwater Network of Ocean Observation Systems with Digital Twin Prototypes -- A Field Report from the Baltic Sea
8 pages, 5 figures, to be published in IEEE Internet Computing
null
10.1109/MIC.2021.3065245
null
cs.SE cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the research cruise AL547 with RV ALKOR (October 20-31, 2020), a collaborative underwater network of ocean observation systems was deployed in Boknis Eck (SW Baltic Sea, German exclusive economic zone (EEZ)) in the context of the project ARCHES (Autonomous Robotic Networks to Help Modern Societies). This network was realized via a Digital Twin Prototype approach. During that period different scenarios were executed to demonstrate the feasibility of Digital Twins in an extreme environment such as underwater. One of the scenarios showed the collaboration of stage IV Digital Twins with their physical counterparts on the seafloor. This way, we address the research question, whether Digital Twins represent a feasible approach to operate mobile ad hoc networks for ocean and coastal observation.
[ { "created": "Mon, 15 Mar 2021 15:45:49 GMT", "version": "v1" } ]
2021-03-16
[ [ "Barbie", "Alexander", "" ], [ "Pech", "Niklas", "" ], [ "Hasselbring", "Wilhelm", "" ], [ "Flögel", "Sascha", "" ], [ "Wenzhöfer", "Frank", "" ], [ "Walter", "Michael", "" ], [ "Shchekinova", "Elena", "" ], [ "Busse", "Marc", "" ], [ "Türk", "Matthias", "" ], [ "Hofbauer", "Michael", "" ], [ "Sommer", "Stefan", "" ] ]
During the research cruise AL547 with RV ALKOR (October 20-31, 2020), a collaborative underwater network of ocean observation systems was deployed in Boknis Eck (SW Baltic Sea, German exclusive economic zone (EEZ)) in the context of the project ARCHES (Autonomous Robotic Networks to Help Modern Societies). This network was realized via a Digital Twin Prototype approach. During that period different scenarios were executed to demonstrate the feasibility of Digital Twins in an extreme environment such as underwater. One of the scenarios showed the collaboration of stage IV Digital Twins with their physical counterparts on the seafloor. This way, we address the research question, whether Digital Twins represent a feasible approach to operate mobile ad hoc networks for ocean and coastal observation.
2010.00866
Wolfgang Fuhl
Wolfgang Fuhl, Enkelejda Kasneci
Weight and Gradient Centralization in Deep Neural Networks
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Batch normalization is currently the most widely used variant of internal normalization for deep neural networks. Additional work has shown that the normalization of weights and additional conditioning as well as the normalization of gradients further improve the generalization. In this work, we combine several of these methods and thereby increase the generalization of the networks. The advantage of the newer methods compared to the batch normalization is not only increased generalization, but also that these methods only have to be applied during training and, therefore, do not influence the running time during use. Link to CUDA code https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/
[ { "created": "Fri, 2 Oct 2020 08:50:04 GMT", "version": "v1" }, { "created": "Fri, 30 Oct 2020 14:13:39 GMT", "version": "v2" }, { "created": "Sun, 17 Jan 2021 12:05:14 GMT", "version": "v3" } ]
2021-01-19
[ [ "Fuhl", "Wolfgang", "" ], [ "Kasneci", "Enkelejda", "" ] ]
Batch normalization is currently the most widely used variant of internal normalization for deep neural networks. Additional work has shown that the normalization of weights and additional conditioning as well as the normalization of gradients further improve the generalization. In this work, we combine several of these methods and thereby increase the generalization of the networks. The advantage of the newer methods compared to the batch normalization is not only increased generalization, but also that these methods only have to be applied during training and, therefore, do not influence the running time during use. Link to CUDA code https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/
2008.01701
Aupendu Kar
Aupendu Kar, Sobhan Kanti Dhara, Debashis Sen, Prabir Kumar Biswas
Progressive Update Guided Interdependent Networks for Single Image Dehazing
First two authors contributed equally. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Project Website: https://aupendu.github.io/progressive-dehaze
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Images with haze of different varieties often pose a significant challenge to dehazing. Therefore, guidance by estimates of haze parameters related to the variety would be beneficial, and their progressive update jointly with haze reduction will allow effective dehazing. To this end, we propose a multi-network dehazing framework containing novel interdependent dehazing and haze parameter updater networks that operate in a progressive manner. The haze parameters, transmission map and atmospheric light, are first estimated using dedicated convolutional networks that allow color-cast handling. The estimated parameters are then used to guide our dehazing module, where the estimates are progressively updated by novel convolutional networks. The updating takes place jointly with progressive dehazing using a network that invokes inter-step dependencies. The joint progressive updating and dehazing gradually modify the haze parameter values toward achieving effective dehazing. Through different studies, our dehazing framework is shown to be more effective than image-to-image mapping and predefined haze formation model based dehazing. The framework is also found capable of handling a wide variety of hazy conditions with different types and amounts of haze and color casts. Our dehazing framework is qualitatively and quantitatively found to outperform the state-of-the-art on synthetic and real-world hazy images of multiple datasets with varied haze conditions.
[ { "created": "Tue, 4 Aug 2020 17:05:48 GMT", "version": "v1" }, { "created": "Tue, 8 Feb 2022 17:42:12 GMT", "version": "v2" }, { "created": "Tue, 27 Dec 2022 09:09:33 GMT", "version": "v3" }, { "created": "Wed, 7 Jun 2023 17:28:39 GMT", "version": "v4" } ]
2023-06-08
[ [ "Kar", "Aupendu", "" ], [ "Dhara", "Sobhan Kanti", "" ], [ "Sen", "Debashis", "" ], [ "Biswas", "Prabir Kumar", "" ] ]
Images with haze of different varieties often pose a significant challenge to dehazing. Therefore, guidance by estimates of haze parameters related to the variety would be beneficial, and their progressive update jointly with haze reduction will allow effective dehazing. To this end, we propose a multi-network dehazing framework containing novel interdependent dehazing and haze parameter updater networks that operate in a progressive manner. The haze parameters, transmission map and atmospheric light, are first estimated using dedicated convolutional networks that allow color-cast handling. The estimated parameters are then used to guide our dehazing module, where the estimates are progressively updated by novel convolutional networks. The updating takes place jointly with progressive dehazing using a network that invokes inter-step dependencies. The joint progressive updating and dehazing gradually modify the haze parameter values toward achieving effective dehazing. Through different studies, our dehazing framework is shown to be more effective than image-to-image mapping and predefined haze formation model based dehazing. The framework is also found capable of handling a wide variety of hazy conditions with different types and amounts of haze and color casts. Our dehazing framework is qualitatively and quantitatively found to outperform the state-of-the-art on synthetic and real-world hazy images of multiple datasets with varied haze conditions.
2206.09106
Zhengyi Luo
Zhengyi Luo, Shun Iwase, Ye Yuan, Kris Kitani
Embodied Scene-aware Human Pose Estimation
NeurIPS 2022. Project website: https://zhengyiluo.github.io/projects/embodied_pose/. Zhengyi Luo and Shun Iwase contributed equally
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose embodied scene-aware human pose estimation where we estimate 3D poses based on a simulated agent's proprioception and scene awareness, along with external third-person observations. Unlike prior methods that often resort to multistage optimization, non-causal inference, and complex contact modeling to estimate human pose and human scene interactions, our method is one-stage, causal, and recovers global 3D human poses in a simulated environment. Since 2D third-person observations are coupled with the camera pose, we propose to disentangle the camera pose and use a multi-step projection gradient defined in the global coordinate frame as the movement cue for our embodied agent. Leveraging a physics simulation and prescanned scenes (e.g., 3D mesh), we simulate our agent in everyday environments (library, office, bedroom, etc.) and equip our agent with environmental sensors to intelligently navigate and interact with the geometries of the scene. Our method also relies only on 2D keypoints and can be trained on synthetic datasets derived from popular human motion databases. To evaluate, we use the popular H36M and PROX datasets and achieve high quality pose estimation on the challenging PROX dataset without ever using PROX motion sequences for training. Code and videos are available on the project page.
[ { "created": "Sat, 18 Jun 2022 03:50:19 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 20:13:26 GMT", "version": "v2" }, { "created": "Thu, 13 Oct 2022 20:31:36 GMT", "version": "v3" } ]
2022-10-17
[ [ "Luo", "Zhengyi", "" ], [ "Iwase", "Shun", "" ], [ "Yuan", "Ye", "" ], [ "Kitani", "Kris", "" ] ]
We propose embodied scene-aware human pose estimation where we estimate 3D poses based on a simulated agent's proprioception and scene awareness, along with external third-person observations. Unlike prior methods that often resort to multistage optimization, non-causal inference, and complex contact modeling to estimate human pose and human scene interactions, our method is one-stage, causal, and recovers global 3D human poses in a simulated environment. Since 2D third-person observations are coupled with the camera pose, we propose to disentangle the camera pose and use a multi-step projection gradient defined in the global coordinate frame as the movement cue for our embodied agent. Leveraging a physics simulation and prescanned scenes (e.g., 3D mesh), we simulate our agent in everyday environments (library, office, bedroom, etc.) and equip our agent with environmental sensors to intelligently navigate and interact with the geometries of the scene. Our method also relies only on 2D keypoints and can be trained on synthetic datasets derived from popular human motion databases. To evaluate, we use the popular H36M and PROX datasets and achieve high quality pose estimation on the challenging PROX dataset without ever using PROX motion sequences for training. Code and videos are available on the project page.
2402.18792
Fangyuan Zhang
Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang
MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks
null
null
null
null
cs.LG cs.CL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have been proven to be vulnerable to adversarial examples and various methods have been proposed to defend against adversarial attacks for natural language processing tasks. However, previous defense methods have limitations in maintaining effective defense while ensuring the performance of the original task. In this paper, we propose a malicious perturbation based adversarial training method (MPAT) for building robust deep neural networks against textual adversarial attacks. Specifically, we construct a multi-level malicious example generation strategy to generate adversarial examples with malicious perturbations, which are used instead of original inputs for model training. Additionally, we employ a novel training objective function to ensure achieving the defense goal without compromising the performance on the original task. We conduct comprehensive experiments to evaluate our defense method by attacking five victim models on three benchmark datasets. The result demonstrates that our method is more effective against malicious adversarial attacks compared with previous defense methods while maintaining or further improving the performance on the original task.
[ { "created": "Thu, 29 Feb 2024 01:49:18 GMT", "version": "v1" } ]
2024-03-01
[ [ "Zhang", "Fangyuan", "" ], [ "Zhou", "Huichi", "" ], [ "Li", "Shuangjiao", "" ], [ "Wang", "Hongtao", "" ] ]
Deep neural networks have been proven to be vulnerable to adversarial examples and various methods have been proposed to defend against adversarial attacks for natural language processing tasks. However, previous defense methods have limitations in maintaining effective defense while ensuring the performance of the original task. In this paper, we propose a malicious perturbation based adversarial training method (MPAT) for building robust deep neural networks against textual adversarial attacks. Specifically, we construct a multi-level malicious example generation strategy to generate adversarial examples with malicious perturbations, which are used instead of original inputs for model training. Additionally, we employ a novel training objective function to ensure achieving the defense goal without compromising the performance on the original task. We conduct comprehensive experiments to evaluate our defense method by attacking five victim models on three benchmark datasets. The result demonstrates that our method is more effective against malicious adversarial attacks compared with previous defense methods while maintaining or further improving the performance on the original task.
1507.08600
Alexey Sorokin
Alexey Sorokin
Normal forms for linear displacement context-free grammars
5 pages, just for educational and referential purposes, therefore no references
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we prove several results on normal forms for linear displacement context-free grammars. The results themselves are rather simple and use well-known techniques, but they are extensively used in more complex constructions. Therefore this article mostly serves educational and referential purposes.
[ { "created": "Thu, 30 Jul 2015 17:54:59 GMT", "version": "v1" } ]
2015-07-31
[ [ "Sorokin", "Alexey", "" ] ]
In this paper we prove several results on normal forms for linear displacement context-free grammars. The results themselves are rather simple and use well-known techniques, but they are extensively used in more complex constructions. Therefore this article mostly serves educational and referential purposes.
2407.16680
Adrian Remonda
Adrian Remonda, Nicklas Hansen, Ayoub Raji, Nicola Musiu, Marko Bertogna, Eduardo Veas, Xiaolong Wang
A Simulation Benchmark for Autonomous Racing with Large-Scale Human Data
Project page and code can be found at: \url{https://assetto-corsa-gym.github.io/}
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
Despite the availability of international prize-money competitions, scaled vehicles, and simulation environments, research on autonomous racing and the control of sports cars operating close to the limit of handling has been limited by the high costs of vehicle acquisition and management, as well as the limited physics accuracy of open-source simulators. In this paper, we propose a racing simulation platform based on the simulator Assetto Corsa to test, validate, and benchmark autonomous driving algorithms, including reinforcement learning (RL) and classical Model Predictive Control (MPC), in realistic and challenging scenarios. Our contributions include the development of this simulation platform, several state-of-the-art algorithms tailored to the racing environment, and a comprehensive dataset collected from human drivers. Additionally, we evaluate algorithms in the offline RL setting. All the necessary code (including environment and benchmarks), working examples, datasets, and videos are publicly released and can be found at: https://assetto-corsa-gym.github.io
[ { "created": "Tue, 23 Jul 2024 17:45:16 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 10:58:48 GMT", "version": "v2" } ]
2024-07-25
[ [ "Remonda", "Adrian", "" ], [ "Hansen", "Nicklas", "" ], [ "Raji", "Ayoub", "" ], [ "Musiu", "Nicola", "" ], [ "Bertogna", "Marko", "" ], [ "Veas", "Eduardo", "" ], [ "Wang", "Xiaolong", "" ] ]
Despite the availability of international prize-money competitions, scaled vehicles, and simulation environments, research on autonomous racing and the control of sports cars operating close to the limit of handling has been limited by the high costs of vehicle acquisition and management, as well as the limited physics accuracy of open-source simulators. In this paper, we propose a racing simulation platform based on the simulator Assetto Corsa to test, validate, and benchmark autonomous driving algorithms, including reinforcement learning (RL) and classical Model Predictive Control (MPC), in realistic and challenging scenarios. Our contributions include the development of this simulation platform, several state-of-the-art algorithms tailored to the racing environment, and a comprehensive dataset collected from human drivers. Additionally, we evaluate algorithms in the offline RL setting. All the necessary code (including environment and benchmarks), working examples, datasets, and videos are publicly released and can be found at: https://assetto-corsa-gym.github.io
cs/0307054
Mihai V. Putz
Viorel Putz and Mihai V. Putz
Contributions to the Development and Improvement of a Regulatory and Pre-Regulatory Digitally System for the Tools within Flexible Fabrication Systems
5 pages, 3 figures
null
null
null
cs.CE cs.SE
null
The paper reports the obtained results for the design and realization of a digital system aiming to assist the equipment for regulatory and pre-regulatory tools and holding tools within the flexible fabrication systems (FFS). Moreover, based on the present results, the same methodology can be applied for assisting tools from the point of view of their integrity and of wear compensation in the FFS framework.
[ { "created": "Thu, 24 Jul 2003 14:01:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Putz", "Viorel", "" ], [ "Putz", "Mihai V.", "" ] ]
The paper reports the obtained results for the design and realization of a digital system aiming to assist the equipment for regulatory and pre-regulatory tools and holding tools within the flexible fabrication systems (FFS). Moreover, based on the present results, the same methodology can be applied for assisting tools from the point of view of their integrity and of wear compensation in the FFS framework.
1611.06314
Georgios Giasemidis Dr
Georgios Giasemidis, Colin Singleton, Ioannis Agrafiotis, Jason R.C. Nurse, Alan Pilgrim, Chris Willis, Danica Vukadinovic Greetham
Determining the Veracity of Rumours on Twitter
21 pages, 6 figures, 2 tables
SocInfo 2016, Part I, LNCS 10046, pp. 185-205, 2016
10.1007/978-3-319-47880-7_12
null
cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While social networks can provide an ideal platform for up-to-date information from individuals across the world, it has also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense from social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users' past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the authors' profile and past behaviour, the social network connections (graphs), and the content of tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time-windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. Finally we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.
[ { "created": "Sat, 19 Nov 2016 06:22:50 GMT", "version": "v1" } ]
2016-11-22
[ [ "Giasemidis", "Georgios", "" ], [ "Singleton", "Colin", "" ], [ "Agrafiotis", "Ioannis", "" ], [ "Nurse", "Jason R. C.", "" ], [ "Pilgrim", "Alan", "" ], [ "Willis", "Chris", "" ], [ "Greetham", "Danica Vukadinovic", "" ] ]
While social networks can provide an ideal platform for up-to-date information from individuals across the world, it has also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense from social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users' past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the authors' profile and past behaviour, the social network connections (graphs), and the content of tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time-windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. Finally we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.
1504.01928
EPTCS
Nikolay Pakulin (Institute for System Programming Russian Academy of Sciences), Alexander K. Petrenko (Institute for System Programming Russian Academy of Sciences), Bernd-Holger Schlingloff (Humboldt-Universit\"at zu Berlin, Institut f\"ur Informatik)
Proceedings Tenth Workshop on Model Based Testing
null
EPTCS 180, 2015
10.4204/EPTCS.180
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but is capable also to find errors in the models themselves. The intent of this workshop is to bring together researchers and users of model-based testing techniques and tools to discuss the state of the art in theory, applications, tools, and industrialization of model-based testing and related domains.
[ { "created": "Wed, 8 Apr 2015 12:05:03 GMT", "version": "v1" } ]
2015-04-09
[ [ "Pakulin", "Nikolay", "", "Institute for System Programming Russian Academy of\n Sciences" ], [ "Petrenko", "Alexander K.", "", "Institute for System Programming Russian\n Academy of Sciences" ], [ "Schlingloff", "Bernd-Holger", "", "Humboldt-Universität zu\n Berlin, Institut für Informatik" ] ]
The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but is capable also to find errors in the models themselves. The intent of this workshop is to bring together researchers and users of model-based testing techniques and tools to discuss the state of the art in theory, applications, tools, and industrialization of model-based testing and related domains.
1807.00637
Shaked Perek
Shaked Perek, Alon Hazan, Ella Barkan, Ayelet Akselrod-Ballin
Mammography Dual View Mass Correspondence
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard breast cancer screening involves the acquisition of two mammography X-ray projections for each breast. Typically, a comparison of both views supports the challenging task of tumor detection and localization. We introduce a deep learning, patch-based Siamese network for lesion matching in dual-view mammography. Our locally-fitted approach generates a joint patch pair representation and comparison with a shared configuration between the two views. We performed a comprehensive set of experiments with the network on standard datasets, among them the large Digital Database for Screening Mammography (DDSM). We analyzed the effect of transfer learning with the network between different types of datasets and compared the network-based matching to using Euclidean distance by template matching. Finally, we evaluated the contribution of the matching network in a full detection pipeline. Experimental results demonstrate the promise of improved detection accuracy using our approach.
[ { "created": "Mon, 2 Jul 2018 12:52:24 GMT", "version": "v1" } ]
2018-07-03
[ [ "Perek", "Shaked", "" ], [ "Hazan", "Alon", "" ], [ "Barkan", "Ella", "" ], [ "Akselrod-Ballin", "Ayelet", "" ] ]
Standard breast cancer screening involves the acquisition of two mammography X-ray projections for each breast. Typically, a comparison of both views supports the challenging task of tumor detection and localization. We introduce a deep learning, patch-based Siamese network for lesion matching in dual-view mammography. Our locally-fitted approach generates a joint patch pair representation and comparison with a shared configuration between the two views. We performed a comprehensive set of experiments with the network on standard datasets, among them the large Digital Database for Screening Mammography (DDSM). We analyzed the effect of transfer learning with the network between different types of datasets and compared the network-based matching to using Euclidean distance by template matching. Finally, we evaluated the contribution of the matching network in a full detection pipeline. Experimental results demonstrate the promise of improved detection accuracy using our approach.
1308.0090
Alex James Dr
A. P. James and L.R.V.J. Francis and D. Kumar
Resistive Threshold Logic
Memristors, Brain inspired logic circuits. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2013
null
10.1109/TVLSI.2012.2232946
null
cs.ET cs.AR
http://creativecommons.org/licenses/by/3.0/
We report a resistance-based threshold logic family useful for mimicking brain-like large-variable logic functions in VLSI. A universal Boolean logic cell based on an analog resistive divider and a threshold logic circuit is presented. The resistive divider is implemented using memristors and provides an output voltage that is a summation of weighted products of the input voltages. The output of the resistive divider is converted into a binary value by a threshold operation implemented by a CMOS inverter and/or an Opamp. A universal cell structure is presented to decrease the overall implementation complexity and the number of components. When the number of input variables becomes very high, the proposed cell offers the advantages of smaller area and design simplicity in comparison with CMOS-based logic circuits.
[ { "created": "Thu, 1 Aug 2013 04:17:30 GMT", "version": "v1" } ]
2013-08-03
[ [ "James", "A. P.", "" ], [ "Francis", "L. R. V. J.", "" ], [ "Kumar", "D.", "" ] ]
We report a resistance-based threshold logic family useful for mimicking brain-like large-variable logic functions in VLSI. A universal Boolean logic cell based on an analog resistive divider and a threshold logic circuit is presented. The resistive divider is implemented using memristors and provides an output voltage that is a summation of weighted products of the input voltages. The output of the resistive divider is converted into a binary value by a threshold operation implemented by a CMOS inverter and/or an Opamp. A universal cell structure is presented to decrease the overall implementation complexity and the number of components. When the number of input variables becomes very high, the proposed cell offers the advantages of smaller area and design simplicity in comparison with CMOS-based logic circuits.
2405.06761
Aida Manzano Kharman
Aida Manzano Kharman, Pietro Ferraro, Homayoun Hamedmoghadam, Robert Shorten
Tree Proof-of-Position Algorithms
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a novel class of proof-of-position algorithms: Tree-Proof-of-Position (T-PoP). This algorithm is decentralised, collaborative and can be computed in a privacy preserving manner, such that agents do not need to reveal their position publicly. We make no assumptions of honest behaviour in the system, and consider varying ways in which agents may misbehave. Our algorithm is therefore resilient to highly adversarial scenarios. This makes it suitable for a wide class of applications, namely those in which trust in a centralised infrastructure may not be assumed, or high security risk scenarios. Our algorithm has a worst case quadratic runtime, making it suitable for hardware constrained IoT applications. We also provide a mathematical model that summarises T-PoP's performance for varying operating conditions. We then simulate T-PoP's behaviour with a large number of agent-based simulations, which are in complete agreement with our mathematical model, thus demonstrating its validity. T-PoP can achieve high levels of reliability and security by tuning its operating conditions, both in high and low density environments. Finally, we also present a mathematical model to probabilistically detect platooning attacks.
[ { "created": "Fri, 10 May 2024 18:26:01 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 15:13:08 GMT", "version": "v2" } ]
2024-06-05
[ [ "Kharman", "Aida Manzano", "" ], [ "Ferraro", "Pietro", "" ], [ "Hamedmoghadam", "Homayoun", "" ], [ "Shorten", "Robert", "" ] ]
We present a novel class of proof-of-position algorithms: Tree-Proof-of-Position (T-PoP). This algorithm is decentralised, collaborative and can be computed in a privacy preserving manner, such that agents do not need to reveal their position publicly. We make no assumptions of honest behaviour in the system, and consider varying ways in which agents may misbehave. Our algorithm is therefore resilient to highly adversarial scenarios. This makes it suitable for a wide class of applications, namely those in which trust in a centralised infrastructure may not be assumed, or high security risk scenarios. Our algorithm has a worst case quadratic runtime, making it suitable for hardware constrained IoT applications. We also provide a mathematical model that summarises T-PoP's performance for varying operating conditions. We then simulate T-PoP's behaviour with a large number of agent-based simulations, which are in complete agreement with our mathematical model, thus demonstrating its validity. T-PoP can achieve high levels of reliability and security by tuning its operating conditions, both in high and low density environments. Finally, we also present a mathematical model to probabilistically detect platooning attacks.
2112.14055
EPTCS
Samuel Mimram (Ecole polytechnique), Aly-Bora Ulusoy (Ecole polytechnique)
Syntactic Regions for Concurrent Programs
In Proceedings MFPS 2021, arXiv:2112.13746
EPTCS 351, 2021, pp. 184-199
10.4204/EPTCS.351.12
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
In order to gain a better understanding of the state space of programs, with the aim of making their verification more tractable, models based on directed topological spaces have been introduced, allowing one to take into account equivalence between execution traces, as well as to translate features of the execution (such as the presence of deadlocks) into geometrical situations. In this context, many algorithms were introduced, based on a description of the geometrical models as regions consisting of unions of rectangles. We explain here that these constructions can actually be performed directly on the syntax of programs, thus resulting in representations which are more natural and easier to implement. In order to do so, we start from the observation that positions in a program can be described as partial explorations of the program. The operational semantics induces a partial order on positions, and regions can be defined as formal unions of intervals in the resulting poset. We then study the structure of such regions and show that, under reasonable conditions, they form a Boolean algebra and admit a representation in normal form (which corresponds to covering a space by maximal intervals), thus supporting the constructions needed for the purpose of studying programs. All the operations involved here are given explicit algorithmic descriptions.
[ { "created": "Tue, 28 Dec 2021 09:09:02 GMT", "version": "v1" } ]
2021-12-30
[ [ "Mimram", "Samuel", "", "Ecole polytechnique" ], [ "Ulusoy", "Aly-Bora", "", "Ecole\n polytechnique" ] ]
In order to gain a better understanding of the state space of programs, with the aim of making their verification more tractable, models based on directed topological spaces have been introduced, allowing one to take into account equivalence between execution traces, as well as to translate features of the execution (such as the presence of deadlocks) into geometrical situations. In this context, many algorithms were introduced, based on a description of the geometrical models as regions consisting of unions of rectangles. We explain here that these constructions can actually be performed directly on the syntax of programs, thus resulting in representations which are more natural and easier to implement. In order to do so, we start from the observation that positions in a program can be described as partial explorations of the program. The operational semantics induces a partial order on positions, and regions can be defined as formal unions of intervals in the resulting poset. We then study the structure of such regions and show that, under reasonable conditions, they form a Boolean algebra and admit a representation in normal form (which corresponds to covering a space by maximal intervals), thus supporting the constructions needed for the purpose of studying programs. All the operations involved here are given explicit algorithmic descriptions.
2308.05948
Yiyang Cai
Yiyang Cai, Jiaming Lu, Jiewen Wang, Shuang Liang
Uncertainty-Aware Cross-Modal Transfer Network for Sketch-Based 3D Shape Retrieval
6 pages, 7 figures; To be published in IEEE International Conference on Multimedia and Expo 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, sketch-based 3D shape retrieval has attracted growing attention. While many previous studies have focused on cross-modal matching between hand-drawn sketches and 3D shapes, the critical issue of how to handle low-quality and noisy samples in sketch data has been largely neglected. This paper presents an uncertainty-aware cross-modal transfer network (UACTN) that addresses this issue. UACTN decouples the representation learning of sketches and 3D shapes into two separate tasks: classification-based sketch uncertainty learning and 3D shape feature transfer. We first introduce an end-to-end classification-based approach that simultaneously learns sketch features and uncertainty, allowing uncertainty to prevent overfitting noisy sketches by assigning different levels of importance to clean and noisy sketches. Then, 3D shape features are mapped into the pre-learned sketch embedding space for feature alignment. Extensive experiments and ablation studies on two benchmarks demonstrate the superiority of our proposed method compared to state-of-the-art methods.
[ { "created": "Fri, 11 Aug 2023 05:46:52 GMT", "version": "v1" } ]
2023-08-14
[ [ "Cai", "Yiyang", "" ], [ "Lu", "Jiaming", "" ], [ "Wang", "Jiewen", "" ], [ "Liang", "Shuang", "" ] ]
In recent years, sketch-based 3D shape retrieval has attracted growing attention. While many previous studies have focused on cross-modal matching between hand-drawn sketches and 3D shapes, the critical issue of how to handle low-quality and noisy samples in sketch data has been largely neglected. This paper presents an uncertainty-aware cross-modal transfer network (UACTN) that addresses this issue. UACTN decouples the representation learning of sketches and 3D shapes into two separate tasks: classification-based sketch uncertainty learning and 3D shape feature transfer. We first introduce an end-to-end classification-based approach that simultaneously learns sketch features and uncertainty, allowing uncertainty to prevent overfitting noisy sketches by assigning different levels of importance to clean and noisy sketches. Then, 3D shape features are mapped into the pre-learned sketch embedding space for feature alignment. Extensive experiments and ablation studies on two benchmarks demonstrate the superiority of our proposed method compared to state-of-the-art methods.
2202.06268
Nannan Li
Nannan Li, Yaran Chen, Weifan Li, Zixiang Ding, Dongbin Zhao
BViT: Broad Attention based Vision Transformer
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works have demonstrated that transformers can achieve promising performance in computer vision by exploiting the relationships among image patches with self-attention. However, they only consider the attention within a single feature layer and ignore the complementarity of attention at different levels. In this paper, we propose broad attention, which improves performance by incorporating the attention relationships of different layers in the vision transformer; the resulting model is called BViT. The broad attention is implemented by broad connection and parameter-free attention. The broad connection of each transformer layer promotes the transmission and integration of information for BViT. Without introducing additional trainable parameters, parameter-free attention jointly focuses on the attention information already available in different layers to extract useful information and build their relationships. Experiments on image classification tasks demonstrate that BViT delivers state-of-the-art top-1 accuracy of 74.8\%/81.6\% on ImageNet with 5M/22M parameters. Moreover, we transfer BViT to downstream object recognition benchmarks, achieving 98.9\% and 89.9\% on CIFAR10 and CIFAR100, respectively, which exceeds ViT with fewer parameters. In the generalization test, the broad attention in Swin Transformer and T2T-ViT also brings an improvement of more than 1\%. To sum up, broad attention is promising for promoting the performance of attention-based models. Code and pre-trained models are available at https://github.com/DRL-CASIA/Broad_ViT.
[ { "created": "Sun, 13 Feb 2022 09:23:29 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 06:08:37 GMT", "version": "v2" } ]
2023-06-12
[ [ "Li", "Nannan", "" ], [ "Chen", "Yaran", "" ], [ "Li", "Weifan", "" ], [ "Ding", "Zixiang", "" ], [ "Zhao", "Dongbin", "" ] ]
Recent works have demonstrated that transformers can achieve promising performance in computer vision by exploiting the relationships among image patches with self-attention. However, they only consider the attention within a single feature layer and ignore the complementarity of attention at different levels. In this paper, we propose broad attention, which improves performance by incorporating the attention relationships of different layers in the vision transformer; the resulting model is called BViT. The broad attention is implemented by broad connection and parameter-free attention. The broad connection of each transformer layer promotes the transmission and integration of information for BViT. Without introducing additional trainable parameters, parameter-free attention jointly focuses on the attention information already available in different layers to extract useful information and build their relationships. Experiments on image classification tasks demonstrate that BViT delivers state-of-the-art top-1 accuracy of 74.8\%/81.6\% on ImageNet with 5M/22M parameters. Moreover, we transfer BViT to downstream object recognition benchmarks, achieving 98.9\% and 89.9\% on CIFAR10 and CIFAR100, respectively, which exceeds ViT with fewer parameters. In the generalization test, the broad attention in Swin Transformer and T2T-ViT also brings an improvement of more than 1\%. To sum up, broad attention is promising for promoting the performance of attention-based models. Code and pre-trained models are available at https://github.com/DRL-CASIA/Broad_ViT.
2404.01268
Weixin Liang
Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, Diyi Yang, Christopher Potts, Christopher D Manning, James Y. Zou
Mapping the Increasing Use of LLMs in Scientific Papers
null
null
null
null
cs.CL cs.AI cs.DL cs.LG cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Scientific publishing lays the foundation of science by disseminating research findings, fostering collaboration, encouraging reproducibility, and ensuring that scientific knowledge is accessible, verifiable, and built upon over time. Recently, there has been immense speculation about how many people are using large language models (LLMs) like ChatGPT in their academic writing, and to what extent this tool might have an effect on global scientific practices. However, we lack a precise measure of the proportion of academic writing substantially modified or produced by LLMs. To address this gap, we conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals, using a population-level statistical framework to measure the prevalence of LLM-modified content over time. Our statistical estimation operates on the corpus level and is more robust than inference on individual instances. Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers (up to 17.5%). In comparison, Mathematics papers and the Nature portfolio showed the least LLM modification (up to 6.3%). Moreover, at an aggregate level, our analysis reveals that higher levels of LLM modification are associated with papers whose first authors post preprints more frequently, papers in more crowded research areas, and papers of shorter lengths. Our findings suggest that LLMs are being broadly used in scientific writing.
[ { "created": "Mon, 1 Apr 2024 17:45:15 GMT", "version": "v1" } ]
2024-04-02
[ [ "Liang", "Weixin", "" ], [ "Zhang", "Yaohui", "" ], [ "Wu", "Zhengxuan", "" ], [ "Lepp", "Haley", "" ], [ "Ji", "Wenlong", "" ], [ "Zhao", "Xuandong", "" ], [ "Cao", "Hancheng", "" ], [ "Liu", "Sheng", "" ], [ "He", "Siyu", "" ], [ "Huang", "Zhi", "" ], [ "Yang", "Diyi", "" ], [ "Potts", "Christopher", "" ], [ "Manning", "Christopher D", "" ], [ "Zou", "James Y.", "" ] ]
Scientific publishing lays the foundation of science by disseminating research findings, fostering collaboration, encouraging reproducibility, and ensuring that scientific knowledge is accessible, verifiable, and built upon over time. Recently, there has been immense speculation about how many people are using large language models (LLMs) like ChatGPT in their academic writing, and to what extent this tool might have an effect on global scientific practices. However, we lack a precise measure of the proportion of academic writing substantially modified or produced by LLMs. To address this gap, we conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals, using a population-level statistical framework to measure the prevalence of LLM-modified content over time. Our statistical estimation operates on the corpus level and is more robust than inference on individual instances. Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers (up to 17.5%). In comparison, Mathematics papers and the Nature portfolio showed the least LLM modification (up to 6.3%). Moreover, at an aggregate level, our analysis reveals that higher levels of LLM modification are associated with papers whose first authors post preprints more frequently, papers in more crowded research areas, and papers of shorter lengths. Our findings suggest that LLMs are being broadly used in scientific writing.
1604.03763
Shun Zheng
Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang
A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
null
null
null
null
cs.LG cs.DC math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In modern large-scale machine learning applications, the training data are often partitioned and stored on multiple machines. It is customary to employ the "data parallelism" approach, where the aggregated training loss is minimized without moving data across machines. In this paper, we introduce a novel distributed dual formulation for regularized loss minimization problems that can directly handle data parallelism in the distributed setting. This formulation allows us to systematically derive dual coordinate optimization procedures, which we refer to as Distributed Alternating Dual Maximization (DADM). The framework extends earlier studies described in (Boyd et al., 2011; Ma et al., 2015a; Jaggi et al., 2014; Yang, 2013) and has rigorous theoretical analyses. Moreover, with the help of the new formulation, we develop the accelerated version of DADM (Acc-DADM) by generalizing the acceleration technique from (Shalev-Shwartz and Zhang, 2014) to the distributed setting. We also provide theoretical results for the proposed accelerated version, and the new result improves previous ones (Yang, 2013; Ma et al., 2015a) whose runtimes grow linearly with the condition number. Our empirical studies validate our theory and show that our accelerated approach significantly improves the previous state-of-the-art distributed dual coordinate optimization algorithms.
[ { "created": "Wed, 13 Apr 2016 13:33:32 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2017 15:00:24 GMT", "version": "v2" }, { "created": "Fri, 25 Aug 2017 02:42:32 GMT", "version": "v3" } ]
2017-08-28
[ [ "Zheng", "Shun", "" ], [ "Wang", "Jialei", "" ], [ "Xia", "Fen", "" ], [ "Xu", "Wei", "" ], [ "Zhang", "Tong", "" ] ]
In modern large-scale machine learning applications, the training data are often partitioned and stored on multiple machines. It is customary to employ the "data parallelism" approach, where the aggregated training loss is minimized without moving data across machines. In this paper, we introduce a novel distributed dual formulation for regularized loss minimization problems that can directly handle data parallelism in the distributed setting. This formulation allows us to systematically derive dual coordinate optimization procedures, which we refer to as Distributed Alternating Dual Maximization (DADM). The framework extends earlier studies described in (Boyd et al., 2011; Ma et al., 2015a; Jaggi et al., 2014; Yang, 2013) and has rigorous theoretical analyses. Moreover, with the help of the new formulation, we develop the accelerated version of DADM (Acc-DADM) by generalizing the acceleration technique from (Shalev-Shwartz and Zhang, 2014) to the distributed setting. We also provide theoretical results for the proposed accelerated version, and the new result improves previous ones (Yang, 2013; Ma et al., 2015a) whose runtimes grow linearly with the condition number. Our empirical studies validate our theory and show that our accelerated approach significantly improves the previous state-of-the-art distributed dual coordinate optimization algorithms.