id: string (length 9–10)
submitter: string (length 1–64)
authors: string (length 4–20.7k)
title: string (length 4–246)
comments: string (length 1–523)
journal-ref: string (length 4–404)
doi: string (length 11–153)
report-no: string (length 2–254)
categories: string (length 5–98)
license: string (9 classes)
orig_abstract: string (length 14–3.35k)
versions: list (length 1–60)
update_date: string (length 10–10)
authors_parsed: list (length 1–1.35k)
abstract: string (length 11–3.34k)
2212.10819
Wen Xiao
Wen Xiao, Lesly Miculicich, Yang Liu, Pengcheng He, Giuseppe Carenini
Attend to the Right Context: A Plug-and-Play Module for Content-Controllable Summarization
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Content-Controllable Summarization generates summaries focused on the given controlling signals. Due to the lack of large-scale training corpora for the task, we propose a plug-and-play module RelAttn to adapt any general summarizers to the content-controllable summarization task. RelAttn first identifies the relevant content in the source documents, and then makes the model attend to the right context by directly steering the attention weight. We further apply an unsupervised online adaptive parameter searching algorithm to determine the degree of control in the zero-shot setting, while such parameters are learned in the few-shot setting. By applying the module to three backbone summarization models, experiments show that our method effectively improves all the summarizers, and outperforms the prefix-based method and a widely used plug-and-play model in both zero- and few-shot settings. Tellingly, more benefit is observed in the scenarios when more control is needed.
[ { "created": "Wed, 21 Dec 2022 07:17:32 GMT", "version": "v1" } ]
2022-12-22
[ [ "Xiao", "Wen", "" ], [ "Miculicich", "Lesly", "" ], [ "Liu", "Yang", "" ], [ "He", "Pengcheng", "" ], [ "Carenini", "Giuseppe", "" ] ]
Content-Controllable Summarization generates summaries focused on the given controlling signals. Because large-scale training corpora for the task are lacking, we propose a plug-and-play module, RelAttn, to adapt any general summarizer to the content-controllable summarization task. RelAttn first identifies the relevant content in the source documents and then makes the model attend to the right context by directly steering the attention weights. We further apply an unsupervised online adaptive parameter-searching algorithm to determine the degree of control in the zero-shot setting, while these parameters are learned in the few-shot setting. Applying the module to three backbone summarization models, experiments show that our method effectively improves all the summarizers and outperforms the prefix-based method and a widely used plug-and-play model in both zero- and few-shot settings. Tellingly, more benefit is observed in scenarios where more control is needed.
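The attention-steering idea in the RelAttn abstract above can be sketched as blending a model's attention distribution with a relevance distribution over source tokens. This is a minimal numpy sketch under assumed shapes; the function name, the blending rule, and the `alpha` degree-of-control parameter are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def rel_attn(attn, relevance, alpha=0.3):
    """Steer a row-stochastic attention matrix toward relevant tokens.

    attn: (queries, source_tokens) attention weights, rows sum to 1.
    relevance: nonnegative relevance scores per source token.
    alpha: hypothetical degree of control (0 = no steering).
    """
    rel = relevance / relevance.sum()            # normalize to a distribution
    steered = (1 - alpha) * attn + alpha * rel   # shift mass toward relevant tokens
    return steered / steered.sum(axis=-1, keepdims=True)  # keep rows stochastic
```

With `alpha=0` the model's attention is unchanged; larger `alpha` pulls every query's attention toward the identified relevant content.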
2404.00924
Zhiyuan Cheng
Zhiyuan Cheng, Zhaoyi Liu, Tengda Guo, Shiwei Feng, Dongfang Liu, Mingjie Tang, Xiangyu Zhang
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Paper accepted at ICML 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pixel-wise regression tasks (e.g., monocular depth estimation (MDE) and optical flow estimation (OFE)) have been widely involved in our daily life in applications like autonomous driving, augmented reality and video composition. Although certain applications are security-critical or bear societal significance, the adversarial robustness of such models are not sufficiently studied, especially in the black-box scenario. In this work, we introduce the first unified black-box adversarial patch attack framework against pixel-wise regression tasks, aiming to identify the vulnerabilities of these models under query-based black-box attacks. We propose a novel square-based adversarial patch optimization framework and employ probabilistic square sampling and score-based gradient estimation techniques to generate the patch effectively and efficiently, overcoming the scalability problem of previous black-box patch attacks. Our attack prototype, named BadPart, is evaluated on both MDE and OFE tasks, utilizing a total of 7 models. BadPart surpasses 3 baseline methods in terms of both attack performance and efficiency. We also apply BadPart on the Google online service for portrait depth estimation, causing 43.5% relative distance error with 50K queries. State-of-the-art (SOTA) countermeasures cannot defend our attack effectively.
[ { "created": "Mon, 1 Apr 2024 05:01:52 GMT", "version": "v1" }, { "created": "Thu, 23 May 2024 08:28:24 GMT", "version": "v2" }, { "created": "Sat, 25 May 2024 03:04:20 GMT", "version": "v3" } ]
2024-05-28
[ [ "Cheng", "Zhiyuan", "" ], [ "Liu", "Zhaoyi", "" ], [ "Guo", "Tengda", "" ], [ "Feng", "Shiwei", "" ], [ "Liu", "Dongfang", "" ], [ "Tang", "Mingjie", "" ], [ "Zhang", "Xiangyu", "" ] ]
Pixel-wise regression tasks (e.g., monocular depth estimation (MDE) and optical flow estimation (OFE)) are widely used in daily-life applications such as autonomous driving, augmented reality and video composition. Although certain applications are security-critical or bear societal significance, the adversarial robustness of such models is not sufficiently studied, especially in the black-box scenario. In this work, we introduce the first unified black-box adversarial patch attack framework against pixel-wise regression tasks, aiming to identify the vulnerabilities of these models under query-based black-box attacks. We propose a novel square-based adversarial patch optimization framework and employ probabilistic square sampling and score-based gradient estimation techniques to generate the patch effectively and efficiently, overcoming the scalability problem of previous black-box patch attacks. Our attack prototype, named BadPart, is evaluated on both MDE and OFE tasks using a total of 7 models. BadPart surpasses 3 baseline methods in both attack performance and efficiency. We also apply BadPart to the Google online service for portrait depth estimation, causing a 43.5% relative distance error with 50K queries. State-of-the-art (SOTA) countermeasures cannot defend against our attack effectively.
1911.10819
Maxim Berman
Maxim Berman and Matthew B. Blaschko
Discriminative training of conditional random fields with probably submodular constraints
14 pages
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Problems of segmentation, denoising, registration and 3D reconstruction are often addressed with the graph cut algorithm. However, solving an unconstrained graph cut problem is NP-hard. For tractable optimization, pairwise potentials have to fulfill the submodularity inequality. In our learning paradigm, pairwise potentials are created as the dot product of a learned vector w with positive feature vectors. In order to constrain such a model to remain tractable, previous approaches have enforced the weight vector to be positive for pairwise potentials in which the labels differ, and set pairwise potentials to zero in the case that the label remains the same. Such constraints are sufficient to guarantee that the resulting pairwise potentials satisfy the submodularity inequality. However, we show that such an approach unnecessarily restricts the capacity of the learned models. Guaranteeing submodularity for all possible inputs, no matter how improbable, reduces inference error to effectively zero, but increases model error. In contrast, we relax the requirement of guaranteed submodularity to solutions that are probably approximately submodular. We show that the conceptually simple strategy of enforcing submodularity on the training examples guarantees with low sample complexity that test images will also yield submodular pairwise potentials. Results are presented in the binary and muticlass settings, showing substantial improvement from the resulting increased model capacity.
[ { "created": "Mon, 25 Nov 2019 10:38:05 GMT", "version": "v1" } ]
2019-11-26
[ [ "Berman", "Maxim", "" ], [ "Blaschko", "Matthew B.", "" ] ]
Problems of segmentation, denoising, registration and 3D reconstruction are often addressed with the graph cut algorithm. However, solving an unconstrained graph cut problem is NP-hard. For tractable optimization, pairwise potentials have to fulfill the submodularity inequality. In our learning paradigm, pairwise potentials are created as the dot product of a learned vector w with positive feature vectors. To constrain such a model to remain tractable, previous approaches have enforced the weight vector to be positive for pairwise potentials in which the labels differ, and set pairwise potentials to zero when the labels are the same. Such constraints are sufficient to guarantee that the resulting pairwise potentials satisfy the submodularity inequality. However, we show that this approach unnecessarily restricts the capacity of the learned models. Guaranteeing submodularity for all possible inputs, no matter how improbable, reduces inference error to effectively zero but increases model error. In contrast, we relax the requirement of guaranteed submodularity to solutions that are probably approximately submodular. We show that the conceptually simple strategy of enforcing submodularity on the training examples guarantees, with low sample complexity, that test images will also yield submodular pairwise potentials. Results are presented in the binary and multiclass settings, showing substantial improvement from the resulting increased model capacity.
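The contrast described in the abstract above, between guaranteeing submodularity for all possible inputs and only enforcing it on training examples, can be sketched as two checks on a learned weight vector w for potentials of the form w . f with positive features. The function names, shapes, and encoding of "differ vs. same" potentials are illustrative assumptions.

```python
import numpy as np

def guaranteed_submodular(w):
    """Hard constraint: w >= 0 elementwise guarantees w . f >= 0 for
    every positive feature vector f (the restrictive prior approach)."""
    return bool(np.all(w >= 0))

def probably_submodular(w, train_feats):
    """Relaxed check: require w . f >= 0 only on observed training-edge
    feature vectors (rows of train_feats), a sketch of the probably
    approximately submodular idea."""
    return bool(np.all(train_feats @ w >= 0))
```

A weight vector with a negative component fails the hard check yet can still satisfy the relaxed one on every training edge, which is exactly the extra model capacity the abstract describes.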
1904.05980
Issam Damaj
Issam Damaj
Parallel algorithms development for programmable logic devices
47 Pages, 25 Figures, 2 Tables. arXiv admin note: substantial text overlap with arXiv:1904.03756, arXiv:1904.05437
Advances in Engineering Software, Elsevier. 37 (2006) 561-582
10.1016/j.advengsoft.2006.01.009
null
cs.PL cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programmable Logic Devices (PLDs) continue to grow in size and currently contain several millions of gates. At the same time, research effort is going into higher-level hardware synthesis methodologies for reconfigurable computing that can exploit PLD technology. In this paper, we explore the effectiveness and extend one such formal methodology in the design of massively parallel algorithms. We take a step-wise refinement approach to the development of correct reconfigurable hardware circuits from formal specifications. A functional programming notation is used for specifying algorithms and for reasoning about them. The specifications are realised through the use of a combination of function decomposition strategies, data refinement techniques, and off-the-shelf refinements based upon higher-order functions. The off-the-shelf refinements are inspired by the operators of Communicating Sequential Processes (CSP) and map easily to programs in Handel-C (a hardware description language). The Handel-C descriptions are directly compiled into reconfigurable hardware. The practical realisation of this methodology is evidenced by a case studying the matrix multiplication algorithm as it is relatively simple and well known. In this paper, we obtain several hardware implementations with different performance characteristics by applying different refinements to the algorithm. The developed designs are compiled and tested under Celoxica's RC-1000 reconfigurable computer with its 2 million gates Virtex-E FPGA. Performance analysis and evaluation of these implementations are included.
[ { "created": "Mon, 1 Apr 2019 13:48:08 GMT", "version": "v1" } ]
2019-05-24
[ [ "Damaj", "Issam", "" ] ]
Programmable Logic Devices (PLDs) continue to grow in size and currently contain several million gates. At the same time, research effort is going into higher-level hardware synthesis methodologies for reconfigurable computing that can exploit PLD technology. In this paper, we explore the effectiveness of, and extend, one such formal methodology in the design of massively parallel algorithms. We take a step-wise refinement approach to the development of correct reconfigurable hardware circuits from formal specifications. A functional programming notation is used for specifying algorithms and for reasoning about them. The specifications are realised through a combination of function decomposition strategies, data refinement techniques, and off-the-shelf refinements based upon higher-order functions. The off-the-shelf refinements are inspired by the operators of Communicating Sequential Processes (CSP) and map easily to programs in Handel-C (a hardware description language). The Handel-C descriptions are directly compiled into reconfigurable hardware. The practical realisation of this methodology is evidenced by a case study of the matrix multiplication algorithm, chosen because it is relatively simple and well known. We obtain several hardware implementations with different performance characteristics by applying different refinements to the algorithm. The developed designs are compiled and tested on Celoxica's RC-1000 reconfigurable computer with its 2-million-gate Virtex-E FPGA. Performance analysis and evaluation of these implementations are included.
2011.05045
Mattia Lecci
Mattia Lecci, Matteo Drago, Andrea Zanella, Michele Zorzi
Exploiting Scheduled Access Features of mmWave WLANs for Periodic Traffic Sources
8 pages, 6 figures, 2 algorithms. This paper was submitted to IEEE MedComNet 2021
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many current and future multimedia and industrial applications, like video streaming, eXtended Reality or remote robot control, are characterized by periodic data transmissions with strict latency and reliability constraints. In an effort to meet the stringent demand of such traffic sources, the WiGig standards support a contention-free channel access mechanism, named Service Period, that makes it possible to allocate dedicated time intervals to certain wireless stations. However, the standard only covers the fundamental aspects that ensure interoperability, while the actual schedule logic is left to vendors. In this paper, we propose two algorithms for joint admission control and scheduling of periodic traffic streams with contrasting performance objectives, specifically a simple scheduler and a max-min fair scheduler. The schemes are compared in two different scenarios, in order to characterize and highlight some fundamental trade-offs. As expected from their design principles, the simple scheduler tends to trade acceptance rate for resource availability, contrary to the max-min fair scheduler, giving to implementers a clear performance trade-off, although performance cannot be balanced by means of a tunable parameter.
[ { "created": "Tue, 10 Nov 2020 11:14:22 GMT", "version": "v1" }, { "created": "Tue, 16 Mar 2021 11:05:05 GMT", "version": "v2" } ]
2021-03-17
[ [ "Lecci", "Mattia", "" ], [ "Drago", "Matteo", "" ], [ "Zanella", "Andrea", "" ], [ "Zorzi", "Michele", "" ] ]
Many current and future multimedia and industrial applications, like video streaming, eXtended Reality or remote robot control, are characterized by periodic data transmissions with strict latency and reliability constraints. In an effort to meet the stringent demands of such traffic sources, the WiGig standards support a contention-free channel access mechanism, named Service Period, that makes it possible to allocate dedicated time intervals to certain wireless stations. However, the standard only covers the fundamental aspects that ensure interoperability, while the actual scheduling logic is left to vendors. In this paper, we propose two algorithms for joint admission control and scheduling of periodic traffic streams with contrasting performance objectives, specifically a simple scheduler and a max-min fair scheduler. The schemes are compared in two different scenarios in order to characterize and highlight some fundamental trade-offs. As expected from their design principles, the simple scheduler tends to trade acceptance rate for resource availability, contrary to the max-min fair scheduler, giving implementers a clear performance trade-off, although performance cannot be balanced by means of a tunable parameter.
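The admission-control side of the joint admission/scheduling problem in the abstract above can be illustrated with a toy greedy rule: admit a periodic stream only while the summed duty cycles (duration/period) still fit the channel. This is a hypothetical sketch for intuition, not either of the paper's two algorithms.

```python
def admit_periodic(streams, capacity=1.0):
    """Greedy admission control for periodic streams (hypothetical sketch).

    streams: list of (name, duration, period) tuples, in arrival order.
    capacity: fraction of channel time available for Service Periods.
    A stream is admitted when its duty cycle duration/period fits in the
    remaining capacity; otherwise it is rejected.
    """
    admitted, load = [], 0.0
    for name, duration, period in streams:
        duty = duration / period          # fraction of channel time requested
        if load + duty <= capacity:
            admitted.append(name)
            load += duty
    return admitted
```

A max-min fair variant would instead shrink allocations of already-admitted streams to fit newcomers, which is where the acceptance-rate versus resource-availability trade-off appears.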
1604.07099
Saeed Mehrabi
Therese Biedl, Timothy M. Chan, Stephanie Lee, Saeed Mehrabi, Fabrizio Montecchiani, and Hamideh Vosoughpour
On Guarding Orthogonal Polygons with Sliding Cameras
15 pages
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A sliding camera inside an orthogonal polygon $P$ is a point guard that travels back and forth along an orthogonal line segment $\gamma$ in $P$. The sliding camera $g$ can see a point $p$ in $P$ if the perpendicular from $p$ onto $\gamma$ is inside $P$. In this paper, we give the first constant-factor approximation algorithm for the problem of guarding $P$ with the minimum number of sliding cameras. Next, we show that the sliding guards problem is linear-time solvable if the (suitably defined) dual graph of the polygon has bounded treewidth. Finally, we study art gallery theorems for sliding cameras, thus, give upper and lower bounds in terms of the number of guards needed relative to the number of vertices $n$.
[ { "created": "Mon, 25 Apr 2016 00:16:25 GMT", "version": "v1" } ]
2016-04-26
[ [ "Biedl", "Therese", "" ], [ "Chan", "Timothy M.", "" ], [ "Lee", "Stephanie", "" ], [ "Mehrabi", "Saeed", "" ], [ "Montecchiani", "Fabrizio", "" ], [ "Vosoughpour", "Hamideh", "" ] ]
A sliding camera inside an orthogonal polygon $P$ is a point guard that travels back and forth along an orthogonal line segment $\gamma$ in $P$. The sliding camera $g$ can see a point $p$ in $P$ if the perpendicular from $p$ onto $\gamma$ is inside $P$. In this paper, we give the first constant-factor approximation algorithm for the problem of guarding $P$ with the minimum number of sliding cameras. Next, we show that the sliding-guards problem is linear-time solvable if the (suitably defined) dual graph of the polygon has bounded treewidth. Finally, we study art gallery theorems for sliding cameras, giving upper and lower bounds on the number of guards needed relative to the number of vertices $n$.
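The visibility rule in the abstract above (a sliding camera sees p when the perpendicular from p onto the camera's track lies inside P) can be sketched for a horizontal track. The `inside` containment predicate and the sampling-based check of the perpendicular are illustrative assumptions.

```python
def sees(gamma, p, inside, samples=50):
    """Visibility test for a horizontal sliding camera (hypothetical sketch).

    gamma: ((x1, y), (x2, y)) horizontal track segment.
    p: (px, py) query point.
    inside: predicate reporting whether a point lies in the polygon P.
    The camera sees p if the perpendicular from p onto gamma stays in P,
    approximated by sampling points along that perpendicular.
    """
    (x1, y0), (x2, y0b) = gamma
    assert y0 == y0b, "camera track must be axis-aligned (horizontal here)"
    px, py = p
    if not (min(x1, x2) <= px <= max(x1, x2)):
        return False  # the perpendicular foot misses the track entirely
    return all(inside((px, y0 + (py - y0) * t / samples))
               for t in range(samples + 1))
```

For orthogonal polygons the sampled vertical segment check could be replaced by exact segment-in-polygon tests; sampling keeps the sketch short.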
1810.00159
Jun Jin
Jun Jin, Laura Petrich, Masood Dehghan, Zichen Zhang, Martin Jagersand
Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach
Accepted in ICRA 2019
null
10.1109/ICRA.2019.8793649
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a robot eye-hand coordination learning method that can directly learn visual task specification by watching human demonstrations. Task specification is represented as a task function, which is learned using inverse reinforcement learning(IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedbacks in an uncalibrated visual servoing(UVS) controller designed for the execution phase. Our proposed method can directly learn from raw videos, which removes the need for hand-engineered task specification. It can also provide task interpretability by directly approximating the task function. Besides, benefiting from the use of a traditional UVS controller, our training process is efficient and the learned policy is independent from a particular robot platform. Various experiments were designed to show that, for a certain DOF task, our method can adapt to task/environment variances in target positions, backgrounds, illuminations, and occlusions without prior retraining.
[ { "created": "Sat, 29 Sep 2018 06:34:40 GMT", "version": "v1" }, { "created": "Wed, 27 Feb 2019 06:55:26 GMT", "version": "v2" } ]
2020-11-20
[ [ "Jin", "Jun", "" ], [ "Petrich", "Laura", "" ], [ "Dehghan", "Masood", "" ], [ "Zhang", "Zichen", "" ], [ "Jagersand", "Martin", "" ] ]
We present a robot eye-hand coordination learning method that can directly learn visual task specifications by watching human demonstrations. A task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our proposed method can directly learn from raw videos, which removes the need for hand-engineered task specification. It also provides task interpretability by directly approximating the task function. Moreover, benefiting from the use of a traditional UVS controller, our training process is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a given DOF task, our method can adapt to task/environment variations in target positions, backgrounds, illuminations, and occlusions without retraining.
1810.01107
Gabriele Spini
Thomas Attema and Emiliano Mancini and Gabriele Spini and Mark Abspoel and Jan de Gier and Serge Fehr and Thijs Veugen and Maran van Heesch and Dani\"el Worm and Andrea De Luca and Ronald Cramer and Peter M.A. Sloot
A New Approach to Privacy-Preserving Clinical Decision Support Systems
15 pages, 4 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Clinical decision support systems (CDSS) are a category of health information technologies that can assist clinicians to choose optimal treatments. These support systems are based on clinical trials and expert knowledge; however, the amount of data available to these systems is limited. For this reason, CDSSs could be significantly improved by using the knowledge obtained by treating patients. This knowledge is mainly contained in patient records, whose usage is restricted due to privacy and confidentiality constraints. Methods: A treatment effectiveness measure, containing valuable information for treatment prescription, was defined and a method to extract this measure from patient records was developed. This method uses an advanced cryptographic technology, known as secure Multiparty Computation (henceforth referred to as MPC), to preserve the privacy of the patient records and the confidentiality of the clinicians' decisions. Results: Our solution enables to compute the effectiveness measure of a treatment based on patient records, while preserving privacy. Moreover, clinicians are not burdened with the computational and communication costs introduced by the privacy-preserving techniques that are used. Our system is able to compute the effectiveness of 100 treatments for a specific patient in less than 24 minutes, querying a database containing 20,000 patient records. Conclusion: This paper presents a novel and efficient clinical decision support system, that harnesses the potential and insights acquired from treatment data, while preserving the privacy of patient records and the confidentiality of clinician decisions.
[ { "created": "Tue, 2 Oct 2018 08:13:20 GMT", "version": "v1" }, { "created": "Mon, 3 Dec 2018 11:41:11 GMT", "version": "v2" } ]
2018-12-04
[ [ "Attema", "Thomas", "" ], [ "Mancini", "Emiliano", "" ], [ "Spini", "Gabriele", "" ], [ "Abspoel", "Mark", "" ], [ "de Gier", "Jan", "" ], [ "Fehr", "Serge", "" ], [ "Veugen", "Thijs", "" ], [ "van Heesch", "Maran", "" ], [ "Worm", "Daniël", "" ], [ "De Luca", "Andrea", "" ], [ "Cramer", "Ronald", "" ], [ "Sloot", "Peter M. A.", "" ] ]
Background: Clinical decision support systems (CDSS) are a category of health information technologies that can assist clinicians in choosing optimal treatments. These support systems are based on clinical trials and expert knowledge; however, the amount of data available to them is limited. For this reason, CDSSs could be significantly improved by using the knowledge obtained by treating patients. This knowledge is mainly contained in patient records, whose usage is restricted due to privacy and confidentiality constraints. Methods: A treatment effectiveness measure, containing valuable information for treatment prescription, was defined, and a method to extract this measure from patient records was developed. This method uses an advanced cryptographic technology, known as secure Multiparty Computation (henceforth referred to as MPC), to preserve the privacy of the patient records and the confidentiality of the clinicians' decisions. Results: Our solution makes it possible to compute the effectiveness measure of a treatment from patient records while preserving privacy. Moreover, clinicians are not burdened with the computational and communication costs introduced by the privacy-preserving techniques. Our system can compute the effectiveness of 100 treatments for a specific patient in less than 24 minutes, querying a database containing 20,000 patient records. Conclusion: This paper presents a novel and efficient clinical decision support system that harnesses the potential of, and insights acquired from, treatment data while preserving the privacy of patient records and the confidentiality of clinician decisions.
1112.1730
Samir Medina Perlaza
Samir M. Perlaza and Hamidou Tembin\'e and Samson Lasaulce and M\'erouane Debbah
Quality-Of-Service Provisioning in Decentralized Networks: A Satisfaction Equilibrium Approach
Article accepted for publication in IEEE Journal on Selected Topics in Signal Processing, special issue in Game Theory in Signal Processing. 16 pages, 6 figures
null
10.1109/JSTSP.2011.2180507
null
cs.IT cs.GT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a particular game formulation and its corresponding notion of equilibrium, namely the satisfaction form (SF) and the satisfaction equilibrium (SE). A game in SF models the case where players are uniquely interested in the satisfaction of some individual performance constraints, instead of individual performance optimization. Under this formulation, the notion of equilibrium corresponds to the situation where all players can simultaneously satisfy their individual constraints. The notion of SE, models the problem of QoS provisioning in decentralized self-configuring networks. Here, radio devices are satisfied if they are able to provide the requested QoS. Within this framework, the concept of SE is formalized for both pure and mixed strategies considering finite sets of players and actions. In both cases, sufficient conditions for the existence and uniqueness of the SE are presented. When multiple SE exist, we introduce the idea of effort or cost of satisfaction and we propose a refinement of the SE, namely the efficient SE (ESE). At the ESE, all players adopt the action which requires the lowest effort for satisfaction. A learning method that allows radio devices to achieve a SE in pure strategies in finite time and requiring only one-bit feedback is also presented. Finally, a power control game in the interference channel is used to highlight the advantages of modeling QoS problems following the notion of SE rather than other equilibrium concepts, e.g., generalized Nash equilibrium.
[ { "created": "Wed, 7 Dec 2011 23:21:38 GMT", "version": "v1" } ]
2015-06-03
[ [ "Perlaza", "Samir M.", "" ], [ "Tembiné", "Hamidou", "" ], [ "Lasaulce", "Samson", "" ], [ "Debbah", "Mérouane", "" ] ]
This paper introduces a particular game formulation and its corresponding notion of equilibrium, namely the satisfaction form (SF) and the satisfaction equilibrium (SE). A game in SF models the case where players are uniquely interested in the satisfaction of some individual performance constraints, instead of individual performance optimization. Under this formulation, the notion of equilibrium corresponds to the situation where all players can simultaneously satisfy their individual constraints. The notion of SE models the problem of QoS provisioning in decentralized self-configuring networks: here, radio devices are satisfied if they are able to provide the requested QoS. Within this framework, the concept of SE is formalized for both pure and mixed strategies, considering finite sets of players and actions. In both cases, sufficient conditions for the existence and uniqueness of the SE are presented. When multiple SEs exist, we introduce the idea of an effort or cost of satisfaction and propose a refinement of the SE, namely the efficient SE (ESE). At the ESE, all players adopt the action that requires the lowest effort for satisfaction. A learning method that allows radio devices to achieve an SE in pure strategies in finite time, requiring only one-bit feedback, is also presented. Finally, a power control game in the interference channel is used to highlight the advantages of modeling QoS problems with the notion of SE rather than other equilibrium concepts, e.g., the generalized Nash equilibrium.
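The satisfaction equilibrium defined in the abstract above reduces to a one-line check: a joint action profile is an SE when every player's individual constraint holds simultaneously. The dict-based encoding of players, actions, and constraint predicates is an illustrative assumption.

```python
def is_satisfaction_equilibrium(profile, constraints):
    """profile: dict player -> chosen action.
    constraints: dict player -> predicate over the whole profile
    (e.g., 'my QoS is met given what everyone else plays').
    An SE holds iff every player's constraint is satisfied at once."""
    return all(pred(profile) for pred in constraints.values())
```

In the paper's power-control setting, each predicate would encode a per-device QoS requirement as a function of all devices' transmit powers.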
2111.00969
Xudong Xu
Xudong Xu, Xingang Pan, Dahua Lin, Bo Dai
Generative Occupancy Fields for 3D Surface-Aware Image Synthesis
Accepted to NeurIPS2021. We propose Generative Occupancy Fields(GOF), a 3D-aware generative model which could synthesize realistic images with 3D consistency and simultaneously learn compact object surfaces
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of generative radiance fields has significantly promoted the development of 3D-aware image synthesis. The cumulative rendering process in radiance fields makes training these generative models much easier since gradients are distributed over the entire volume, but leads to diffused object surfaces. In the meantime, compared to radiance fields occupancy representations could inherently ensure deterministic surfaces. However, if we directly apply occupancy representations to generative models, during training they will only receive sparse gradients located on object surfaces and eventually suffer from the convergence problem. In this paper, we propose Generative Occupancy Fields (GOF), a novel model based on generative radiance fields that can learn compact object surfaces without impeding its training convergence. The key insight of GOF is a dedicated transition from the cumulative rendering in radiance fields to rendering with only the surface points as the learned surface gets more and more accurate. In this way, GOF combines the merits of two representations in a unified framework. In practice, the training-time transition of start from radiance fields and march to occupancy representations is achieved in GOF by gradually shrinking the sampling region in its rendering process from the entire volume to a minimal neighboring region around the surface. Through comprehensive experiments on multiple datasets, we demonstrate that GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces. Code, models, and demo videos are available at https://sheldontsui.github.io/projects/GOF
[ { "created": "Mon, 1 Nov 2021 14:20:43 GMT", "version": "v1" } ]
2021-11-02
[ [ "Xu", "Xudong", "" ], [ "Pan", "Xingang", "" ], [ "Lin", "Dahua", "" ], [ "Dai", "Bo", "" ] ]
The advent of generative radiance fields has significantly promoted the development of 3D-aware image synthesis. The cumulative rendering process in radiance fields makes training these generative models much easier, since gradients are distributed over the entire volume, but it leads to diffused object surfaces. Meanwhile, compared to radiance fields, occupancy representations can inherently ensure deterministic surfaces. However, if we directly apply occupancy representations to generative models, during training they will only receive sparse gradients located on object surfaces and will eventually suffer from convergence problems. In this paper, we propose Generative Occupancy Fields (GOF), a novel model based on generative radiance fields that can learn compact object surfaces without impeding training convergence. The key insight of GOF is a dedicated transition from cumulative rendering in radiance fields to rendering with only the surface points as the learned surface becomes more and more accurate. In this way, GOF combines the merits of the two representations in a unified framework. In practice, the training-time transition from radiance fields to occupancy representations is achieved in GOF by gradually shrinking the sampling region in its rendering process from the entire volume to a minimal neighboring region around the surface. Through comprehensive experiments on multiple datasets, we demonstrate that GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces. Code, models, and demo videos are available at https://sheldontsui.github.io/projects/GOF
2209.10071
Kangdi Shi
Kangdi Shi (1), Muhammad Alrabeiah (2) and Jun Chen (1) ((1) Department of Electrical and Computer Engineering, McMaster University, Hamilton, Canada, (2) Electrical Engineering Department, King Saud University, Saudi Arabia.)
Progressive with Purpose: Guiding Progressive Inpainting DNNs through Context and Structure
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of deep learning in the past decade has significantly helped advance image inpainting. Although achieving promising performance, deep learning-based inpainting algorithms still struggle with the distortion caused by the fusion of structural and contextual features, which are commonly obtained from, respectively, deep and shallow layers of a convolutional encoder. Motivated by this observation, we propose a novel progressive inpainting network that maintains the structural and contextual integrity of a processed image. More specifically, inspired by the Gaussian and Laplacian pyramids, the core of the proposed network is a feature extraction module named GLE. Stacking GLE modules enables the network to extract image features from different image frequency components. This ability is important to maintain structural and contextual integrity, for high frequency components correspond to structural information while low frequency components correspond to contextual information. The proposed network utilizes the GLE features to progressively fill in missing regions in a corrupted image in an iterative manner. Our benchmarking experiments demonstrate that the proposed method achieves clear improvement in performance over many state-of-the-art inpainting algorithms.
[ { "created": "Wed, 21 Sep 2022 02:15:02 GMT", "version": "v1" }, { "created": "Tue, 3 Jan 2023 22:15:54 GMT", "version": "v2" } ]
2023-01-05
[ [ "Shi", "Kangdi", "" ], [ "Alrabeiah", "Muhammad", "" ], [ "Chen", "Jun", "" ] ]
The advent of deep learning in the past decade has significantly helped advance image inpainting. Although achieving promising performance, deep learning-based inpainting algorithms still struggle with the distortion caused by the fusion of structural and contextual features, which are commonly obtained from, respectively, deep and shallow layers of a convolutional encoder. Motivated by this observation, we propose a novel progressive inpainting network that maintains the structural and contextual integrity of a processed image. More specifically, inspired by the Gaussian and Laplacian pyramids, the core of the proposed network is a feature extraction module named GLE. Stacking GLE modules enables the network to extract image features from different image frequency components. This ability is important to maintain structural and contextual integrity, for high frequency components correspond to structural information while low frequency components correspond to contextual information. The proposed network utilizes the GLE features to progressively fill in missing regions in a corrupted image in an iterative manner. Our benchmarking experiments demonstrate that the proposed method achieves clear improvement in performance over many state-of-the-art inpainting algorithms.
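The frequency separation that motivates the GLE module above can be illustrated with one Gaussian/Laplacian-pyramid-style level. This is a minimal sketch, assuming a simple box blur stands in for the Gaussian filter; the function name and kernel size are illustrative, not the paper's actual module. The low-frequency part carries contextual information, the high-frequency residual carries structural information, and their sum reconstructs the input exactly.

```python
import numpy as np

def split_frequencies(img, k=3):
    """Separate an image into a low-frequency (contextual) component via a
    k x k box blur and a high-frequency (structural) residual, mimicking
    one Gaussian/Laplacian pyramid level. low + high == img exactly."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):                       # accumulate the k x k window
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low                          # Laplacian-style residual
    return low, high
```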
1402.3985
Payaswini P
P Payaswini, D. H Manjaiah
Challenges and issues in 4G Networks Mobility Management
Wrongly uploaded file
International Journal of Computer Trends and Technology (IJCTT) volume 4 Issue 5 May 2013
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless broadband technology is now in motion to provide higher data rates, wider coverage, and improved mobility. Towards this, the 4G network is an integration of various wireless technologies and is expected to provide seamless mobility. Moreover, 4G networks will be entirely packet-switched systems based on the IP protocol. One of the research challenges for 4G networks is the design of intelligent mobility management techniques that take advantage of IP-based technologies to achieve global roaming among various access technologies. Hence, Mobile IPv6 is considered to be one of the key technologies for the integration of heterogeneous networks. However, the original Mobile IPv6 does not support fast handover, which is an essential function for mobile networks. A number of research groups are working towards developing a common protocol to enable seamless mobility. In this paper, we identify and explore the different issues and challenges related to mobility management in 4G networks.
[ { "created": "Mon, 17 Feb 2014 12:54:34 GMT", "version": "v1" }, { "created": "Sun, 12 Dec 2021 08:31:32 GMT", "version": "v2" } ]
2021-12-14
[ [ "Payaswini", "P", "" ], [ "Manjaiah", "D. H", "" ] ]
Wireless broadband technology is now in motion to provide higher data rates, wider coverage, and improved mobility. Towards this, the 4G network is an integration of various wireless technologies and is expected to provide seamless mobility. Moreover, 4G networks will be entirely packet-switched systems based on the IP protocol. One of the research challenges for 4G networks is the design of intelligent mobility management techniques that take advantage of IP-based technologies to achieve global roaming among various access technologies. Hence, Mobile IPv6 is considered to be one of the key technologies for the integration of heterogeneous networks. However, the original Mobile IPv6 does not support fast handover, which is an essential function for mobile networks. A number of research groups are working towards developing a common protocol to enable seamless mobility. In this paper, we identify and explore the different issues and challenges related to mobility management in 4G networks.
1903.06741
Xi Xiong
Xi Xiong and Erdong Xiao and Li Jin
Analysis of a Stochastic Model for Coordinated Platooning of Heavy-duty Vehicles
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Platooning of heavy-duty vehicles (HDVs) is a key component of smart and connected highways and is expected to bring remarkable fuel savings and emission reduction. In this paper, we study the coordination of HDV platooning on a highway section. We model the arrival of HDVs as a Poisson process. Multiple HDVs are merged into one platoon if their headways are below a given threshold. The merging is done by accelerating the following vehicles to catch up with the leading ones. We characterize the following random variables: (i) platoon size, (ii) headway between platoons, and (iii) travel time increment due to platoon formation. We formulate and solve an optimization problem to determine the headway threshold for platooning that leads to minimal cost (time plus fuel). We also compare our results with that from Simulation of Urban MObility (SUMO).
[ { "created": "Fri, 15 Mar 2019 18:41:15 GMT", "version": "v1" }, { "created": "Fri, 27 Sep 2019 23:14:18 GMT", "version": "v2" } ]
2019-10-01
[ [ "Xiong", "Xi", "" ], [ "Xiao", "Erdong", "" ], [ "Jin", "Li", "" ] ]
Platooning of heavy-duty vehicles (HDVs) is a key component of smart and connected highways and is expected to bring remarkable fuel savings and emission reduction. In this paper, we study the coordination of HDV platooning on a highway section. We model the arrival of HDVs as a Poisson process. Multiple HDVs are merged into one platoon if their headways are below a given threshold. The merging is done by accelerating the following vehicles to catch up with the leading ones. We characterize the following random variables: (i) platoon size, (ii) headway between platoons, and (iii) travel time increment due to platoon formation. We formulate and solve an optimization problem to determine the headway threshold for platooning that leads to minimal cost (time plus fuel). We also compare our results with that from Simulation of Urban MObility (SUMO).
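Under the model stated in the abstract above (Poisson arrivals, merge when the headway is below a threshold), consecutive headways are i.i.d. exponential, so platoon sizes are geometrically distributed and the mean platoon size should approach exp(rate × threshold). A small simulation can check this property; the helper name and defaults are assumptions for illustration, and the paper's full cost optimization (time plus fuel) is not reproduced here.

```python
import numpy as np

def mean_platoon_size(rate, threshold, n_vehicles=200_000, seed=0):
    """Simulate Poisson HDV arrivals (i.i.d. exponential headways) and
    merge consecutive vehicles whose headway is below `threshold`.
    Each headway >= threshold starts a new platoon, so the empirical
    mean platoon size should approach exp(rate * threshold)."""
    rng = np.random.default_rng(seed)
    headways = rng.exponential(1.0 / rate, size=n_vehicles - 1)
    n_platoons = 1 + int((headways >= threshold).sum())
    return n_vehicles / n_platoons
```

With `threshold = 0` no merging happens and every platoon has size one, which is a quick sanity check on the grouping rule.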
1809.00421
Yang Liu
Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang
Hierarchically Learned View-Invariant Representations for Cross-View Action Recognition
Published in IEEE Transactions on Circuits and Systems for Video Technology, codes can be found at https://yangliu9208.github.io/JSRDA/
null
10.1109/TCSVT.2018.2868123
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing human actions from varied views is challenging due to huge appearance variations in different views. The key to this problem is to learn discriminant view-invariant representations generalizing well across views. In this paper, we address this problem by learning view-invariant representations hierarchically using a novel method, referred to as Joint Sparse Representation and Distribution Adaptation (JSRDA). To obtain robust and informative feature representations, we first incorporate a sample-affinity matrix into the marginalized stacked denoising Autoencoder (mSDA) to obtain shared features, which are then combined with the private features. In order to make the feature representations of videos across views transferable, we then learn a transferable dictionary pair simultaneously from pairs of videos taken at different views to encourage each action video across views to have the same sparse representation. However, the distribution difference across views still exists because a unified subspace where the sparse representations of one action across views are the same may not exist when the view difference is large. Therefore, we propose a novel unsupervised distribution adaptation method that learns a set of projections that project the source and target views data into respective low-dimensional subspaces where the marginal and conditional distribution differences are reduced simultaneously. As a result, the finally learned feature representation is view-invariant and robust to substantial distribution differences across views, even when the view difference is large. Experimental results on four multiview datasets show that our approach outperforms the state-of-the-art approaches.
[ { "created": "Mon, 3 Sep 2018 01:31:05 GMT", "version": "v1" }, { "created": "Wed, 18 Sep 2019 08:14:46 GMT", "version": "v2" } ]
2019-09-19
[ [ "Liu", "Yang", "" ], [ "Lu", "Zhaoyang", "" ], [ "Li", "Jing", "" ], [ "Yang", "Tao", "" ] ]
Recognizing human actions from varied views is challenging due to huge appearance variations in different views. The key to this problem is to learn discriminant view-invariant representations generalizing well across views. In this paper, we address this problem by learning view-invariant representations hierarchically using a novel method, referred to as Joint Sparse Representation and Distribution Adaptation (JSRDA). To obtain robust and informative feature representations, we first incorporate a sample-affinity matrix into the marginalized stacked denoising Autoencoder (mSDA) to obtain shared features, which are then combined with the private features. In order to make the feature representations of videos across views transferable, we then learn a transferable dictionary pair simultaneously from pairs of videos taken at different views to encourage each action video across views to have the same sparse representation. However, the distribution difference across views still exists because a unified subspace where the sparse representations of one action across views are the same may not exist when the view difference is large. Therefore, we propose a novel unsupervised distribution adaptation method that learns a set of projections that project the source and target views data into respective low-dimensional subspaces where the marginal and conditional distribution differences are reduced simultaneously. As a result, the finally learned feature representation is view-invariant and robust to substantial distribution differences across views, even when the view difference is large. Experimental results on four multiview datasets show that our approach outperforms the state-of-the-art approaches.
2009.10270
Davis Liang
Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
Embedding-based Zero-shot Retrieval through Query Generation
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Passage retrieval addresses the problem of locating relevant passages, usually from a large corpus, given a query. In practice, lexical term-matching algorithms like BM25 are popular choices for retrieval owing to their efficiency. However, term-based matching algorithms often miss relevant passages that have no lexical overlap with the query and cannot be finetuned to downstream datasets. In this work, we consider the embedding-based two-tower architecture as our neural retrieval model. Since labeled data can be scarce and because neural retrieval models require vast amounts of data to train, we propose a novel method for generating synthetic training data for retrieval. Our system produces remarkable results, significantly outperforming BM25 on 5 out of 6 datasets tested, by an average of 2.45 points for Recall@1. In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
[ { "created": "Tue, 22 Sep 2020 01:54:27 GMT", "version": "v1" } ]
2020-09-23
[ [ "Liang", "Davis", "" ], [ "Xu", "Peng", "" ], [ "Shakeri", "Siamak", "" ], [ "Santos", "Cicero Nogueira dos", "" ], [ "Nallapati", "Ramesh", "" ], [ "Huang", "Zhiheng", "" ], [ "Xiang", "Bing", "" ] ]
Passage retrieval addresses the problem of locating relevant passages, usually from a large corpus, given a query. In practice, lexical term-matching algorithms like BM25 are popular choices for retrieval owing to their efficiency. However, term-based matching algorithms often miss relevant passages that have no lexical overlap with the query and cannot be finetuned to downstream datasets. In this work, we consider the embedding-based two-tower architecture as our neural retrieval model. Since labeled data can be scarce and because neural retrieval models require vast amounts of data to train, we propose a novel method for generating synthetic training data for retrieval. Our system produces remarkable results, significantly outperforming BM25 on 5 out of 6 datasets tested, by an average of 2.45 points for Recall@1. In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
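The scoring step of an embedding-based two-tower retriever like the one above can be sketched as cosine similarity between a query embedding and precomputed passage embeddings. This is a minimal illustration assuming the towers have already produced the vectors; the function name and top-k interface are assumptions, not the paper's system.

```python
import numpy as np

def two_tower_retrieve(query_vec, passage_vecs, top_k=3):
    """Rank passages by cosine similarity to the query embedding and
    return the indices of the top_k best matches. Assumes both towers
    have already encoded their inputs into fixed-size vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    P = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = P @ q                      # cosine similarity per passage
    return np.argsort(-scores)[:top_k]
```

A query embedding identical to one of the passage embeddings should rank that passage first, which is an easy sanity check on the normalization.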
2203.06918
Daeyoung Kim
Daeyoung Kim, Seongsu Bae, Seungho Kim, Edward Choi
Uncertainty-Aware Text-to-Program for Question Answering on Structured Electronic Health Records
In Proceedings of the Conference on Health, Inference, and Learning (CHIL 2022)
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Question Answering on Electronic Health Records (EHR-QA) has a significant impact on the healthcare domain, and it is being actively studied. Previous research on structured EHR-QA focuses on converting natural language queries into query language such as SQL or SPARQL (NLQ2Query), so the problem scope is limited to pre-defined data types by the specific query language. In order to expand the EHR-QA task beyond this limitation to handle multi-modal medical data and solve complex inference in the future, more primitive systemic language is needed. In this paper, we design the program-based model (NLQ2Program) for EHR-QA as the first step towards the future direction. We tackle MIMICSPARQL*, the graph-based EHR-QA dataset, via a program-based approach in a semi-supervised manner in order to overcome the absence of gold programs. Without the gold program, our proposed model shows comparable performance to the previous state-of-the-art model, which is an NLQ2Query model (0.9% gain). In addition, for a reliable EHR-QA model, we apply the uncertainty decomposition method to measure the ambiguity in the input question. We empirically confirmed data uncertainty is most indicative of the ambiguity in the input question.
[ { "created": "Mon, 14 Mar 2022 08:12:16 GMT", "version": "v1" }, { "created": "Fri, 15 Apr 2022 03:08:25 GMT", "version": "v2" } ]
2022-04-18
[ [ "Kim", "Daeyoung", "" ], [ "Bae", "Seongsu", "" ], [ "Kim", "Seungho", "" ], [ "Choi", "Edward", "" ] ]
Question Answering on Electronic Health Records (EHR-QA) has a significant impact on the healthcare domain, and it is being actively studied. Previous research on structured EHR-QA focuses on converting natural language queries into query language such as SQL or SPARQL (NLQ2Query), so the problem scope is limited to pre-defined data types by the specific query language. In order to expand the EHR-QA task beyond this limitation to handle multi-modal medical data and solve complex inference in the future, more primitive systemic language is needed. In this paper, we design the program-based model (NLQ2Program) for EHR-QA as the first step towards the future direction. We tackle MIMICSPARQL*, the graph-based EHR-QA dataset, via a program-based approach in a semi-supervised manner in order to overcome the absence of gold programs. Without the gold program, our proposed model shows comparable performance to the previous state-of-the-art model, which is an NLQ2Query model (0.9% gain). In addition, for a reliable EHR-QA model, we apply the uncertainty decomposition method to measure the ambiguity in the input question. We empirically confirmed data uncertainty is most indicative of the ambiguity in the input question.
1710.06677
Feras Dayoub
Dimity Miller, Lachlan Nicholson, Feras Dayoub, Niko S\"underhauf
Dropout Sampling for Robust Object Detection in Open-Set Conditions
to appear in IEEE International Conference on Robotics and Automation 2018 (ICRA 2018)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dropout Variational Inference, or Dropout Sampling, has been recently proposed as an approximation technique for Bayesian Deep Learning and evaluated for image classification and regression tasks. This paper investigates the utility of Dropout Sampling for object detection for the first time. We demonstrate how label uncertainty can be extracted from a state-of-the-art object detection system via Dropout Sampling. We evaluate this approach on a large synthetic dataset of 30,000 images, and a real-world dataset captured by a mobile robot in a versatile campus environment. We show that this uncertainty can be utilized to increase object detection performance under the open-set conditions that are typically encountered in robotic vision. A Dropout Sampling network is shown to achieve a 12.3% increase in recall (for the same precision score as a standard network) and a 15.1% increase in precision (for the same recall score as the standard network).
[ { "created": "Wed, 18 Oct 2017 11:16:53 GMT", "version": "v1" }, { "created": "Wed, 18 Apr 2018 06:10:02 GMT", "version": "v2" } ]
2018-04-19
[ [ "Miller", "Dimity", "" ], [ "Nicholson", "Lachlan", "" ], [ "Dayoub", "Feras", "" ], [ "Sünderhauf", "Niko", "" ] ]
Dropout Variational Inference, or Dropout Sampling, has been recently proposed as an approximation technique for Bayesian Deep Learning and evaluated for image classification and regression tasks. This paper investigates the utility of Dropout Sampling for object detection for the first time. We demonstrate how label uncertainty can be extracted from a state-of-the-art object detection system via Dropout Sampling. We evaluate this approach on a large synthetic dataset of 30,000 images, and a real-world dataset captured by a mobile robot in a versatile campus environment. We show that this uncertainty can be utilized to increase object detection performance under the open-set conditions that are typically encountered in robotic vision. A Dropout Sampling network is shown to achieve a 12.3% increase in recall (for the same precision score as a standard network) and a 15.1% increase in precision (for the same recall score as the standard network).
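The core of Dropout Sampling described above can be illustrated on a toy linear classifier: keep dropout active at test time, average the softmax outputs over several stochastic forward passes, and read off the predictive entropy as label uncertainty. This is a minimal numpy sketch; the toy model, function name, and parameters are assumptions, not the paper's object detector.

```python
import numpy as np

def mc_dropout_uncertainty(x, W, b, p_drop=0.5, n_samples=50, seed=0):
    """Monte Carlo Dropout on a toy linear classifier: repeat a stochastic
    forward pass with dropout left ON, average the softmax outputs, and
    return (mean probabilities, predictive entropy) as label uncertainty."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= p_drop        # random dropout mask
        h = (x * mask) / (1.0 - p_drop)             # inverted dropout scaling
        z = h @ W + b
        e = np.exp(z - z.max())                     # stable softmax
        probs.append(e / e.sum())
    p_mean = np.mean(probs, axis=0)
    entropy = -(p_mean * np.log(p_mean + 1e-12)).sum()
    return p_mean, entropy
```

Thresholding this entropy is one way such uncertainty can be used to reject unknown objects under open-set conditions.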
1903.01182
Hao-Yun Chen
Hao-Yun Chen, Pei-Hsin Wang, Chun-Hao Liu, Shih-Chieh Chang, Jia-Yu Pan, Yu-Ting Chen, Wei Wei, Da-Cheng Juan
Complement Objective Training
ICLR'19 Camera Ready
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
[ { "created": "Mon, 4 Mar 2019 11:35:27 GMT", "version": "v1" }, { "created": "Thu, 21 Mar 2019 18:33:12 GMT", "version": "v2" } ]
2019-03-25
[ [ "Chen", "Hao-Yun", "" ], [ "Wang", "Pei-Hsin", "" ], [ "Liu", "Chun-Hao", "" ], [ "Chang", "Shih-Chieh", "" ], [ "Pan", "Jia-Yu", "" ], [ "Chen", "Yu-Ting", "" ], [ "Wei", "Wei", "" ], [ "Juan", "Da-Cheng", "" ] ]
Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
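The complement objective sketched above can be made concrete: restrict the predicted distribution to the incorrect classes, renormalize it by 1 − p(true class), and compute its entropy, which training then maximizes (alongside minimizing cross entropy) to flatten probability mass over wrong classes. This numpy sketch follows that description; the exact normalization and any averaging constants are assumptions about the paper's formulation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def complement_entropy(logits, labels, eps=1e-12):
    """Entropy of the predicted distribution restricted to the complement
    (incorrect) classes, renormalised to sum to 1 per sample. Training
    would maximise this quantity to neutralise wrong-class probabilities."""
    p = softmax(logits)
    n, k = p.shape
    py = p[np.arange(n), labels]                      # prob of true class
    mask = np.ones_like(p, dtype=bool)
    mask[np.arange(n), labels] = False                # drop the true class
    pc = p[mask].reshape(n, k - 1) / np.maximum(1.0 - py, eps)[:, None]
    h = -(pc * np.log(pc + eps)).sum(axis=1)          # per-sample entropy
    return h.mean()
```

Uniform logits give the maximal value log(K − 1), and any probability mass concentrated on one wrong class lowers it, which is exactly the behavior the complement objective penalizes.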
2109.08303
EPTCS
Giuseppe Mazzotta (University of Calabria)
Compilation of Aggregates in ASP
In Proceedings ICLP 2021, arXiv:2109.07914
EPTCS 345, 2021, pp. 286-295
10.4204/EPTCS.345.45
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Answer Set Programming (ASP) is a well-known problem-solving formalism in computational logic. Nowadays, ASP is used in many real-world scenarios thanks to ASP solvers. Standard evaluation of ASP programs suffers from an intrinsic limitation, known as the Grounding Bottleneck, due to the grounding of some rules that could fill all the available memory. As a result, there exist instances of real-world problems that are intractable using the standard Ground and Solve approach. In order to tackle this problem, different strategies have been proposed. Among them, we focus on a recent approach based on the compilation of problematic constraints as propagators, which has proven to be very promising but is currently limited to constraints without aggregates. Since aggregates are widely used in ASP, in this paper we extend such an approach also to constraints containing aggregates. Good results, which prove the effectiveness of the proposed approach, have been achieved.
[ { "created": "Fri, 17 Sep 2021 01:50:52 GMT", "version": "v1" } ]
2021-09-20
[ [ "Mazzotta", "Giuseppe", "", "University of Calabria" ] ]
Answer Set Programming (ASP) is a well-known problem-solving formalism in computational logic. Nowadays, ASP is used in many real-world scenarios thanks to ASP solvers. Standard evaluation of ASP programs suffers from an intrinsic limitation, known as the Grounding Bottleneck, due to the grounding of some rules that could fill all the available memory. As a result, there exist instances of real-world problems that are intractable using the standard Ground and Solve approach. In order to tackle this problem, different strategies have been proposed. Among them, we focus on a recent approach based on the compilation of problematic constraints as propagators, which has proven to be very promising but is currently limited to constraints without aggregates. Since aggregates are widely used in ASP, in this paper we extend such an approach also to constraints containing aggregates. Good results, which prove the effectiveness of the proposed approach, have been achieved.
1906.01942
Yunsu Kim
Yunsu Kim, Hendrik Rosendahl, Nick Rossenbach, Jan Rosendahl, Shahram Khadivi, Hermann Ney
Learning Bilingual Sentence Embeddings via Autoencoding and Computing Similarities with a Multilayer Perceptron
ACL 2019 Repl4NLP camera-ready
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel model architecture and training algorithm to learn bilingual sentence embeddings from a combination of parallel and monolingual data. Our method connects autoencoding and neural machine translation to force the source and target sentence embeddings to share the same space without the help of a pivot language or an additional transformation. We train a multilayer perceptron on top of the sentence embeddings to extract good bilingual sentence pairs from nonparallel or noisy parallel data. Our approach shows promising performance on sentence alignment recovery and the WMT 2018 parallel corpus filtering tasks with only a single model.
[ { "created": "Wed, 5 Jun 2019 11:16:33 GMT", "version": "v1" } ]
2019-06-06
[ [ "Kim", "Yunsu", "" ], [ "Rosendahl", "Hendrik", "" ], [ "Rossenbach", "Nick", "" ], [ "Rosendahl", "Jan", "" ], [ "Khadivi", "Shahram", "" ], [ "Ney", "Hermann", "" ] ]
We propose a novel model architecture and training algorithm to learn bilingual sentence embeddings from a combination of parallel and monolingual data. Our method connects autoencoding and neural machine translation to force the source and target sentence embeddings to share the same space without the help of a pivot language or an additional transformation. We train a multilayer perceptron on top of the sentence embeddings to extract good bilingual sentence pairs from nonparallel or noisy parallel data. Our approach shows promising performance on sentence alignment recovery and the WMT 2018 parallel corpus filtering tasks with only a single model.
1511.01399
\'Eric Tanter
Ronald Garcia and \'Eric Tanter
Deriving a Simple Gradual Security Language
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstracting Gradual Typing (AGT) is an approach to systematically deriving gradual counterparts to static type disciplines. The approach consists of defining the semantics of gradual types by interpreting them as sets of static types, and then defining an optimal abstraction back to gradual types. These operations are used to lift the static discipline to the gradual setting. The runtime semantics of the gradual language then arises as reductions on gradual typing derivations. To demonstrate the flexibility of AGT, we gradualize $\lambda_\text{SEC}$, the prototypical security-typed language, with respect to only security labels rather than entire types, yielding a type system that ranges gradually from simply-typed to securely-typed. We establish noninterference for the gradual language, called $\lambda_{\widetilde{\text{SEC}}}$, using Zdancewic's logical relation proof method. Whereas prior work presents gradual security cast languages, which require explicit security casts, this work yields the first gradual security source language, which requires no explicit casts.
[ { "created": "Wed, 4 Nov 2015 17:07:00 GMT", "version": "v1" }, { "created": "Thu, 5 Nov 2015 01:40:26 GMT", "version": "v2" }, { "created": "Fri, 20 Nov 2015 15:18:22 GMT", "version": "v3" } ]
2015-11-23
[ [ "Garcia", "Ronald", "" ], [ "Tanter", "Éric", "" ] ]
Abstracting Gradual Typing (AGT) is an approach to systematically deriving gradual counterparts to static type disciplines. The approach consists of defining the semantics of gradual types by interpreting them as sets of static types, and then defining an optimal abstraction back to gradual types. These operations are used to lift the static discipline to the gradual setting. The runtime semantics of the gradual language then arises as reductions on gradual typing derivations. To demonstrate the flexibility of AGT, we gradualize $\lambda_\text{SEC}$, the prototypical security-typed language, with respect to only security labels rather than entire types, yielding a type system that ranges gradually from simply-typed to securely-typed. We establish noninterference for the gradual language, called $\lambda_{\widetilde{\text{SEC}}}$, using Zdancewic's logical relation proof method. Whereas prior work presents gradual security cast languages, which require explicit security casts, this work yields the first gradual security source language, which requires no explicit casts.
1006.0849
Nicholas Fyson
Nick Fyson, Tijl De Bie and Nello Cristianini
Reconstruction of Causal Networks by Set Covering
Under consideration for the ECML PKDD 2010 conference
null
null
null
cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for the reconstruction of networks, based on the order of nodes visited by a stochastic branching process. Our algorithm reconstructs a network of minimal size that ensures consistency with the data. Crucially, we show that global consistency with the data can be achieved through purely local considerations, inferring the neighbourhood of each node in turn. The optimisation problem solved for each individual node can be reduced to a Set Covering Problem, which is known to be NP-hard but can be approximated well in practice. We then extend our approach to account for noisy data, based on the Minimum Description Length principle. We demonstrate our algorithms on synthetic data, generated by an SIR-like epidemiological model.
[ { "created": "Fri, 4 Jun 2010 10:33:49 GMT", "version": "v1" } ]
2010-06-07
[ [ "Fyson", "Nick", "" ], [ "De Bie", "Tijl", "" ], [ "Cristianini", "Nello", "" ] ]
We present a method for the reconstruction of networks, based on the order of nodes visited by a stochastic branching process. Our algorithm reconstructs a network of minimal size that ensures consistency with the data. Crucially, we show that global consistency with the data can be achieved through purely local considerations, inferring the neighbourhood of each node in turn. The optimisation problem solved for each individual node can be reduced to a Set Covering Problem, which is known to be NP-hard but can be approximated well in practice. We then extend our approach to account for noisy data, based on the Minimum Description Length principle. We demonstrate our algorithms on synthetic data, generated by an SIR-like epidemiological model.
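Since each node's neighborhood inference above reduces to a Set Covering Problem, the standard greedy approximation applies: repeatedly pick the subset covering the most still-uncovered elements, which gives the classic logarithmic approximation guarantee. This is a self-contained sketch of that greedy routine, independent of the paper's specific encoding of cascade data into covering instances.

```python
def greedy_set_cover(universe, subsets):
    """Greedy Set Cover: repeatedly choose the subset that covers the most
    still-uncovered elements of `universe`. Returns the chosen subsets;
    achieves a ln(n)-factor approximation of the optimal cover size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(set(best))
        uncovered -= best
    return chosen
```

In the paper's setting, each covering instance would be solved per node to select a minimal set of candidate parents consistent with the observed cascades.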
2211.14366
Gregory Spell
Gregory P. Spell, Simiao Ren, Leslie M. Collins, Jordan M. Malof
Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling
This paper has been accepted to AAAI 2023; this is not the final version
null
10.1609/aaai.v37i8.26178
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and show the efficacy of a new method to address generic inverse problems. Inverse modeling is the task whereby one seeks to determine the control parameters of a natural system that produce a given set of observed measurements. Recent work has shown impressive results using deep learning, but we note that there is a trade-off between model performance and computational time. For some applications, the computational time at inference for the best performing inverse modeling method may be overly prohibitive to its use. We present a new method that leverages multiple manifolds as a mixture of backward (e.g., inverse) models in a forward-backward model architecture. These multiple backward models all share a common forward model, and their training is mitigated by generating training examples from the forward model. The proposed method thus has two innovations: 1) the multiple Manifold Mixture Network (MMN) architecture, and 2) the training procedure involving augmenting backward model training data using the forward model. We demonstrate the advantages of our method by comparing to several baselines on four benchmark inverse problems, and we furthermore provide analysis to motivate its design.
[ { "created": "Fri, 25 Nov 2022 20:18:07 GMT", "version": "v1" } ]
2023-08-15
[ [ "Spell", "Gregory P.", "" ], [ "Ren", "Simiao", "" ], [ "Collins", "Leslie M.", "" ], [ "Malof", "Jordan M.", "" ] ]
We propose and show the efficacy of a new method to address generic inverse problems. Inverse modeling is the task whereby one seeks to determine the control parameters of a natural system that produce a given set of observed measurements. Recent work has shown impressive results using deep learning, but we note that there is a trade-off between model performance and computational time. For some applications, the computational time at inference for the best performing inverse modeling method may be overly prohibitive to its use. We present a new method that leverages multiple manifolds as a mixture of backward (e.g., inverse) models in a forward-backward model architecture. These multiple backward models all share a common forward model, and their training is mitigated by generating training examples from the forward model. The proposed method thus has two innovations: 1) the multiple Manifold Mixture Network (MMN) architecture, and 2) the training procedure involving augmenting backward model training data using the forward model. We demonstrate the advantages of our method by comparing to several baselines on four benchmark inverse problems, and we furthermore provide analysis to motivate its design.
1701.02415
Sachini Jayasooriya
Sachini Jayasooriya, Mahyar Shirvanimoghaddam, Lawrence Ong, and Sarah J. Johnson
Analysis and design of Raptor codes using a multi-edge framework
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The focus of this paper is on the analysis and design of Raptor codes using a multi-edge framework. In this regard, we first represent the Raptor code as a multi-edge type low-density parity-check (METLDPC) code. This MET representation gives a general framework to analyze and design Raptor codes over a binary input additive white Gaussian noise channel using MET density evolution (MET-DE). We consider a joint decoding scheme based on belief propagation (BP) decoding for Raptor codes in the multi-edge framework, and analyze the convergence behavior of the BP decoder using MET-DE. In joint decoding of Raptor codes, the component codes corresponding to the inner code and the precode are decoded in parallel and provide information to each other. We also derive an exact expression for the stability of Raptor codes with joint decoding. We then propose an efficient Raptor code design method using the multi-edge framework, where we simultaneously optimize the inner code and the precode. Finally, we consider performance-complexity trade-offs of Raptor codes using the multi-edge framework. Through density evolution analysis, we show that the Raptor codes designed using the multi-edge framework outperform the existing Raptor codes in the literature in terms of the realized rate.
[ { "created": "Tue, 10 Jan 2017 02:22:48 GMT", "version": "v1" } ]
2017-01-11
[ [ "Jayasooriya", "Sachini", "" ], [ "Shirvanimoghaddam", "Mahyar", "" ], [ "Ong", "Lawrence", "" ], [ "Johnson", "Sarah J.", "" ] ]
The focus of this paper is on the analysis and design of Raptor codes using a multi-edge framework. In this regard, we first represent the Raptor code as a multi-edge type low-density parity-check (METLDPC) code. This MET representation gives a general framework to analyze and design Raptor codes over a binary input additive white Gaussian noise channel using MET density evolution (MET-DE). We consider a joint decoding scheme based on belief propagation (BP) decoding for Raptor codes in the multi-edge framework, and analyze the convergence behavior of the BP decoder using MET-DE. In joint decoding of Raptor codes, the component codes corresponding to the inner code and the precode are decoded in parallel and provide information to each other. We also derive an exact expression for the stability of Raptor codes with joint decoding. We then propose an efficient Raptor code design method using the multi-edge framework, where we simultaneously optimize the inner code and the precode. Finally, we consider performance-complexity trade-offs of Raptor codes using the multi-edge framework. Through density evolution analysis, we show that the Raptor codes designed using the multi-edge framework outperform the existing Raptor codes in the literature in terms of the realized rate.
2101.04319
Alsharif Abuadbba Dr
Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal
DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN
The 36th ACM SIGAPP Symposium on Applied Computing (ACM SAC)
null
10.1145/3412841.3441970
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have shown to be vulnerable to manipulation attacks, such as poisoning attacks and fine-tuning. Hence, it is essential to ensure the integrity and authenticity of CNNs because compromised models can produce incorrect outputs and behave maliciously. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks. DeepiSign applies the idea of fragile invisible watermarking to securely embed a secret and its hash value into a CNN model. To verify the integrity and authenticity of the model, we retrieve the secret from the model, compute the hash value of the secret, and compare it with the embedded hash value. To minimize the effects of the embedded secret on the CNN model, we use a wavelet-based technique to transform weights into the frequency domain and embed the secret into less significant coefficients. Our theoretical analysis shows that DeepiSign can hide up to a 1KB secret in each layer with minimal loss of the model's accuracy. To evaluate the security and performance of DeepiSign, we performed experiments on four pre-trained models (ResNet18, VGG16, AlexNet, and MobileNet) using three datasets (MNIST, CIFAR-10, and ImageNet) against three types of manipulation attacks (targeted input poisoning, output poisoning, and fine-tuning). The results demonstrate that DeepiSign is verifiable without degrading the classification accuracy, and robust against representative CNN manipulation attacks.
[ { "created": "Tue, 12 Jan 2021 06:42:45 GMT", "version": "v1" } ]
2021-01-13
[ [ "Abuadbba", "Alsharif", "" ], [ "Kim", "Hyoungshick", "" ], [ "Nepal", "Surya", "" ] ]
Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have shown to be vulnerable to manipulation attacks, such as poisoning attacks and fine-tuning. Hence, it is essential to ensure the integrity and authenticity of CNNs because compromised models can produce incorrect outputs and behave maliciously. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks. DeepiSign applies the idea of fragile invisible watermarking to securely embed a secret and its hash value into a CNN model. To verify the integrity and authenticity of the model, we retrieve the secret from the model, compute the hash value of the secret, and compare it with the embedded hash value. To minimize the effects of the embedded secret on the CNN model, we use a wavelet-based technique to transform weights into the frequency domain and embed the secret into less significant coefficients. Our theoretical analysis shows that DeepiSign can hide up to a 1KB secret in each layer with minimal loss of the model's accuracy. To evaluate the security and performance of DeepiSign, we performed experiments on four pre-trained models (ResNet18, VGG16, AlexNet, and MobileNet) using three datasets (MNIST, CIFAR-10, and ImageNet) against three types of manipulation attacks (targeted input poisoning, output poisoning, and fine-tuning). The results demonstrate that DeepiSign is verifiable without degrading the classification accuracy, and robust against representative CNN manipulation attacks.
1004.4359
Kenton Born
Kenton Born, David Gustafson
NgViz: Detecting DNS Tunnels through N-Gram Visualization and Quantitative Analysis
In Proceedings of the the 6th Annual Cyber Security and Information Intelligence Research Workshop, Oak Ridge, TN, April 21-23, 2010
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces NgViz, a tool that examines DNS traffic and shows anomalies in n-gram frequencies. This is accomplished by comparing input files against a fingerprint of legitimate traffic. Both quantitative analysis and visual aids are provided that allow the user to make determinations about the legitimacy of the DNS traffic.
[ { "created": "Sun, 25 Apr 2010 15:40:52 GMT", "version": "v1" } ]
2010-04-27
[ [ "Born", "Kenton", "" ], [ "Gustafson", "David", "" ] ]
This paper introduces NgViz, a tool that examines DNS traffic and shows anomalies in n-gram frequencies. This is accomplished by comparing input files against a fingerprint of legitimate traffic. Both quantitative analysis and visual aids are provided that allow the user to make determinations about the legitimacy of the DNS traffic.
1609.02107
Jian-Kang Zhang
Zheng Dong and Yan-Yu Zhang and Jian-Kang Zhang and Xiang-Chuan Gao
Quadrature Amplitude Modulation Division for Multiuser MISO Broadcast Channels
null
null
10.1109/JSTSP.2016.2607684
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers a discrete-time multiuser multiple-input single-output (MISO) Gaussian broadcast channel (BC), in which channel state information (CSI) is available at both the transmitter and the receivers. The flexible and explicit design of a uniquely decomposable constellation group (UDCG) is provided based on pulse amplitude modulation (PAM) and rectangular quadrature amplitude modulation (QAM) constellations. With this, a modulation division (MD) transmission scheme is developed for the MISO BC. The proposed MD scheme enables each receiver to uniquely and efficiently detect their desired signals from the superposition of mutually interfering cochannel signals in the absence of noise. In our design, the optimal transmitter beamforming problem is solved in closed form for the two-user MISO BC using max-min fairness as a design criterion. Then, for a general case with more than two receivers, we develop a user-grouping-based beamforming scheme, where the grouping method, beamforming vector design and power allocation problems are addressed by using weighted max-min fairness. It is shown that our proposed approach has a lower probability of error compared with the zero-forcing (ZF) method when the Hermitian angle between the two channel vectors is small in a two-user case. In addition, simulation results also reveal that for the general channel model with more than two users, our user-grouping-based scheme significantly outperforms the ZF, time division (TD), minimum mean-square error (MMSE) and signal-to-leakage-and-noise ratio (SLNR) based techniques in moderate and high SNR regimes when the number of users approaches the number of base station (BS) antennas, and it degrades into the ZF scheme when the number of users is far less than the number of BS antennas in Rayleigh fading channels.
[ { "created": "Wed, 7 Sep 2016 18:45:47 GMT", "version": "v1" } ]
2016-12-21
[ [ "Dong", "Zheng", "" ], [ "Zhang", "Yan-Yu", "" ], [ "Zhang", "Jian-Kang", "" ], [ "Gao", "Xiang-Chuan", "" ] ]
This paper considers a discrete-time multiuser multiple-input single-output (MISO) Gaussian broadcast channel (BC), in which channel state information (CSI) is available at both the transmitter and the receivers. The flexible and explicit design of a uniquely decomposable constellation group (UDCG) is provided based on pulse amplitude modulation (PAM) and rectangular quadrature amplitude modulation (QAM) constellations. With this, a modulation division (MD) transmission scheme is developed for the MISO BC. The proposed MD scheme enables each receiver to uniquely and efficiently detect their desired signals from the superposition of mutually interfering cochannel signals in the absence of noise. In our design, the optimal transmitter beamforming problem is solved in closed form for the two-user MISO BC using max-min fairness as a design criterion. Then, for a general case with more than two receivers, we develop a user-grouping-based beamforming scheme, where the grouping method, beamforming vector design and power allocation problems are addressed by using weighted max-min fairness. It is shown that our proposed approach has a lower probability of error compared with the zero-forcing (ZF) method when the Hermitian angle between the two channel vectors is small in a two-user case. In addition, simulation results also reveal that for the general channel model with more than two users, our user-grouping-based scheme significantly outperforms the ZF, time division (TD), minimum mean-square error (MMSE) and signal-to-leakage-and-noise ratio (SLNR) based techniques in moderate and high SNR regimes when the number of users approaches the number of base station (BS) antennas, and it degrades into the ZF scheme when the number of users is far less than the number of BS antennas in Rayleigh fading channels.
1302.1638
Jnanamurthy Hk
Jnanamurthy H. K.
Discovery of Maximal Frequent Item Sets using Subset Creation
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data mining is the practice of searching large amounts of data to discover data patterns. Data mining uses mathematical algorithms to group data and predict future events. Association rules are a research area in the field of knowledge discovery. Many data mining researchers have improved the quality of association rules for business development by incorporating influential factors such as utility and the number of items sold into the mining of association patterns. In this paper, we propose an efficient algorithm to find maximal frequent itemsets first. Most association rule algorithms find minimal frequent itemsets first and then derive the maximal frequent itemsets from them; these methods consume more time to find maximal frequent itemsets. To overcome this problem, we propose a new approach that finds maximal frequent itemsets directly using the concept of subsets. The proposed method is found to be efficient in finding maximal frequent itemsets.
[ { "created": "Thu, 7 Feb 2013 04:29:39 GMT", "version": "v1" } ]
2013-02-08
[ [ "K.", "Jnanamurthy H.", "" ] ]
Data mining is the practice of searching large amounts of data to discover data patterns. Data mining uses mathematical algorithms to group data and predict future events. Association rules are a research area in the field of knowledge discovery. Many data mining researchers have improved the quality of association rules for business development by incorporating influential factors such as utility and the number of items sold into the mining of association patterns. In this paper, we propose an efficient algorithm to find maximal frequent itemsets first. Most association rule algorithms find minimal frequent itemsets first and then derive the maximal frequent itemsets from them; these methods consume more time to find maximal frequent itemsets. To overcome this problem, we propose a new approach that finds maximal frequent itemsets directly using the concept of subsets. The proposed method is found to be efficient in finding maximal frequent itemsets.
2207.07271
Sarah Li Ms.
Sarah H.Q. Li, Assal\'e Adj\'e, Pierre-Lo\"ic Garoche, Beh\c{c}et A\c{c}{\i}kme\c{s}e
Set-based value operators for non-stationary Markovian environments
17 pages, 11 figures, 1 table
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyzes finite state Markov Decision Processes (MDPs) with uncertain parameters in compact sets and re-examines results from robust MDP via set-based fixed point theory. To this end, we generalize the Bellman and policy evaluation operators to contracting operators on the value function space and denote them as \emph{value operators}. We lift these value operators to act on \emph{sets} of value functions and denote them as \emph{set-based value operators}. We prove that the set-based value operators are \emph{contractions} in the space of compact value function sets. Leveraging insights from set theory, we generalize the rectangularity condition in classic robust MDP literature to a containment condition for all value operators, which is weaker and can be applied to a larger set of parameter-uncertain MDPs and contracting operators in dynamic programming. We prove that both the rectangularity condition and the containment condition sufficiently ensure that the set-based value operator's fixed point set contains its own extrema elements. For convex and compact sets of uncertain MDP parameters, we show equivalence between the classic robust value function and the supremum of the fixed point set of the set-based Bellman operator. Under dynamically changing MDP parameters in compact sets, we prove a set convergence result for value iteration, which otherwise may not converge to a single value function. Finally, we derive novel guarantees for probabilistic path-planning problems in planet exploration and stratospheric station-keeping.
[ { "created": "Fri, 15 Jul 2022 03:37:59 GMT", "version": "v1" }, { "created": "Fri, 9 Sep 2022 18:16:48 GMT", "version": "v2" }, { "created": "Tue, 8 Aug 2023 14:51:47 GMT", "version": "v3" } ]
2023-08-09
[ [ "Li", "Sarah H. Q.", "" ], [ "Adjé", "Assalé", "" ], [ "Garoche", "Pierre-Loïc", "" ], [ "Açıkmeşe", "Behçet", "" ] ]
This paper analyzes finite state Markov Decision Processes (MDPs) with uncertain parameters in compact sets and re-examines results from robust MDP via set-based fixed point theory. To this end, we generalize the Bellman and policy evaluation operators to contracting operators on the value function space and denote them as \emph{value operators}. We lift these value operators to act on \emph{sets} of value functions and denote them as \emph{set-based value operators}. We prove that the set-based value operators are \emph{contractions} in the space of compact value function sets. Leveraging insights from set theory, we generalize the rectangularity condition in classic robust MDP literature to a containment condition for all value operators, which is weaker and can be applied to a larger set of parameter-uncertain MDPs and contracting operators in dynamic programming. We prove that both the rectangularity condition and the containment condition sufficiently ensure that the set-based value operator's fixed point set contains its own extrema elements. For convex and compact sets of uncertain MDP parameters, we show equivalence between the classic robust value function and the supremum of the fixed point set of the set-based Bellman operator. Under dynamically changing MDP parameters in compact sets, we prove a set convergence result for value iteration, which otherwise may not converge to a single value function. Finally, we derive novel guarantees for probabilistic path-planning problems in planet exploration and stratospheric station-keeping.
2303.08657
Sriram Radhakrishna
Sriram Radhakrishna, Adithya Balasubramanyam
Economical Quaternion Extraction from a Human Skeletal Pose Estimate using 2-D Cameras
This is the post-final version of the paper published with IEEE CONECCT 2023 with some figure reference errors rectified
2023 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2023, pp. 1-6
10.1109/CONECCT57959.2023.10234829
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a novel algorithm to extract a quaternion from a two-dimensional camera frame for estimating a contained human skeletal pose. The problem of pose estimation is usually tackled through the usage of stereo cameras and inertial measurement units for obtaining depth and Euclidean distance for measurement of points in 3D space. However, the usage of these devices comes with a high signal processing latency as well as a significant monetary cost. By making use of MediaPipe, a framework for building perception pipelines for human pose estimation, the proposed algorithm extracts a quaternion from a 2-D frame capturing an image of a human object at a sub-fifty-millisecond latency, while also being capable of deployment at edges with a single camera frame and generally low computational resource availability, especially for use cases involving last-minute detection and reaction by autonomous robots. The algorithm seeks to bypass the funding barrier and improve accessibility for robotics researchers involved in designing control systems.
[ { "created": "Wed, 15 Mar 2023 14:41:17 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 07:36:10 GMT", "version": "v2" }, { "created": "Thu, 14 Sep 2023 04:26:01 GMT", "version": "v3" } ]
2023-09-15
[ [ "Radhakrishna", "Sriram", "" ], [ "Balasubramanyam", "Adithya", "" ] ]
In this paper, we present a novel algorithm to extract a quaternion from a two-dimensional camera frame for estimating a contained human skeletal pose. The problem of pose estimation is usually tackled through the usage of stereo cameras and inertial measurement units for obtaining depth and Euclidean distance for measurement of points in 3D space. However, the usage of these devices comes with a high signal processing latency as well as a significant monetary cost. By making use of MediaPipe, a framework for building perception pipelines for human pose estimation, the proposed algorithm extracts a quaternion from a 2-D frame capturing an image of a human object at a sub-fifty-millisecond latency, while also being capable of deployment at edges with a single camera frame and generally low computational resource availability, especially for use cases involving last-minute detection and reaction by autonomous robots. The algorithm seeks to bypass the funding barrier and improve accessibility for robotics researchers involved in designing control systems.
1909.04312
Wei Zhang
Shuo Yang, Wei Zhang, Weizhi Lu, Hesheng Wang, and Yibin Li
Learning Actions from Human Demonstration Video for Robotic Manipulation
Accepted by IROS 2019
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning actions from human demonstration is an emerging trend for designing intelligent robotic systems, which can be referred to as video to command. The performance of such an approach relies heavily on the quality of video captioning. However, general video captioning methods focus more on understanding the full frame, lacking consideration of the specific objects of interest in robotic manipulation. We propose a novel deep model to learn actions from human demonstration video for robotic manipulation. It consists of two deep networks, a grasp detection network (GNet) and a video captioning network (CNet). GNet performs two functions: providing grasp solutions and extracting the local features of the objects of interest in robotic manipulation. CNet outputs the captioning results by fusing the features of both full frames and local objects. Experimental results on a UR5 robotic arm show that our method produces more accurate commands from video demonstration than state-of-the-art work, thereby leading to more robust grasping performance.
[ { "created": "Tue, 10 Sep 2019 06:20:46 GMT", "version": "v1" } ]
2019-09-11
[ [ "Yang", "Shuo", "" ], [ "Zhang", "Wei", "" ], [ "Lu", "Weizhi", "" ], [ "Wang", "Hesheng", "" ], [ "Li", "Yibin", "" ] ]
Learning actions from human demonstration is an emerging trend for designing intelligent robotic systems, which can be referred to as video to command. The performance of such an approach relies heavily on the quality of video captioning. However, general video captioning methods focus more on understanding the full frame, lacking consideration of the specific objects of interest in robotic manipulation. We propose a novel deep model to learn actions from human demonstration video for robotic manipulation. It consists of two deep networks, a grasp detection network (GNet) and a video captioning network (CNet). GNet performs two functions: providing grasp solutions and extracting the local features of the objects of interest in robotic manipulation. CNet outputs the captioning results by fusing the features of both full frames and local objects. Experimental results on a UR5 robotic arm show that our method produces more accurate commands from video demonstration than state-of-the-art work, thereby leading to more robust grasping performance.
2402.11608
Louis Jalouzot
Louis Jalouzot, Robin Sobczyk, Bastien Lhopitallier, Jeanne Salle, Nur Lan, Emmanuel Chemla, Yair Lakretz
Metric-Learning Encoding Models Identify Processing Profiles of Linguistic Features in BERT's Representations
17 pages, 13 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce Metric-Learning Encoding Models (MLEMs) as a new approach to understanding how neural systems represent the theoretical features of the objects they process. As a proof-of-concept, we apply MLEMs to neural representations extracted from BERT, and track a wide variety of linguistic features (e.g., tense, subject person, clause type, clause embedding). We find that: (1) linguistic features are ordered: they separate representations of sentences to different degrees in different layers; (2) neural representations are organized hierarchically: in some layers, we find clusters of representations nested within larger clusters, following successively important linguistic features; (3) linguistic features are disentangled in middle layers: distinct, selective units are activated by distinct linguistic features. Methodologically, MLEMs are superior (4) to multivariate decoding methods, being more robust to type-I errors, and (5) to univariate encoding methods, in being able to predict both local and distributed representations. Together, this demonstrates the utility of Metric-Learning Encoding Models for studying how linguistic features are neurally encoded in language models and the advantage of MLEMs over traditional methods. MLEMs can be extended to other domains (e.g. vision) and to other neural systems, such as the human brain.
[ { "created": "Sun, 18 Feb 2024 14:57:53 GMT", "version": "v1" } ]
2024-02-20
[ [ "Jalouzot", "Louis", "" ], [ "Sobczyk", "Robin", "" ], [ "Lhopitallier", "Bastien", "" ], [ "Salle", "Jeanne", "" ], [ "Lan", "Nur", "" ], [ "Chemla", "Emmanuel", "" ], [ "Lakretz", "Yair", "" ] ]
We introduce Metric-Learning Encoding Models (MLEMs) as a new approach to understanding how neural systems represent the theoretical features of the objects they process. As a proof-of-concept, we apply MLEMs to neural representations extracted from BERT, and track a wide variety of linguistic features (e.g., tense, subject person, clause type, clause embedding). We find that: (1) linguistic features are ordered: they separate representations of sentences to different degrees in different layers; (2) neural representations are organized hierarchically: in some layers, we find clusters of representations nested within larger clusters, following successively important linguistic features; (3) linguistic features are disentangled in middle layers: distinct, selective units are activated by distinct linguistic features. Methodologically, MLEMs are superior (4) to multivariate decoding methods, being more robust to type-I errors, and (5) to univariate encoding methods, in being able to predict both local and distributed representations. Together, this demonstrates the utility of Metric-Learning Encoding Models for studying how linguistic features are neurally encoded in language models and the advantage of MLEMs over traditional methods. MLEMs can be extended to other domains (e.g. vision) and to other neural systems, such as the human brain.
1408.1068
Carlos Alberto Fernandez-y-Fernandez
Jorge Aguilar, Moises Sanchez, Carlos Fernandez-y-Fernandez, Everth Rocha, David Martinez and Jose Figueroa
The Size of Software Projects Developed by Mexican Companies
5 pages, The 2014 International Conference on Software Engineering Research and Practice (SERP'14)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, most software projects around the world are small rather than large. Despite this, there are more methodologies, tools, frameworks, processes, and so on, for developing and managing large software projects than for small ones. Small software projects are important because they generate considerable resources. For example, apps (small mobile applications) generate around $25 billion in revenue. This paper shows our findings regarding the size of the projects built by Mexican software development companies. We surveyed 107 Mexican companies and found that 92% of their developed projects are micro and small, and 8% are medium or large. In addition, according to our research, 84.1% of companies in Mexico are micro or small businesses.
[ { "created": "Tue, 5 Aug 2014 18:53:59 GMT", "version": "v1" } ]
2014-08-06
[ [ "Aguilar", "Jorge", "" ], [ "Sanchez", "Moises", "" ], [ "Fernandez-y-Fernandez", "Carlos", "" ], [ "Rocha", "Everth", "" ], [ "Martinez", "David", "" ], [ "Figueroa", "Jose", "" ] ]
Currently, most software projects around the world are small rather than large. Despite this, there are more methodologies, tools, frameworks, processes, and so on, for developing and managing large software projects than for small ones. Small software projects are important because they generate considerable resources. For example, apps (small mobile applications) generate around $25 billion in revenue. This paper shows our findings regarding the size of the projects built by Mexican software development companies. We surveyed 107 Mexican companies and found that 92% of their developed projects are micro and small, and 8% are medium or large. In addition, according to our research, 84.1% of companies in Mexico are micro or small businesses.
2105.10133
Anahita Mohseni-Kabir
Anahita Mohseni-Kabir, Manuela Veloso, Maxim Likhachev
Waiting Tables as a Robot Planning Problem
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present how we formalize the waiting tables task in a restaurant as a robot planning problem. This formalization was used to test our recently developed algorithms that allow for optimal planning for achieving multiple independent tasks that are partially observable and evolve over time [1], [2].
[ { "created": "Fri, 21 May 2021 05:21:54 GMT", "version": "v1" } ]
2021-05-24
[ [ "Mohseni-Kabir", "Anahita", "" ], [ "Veloso", "Manuela", "" ], [ "Likhachev", "Maxim", "" ] ]
We present how we formalize the waiting tables task in a restaurant as a robot planning problem. This formalization was used to test our recently developed algorithms that allow for optimal planning for achieving multiple independent tasks that are partially observable and evolve over time [1], [2].
1804.00115
Anak Agung Julius
Wei Qiao and Kyle Altman and Agung Julius and Bernard Possidente and John T. Wen
Continuous Circadian Phase Estimation Using Adaptive Notch Filter
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Actigraphy has been widely used for the analysis of circadian rhythm. Current practice applies regression analysis to data from multiple days to estimate the circadian phase. This paper presents a filtering method for online processing of biometric data to estimate the circadian phase. We apply the proposed method on actigraphy data of fruit flies (Drosophila melanogaster).
[ { "created": "Sat, 31 Mar 2018 03:38:56 GMT", "version": "v1" } ]
2018-04-03
[ [ "Qiao", "Wei", "" ], [ "Altman", "Kyle", "" ], [ "Julius", "Agung", "" ], [ "Possidente", "Bernard", "" ], [ "Wen", "John T.", "" ] ]
Actigraphy has been widely used for the analysis of circadian rhythm. Current practice applies regression analysis to data from multiple days to estimate the circadian phase. This paper presents a filtering method for online processing of biometric data to estimate the circadian phase. We apply the proposed method on actigraphy data of fruit flies (Drosophila melanogaster).
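The abstract does not give the filter equations, so the following is only a minimal illustration of online circadian phase estimation, not the authors' adaptive notch filter: each incoming sample is correlated with cosine/sine references at the 24-hour frequency, exponentially weighted in-phase (I) and quadrature (Q) sums are maintained, and the phase is recovered with atan2. The function name, forgetting-factor scheme, and parameter values are all assumptions for the sketch.

```python
import math

def online_phase_estimate(samples, dt_hours=0.5, period=24.0, forget=0.99):
    """Track the phase of a circadian rhythm online.

    At each step the sample is correlated with cos/sin references at
    the circadian frequency; exponentially forgotten sums give the
    in-phase (I) and quadrature (Q) components, and the phase is
    atan2(Q, I). Returns the phase estimate (radians) after the
    last sample has been processed.
    """
    omega = 2.0 * math.pi / period  # circadian angular frequency (rad/hour)
    I = Q = 0.0
    t = 0.0
    for x in samples:
        # Exponentially forgetting correlations with the references.
        I = forget * I + x * math.cos(omega * t)
        Q = forget * Q + x * math.sin(omega * t)
        t += dt_hours
    return math.atan2(Q, I)
```

Because the sums are updated one sample at a time, the estimate is available continuously, which mirrors the online character of the filtering approach described above.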
1709.09612
Abhishek Dubey
Michael A. Walker, Abhishek Dubey, Aron Laszka, and Douglas C. Schmidt
PlaTIBART: a Platform for Transactive IoT Blockchain Applications with Repeatable Testing
Workshop on Middleware and Applications for the Internet of Things (M4IoT) 2017
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of blockchain-enabled IoT applications, there is an increased need for related software patterns, middleware concepts, and testing practices to ensure adequate quality and productivity. IoT and blockchain each provide different design goals, concepts, and practices that must be integrated, including the distributed actor model and fault tolerance from IoT, and transactive information integrity over untrustworthy sources from blockchain. Both IoT and blockchain are emerging technologies, and both lack codified patterns and practices for developing applications that combine them. This paper describes PlaTIBART, a platform for transactive IoT blockchain applications with repeatable testing that combines the Actor pattern (a commonly used model of computation in IoT) with a custom Domain Specific Language (DSL) and test network management tools. We show how PlaTIBART has been applied to develop, test, and analyze fault-tolerant IoT blockchain applications.
[ { "created": "Wed, 27 Sep 2017 16:38:03 GMT", "version": "v1" }, { "created": "Sat, 30 Sep 2017 01:44:46 GMT", "version": "v2" } ]
2017-10-03
[ [ "Walker", "Michael A.", "" ], [ "Dubey", "Abhishek", "" ], [ "Laszka", "Aron", "" ], [ "Schmidt", "Douglas C.", "" ] ]
With the advent of blockchain-enabled IoT applications, there is an increased need for related software patterns, middleware concepts, and testing practices to ensure adequate quality and productivity. IoT and blockchain each provide different design goals, concepts, and practices that must be integrated, including the distributed actor model and fault tolerance from IoT, and transactive information integrity over untrustworthy sources from blockchain. Both IoT and blockchain are emerging technologies, and both lack codified patterns and practices for developing applications that combine them. This paper describes PlaTIBART, a platform for transactive IoT blockchain applications with repeatable testing that combines the Actor pattern (a commonly used model of computation in IoT) with a custom Domain Specific Language (DSL) and test network management tools. We show how PlaTIBART has been applied to develop, test, and analyze fault-tolerant IoT blockchain applications.
1210.4662
AbdelRahman Karawia Dr.
A. A. Karawia
A New Recursive Algorithm For Inverting A General Comrade Matrix
null
null
null
null
cs.SC cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the author presents a reliable symbolic computational algorithm for inverting a general comrade matrix by using parallel computing along with recursion. The computational cost of the algorithm is O(n^2). The algorithm can be implemented in computer algebra systems (CAS) such as MAPLE, MATLAB, and MATHEMATICA. Three examples are presented for the sake of illustration.
[ { "created": "Wed, 17 Oct 2012 08:09:08 GMT", "version": "v1" } ]
2012-10-18
[ [ "Karawia", "A. A.", "" ] ]
In this paper, the author presents a reliable symbolic computational algorithm for inverting a general comrade matrix by using parallel computing along with recursion. The computational cost of the algorithm is O(n^2). The algorithm can be implemented in computer algebra systems (CAS) such as MAPLE, MATLAB, and MATHEMATICA. Three examples are presented for the sake of illustration.
2210.02524
Benjamin Biggs
Benjamin Biggs, Hans He, James McMahon and Daniel J. Stilwell
Experiments in Underwater Feature Tracking with Performance Guarantees Using a Small AUV
7 pages, 6 figures, 1 table, IROS 2022
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
We present the results of experiments performed using a small autonomous underwater vehicle to determine the location of an isobath within a bounded area. The primary contribution of this work is to implement and integrate several recent developments in real-time planning for environmental mapping, and to demonstrate their utility in a challenging practical example. We model the bathymetry within the operational area using a Gaussian process and propose a reward function that represents the task of mapping a desired isobath. As is common in applications where plans must be continually updated based on real-time sensor measurements, we adopt a receding-horizon framework where the vehicle continually computes near-optimal paths. The sequence of paths does not, in general, inherit the optimality properties of each individual path. Our real-time planning implementation incorporates recent results that lead to performance guarantees for receding-horizon planning.
[ { "created": "Wed, 5 Oct 2022 19:46:24 GMT", "version": "v1" } ]
2022-10-07
[ [ "Biggs", "Benjamin", "" ], [ "He", "Hans", "" ], [ "McMahon", "James", "" ], [ "Stilwell", "Daniel J.", "" ] ]
We present the results of experiments performed using a small autonomous underwater vehicle to determine the location of an isobath within a bounded area. The primary contribution of this work is to implement and integrate several recent developments in real-time planning for environmental mapping, and to demonstrate their utility in a challenging practical example. We model the bathymetry within the operational area using a Gaussian process and propose a reward function that represents the task of mapping a desired isobath. As is common in applications where plans must be continually updated based on real-time sensor measurements, we adopt a receding-horizon framework where the vehicle continually computes near-optimal paths. The sequence of paths does not, in general, inherit the optimality properties of each individual path. Our real-time planning implementation incorporates recent results that lead to performance guarantees for receding-horizon planning.
2401.13551
Hao Huang
Yongwei Nie, Hao Huang, Chengjiang Long, Qing Zhang, Pradipta Maji, Hongmin Cai
Interleaving One-Class and Weakly-Supervised Models with Adaptive Thresholding for Unsupervised Video Anomaly Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Without human annotations, a typical Unsupervised Video Anomaly Detection (UVAD) method needs to train two models that generate pseudo labels for each other. This raises two problems in previous work. First, the two models are closely entangled with each other, and it is not known how to upgrade their method without significantly modifying their training framework. Second, previous work usually adopts fixed thresholding to obtain pseudo labels; however, the user-specified threshold is not reliable, which inevitably introduces errors into the training process. To alleviate these two problems, we propose a novel interleaved framework that alternately trains a One-Class Classification (OCC) model and a Weakly-Supervised (WS) model for UVAD. The OCC or WS models in our method can be easily replaced with other OCC or WS models, which allows our method to be upgraded with the most recent developments in both fields. To handle the fixed-thresholding problem, we break through the conventional cognitive boundary and propose a weighted OCC model that can be trained on both normal and abnormal data. We also propose an adaptive mechanism for automatically finding the optimal threshold for the WS model in a loose-to-strict manner. Experiments demonstrate that the proposed UVAD method outperforms previous approaches.
[ { "created": "Wed, 24 Jan 2024 16:11:42 GMT", "version": "v1" } ]
2024-01-25
[ [ "Nie", "Yongwei", "" ], [ "Huang", "Hao", "" ], [ "Long", "Chengjiang", "" ], [ "Zhang", "Qing", "" ], [ "Maji", "Pradipta", "" ], [ "Cai", "Hongmin", "" ] ]
Without human annotations, a typical Unsupervised Video Anomaly Detection (UVAD) method needs to train two models that generate pseudo labels for each other. This raises two problems in previous work. First, the two models are closely entangled with each other, and it is not known how to upgrade their method without significantly modifying their training framework. Second, previous work usually adopts fixed thresholding to obtain pseudo labels; however, the user-specified threshold is not reliable, which inevitably introduces errors into the training process. To alleviate these two problems, we propose a novel interleaved framework that alternately trains a One-Class Classification (OCC) model and a Weakly-Supervised (WS) model for UVAD. The OCC or WS models in our method can be easily replaced with other OCC or WS models, which allows our method to be upgraded with the most recent developments in both fields. To handle the fixed-thresholding problem, we break through the conventional cognitive boundary and propose a weighted OCC model that can be trained on both normal and abnormal data. We also propose an adaptive mechanism for automatically finding the optimal threshold for the WS model in a loose-to-strict manner. Experiments demonstrate that the proposed UVAD method outperforms previous approaches.
2406.04770
Bill Yuchen Lin
Bill Yuchen Lin, Yuntian Deng, Khyathi Chandu, Faeze Brahman, Abhilasha Ravichander, Valentina Pyatkin, Nouha Dziri, Ronan Le Bras, Yejin Choi
WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
Link: https://hf.co/spaces/allenai/WildBench
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversation logs. For automated evaluation with WildBench, we have developed two metrics, WB-Reward and WB-Score, which are computable using advanced LLMs such as GPT-4-turbo. WildBench evaluation uses task-specific checklists to evaluate model outputs systematically and provides structured explanations that justify the scores and comparisons, resulting in more reliable and interpretable automatic judgments. WB-Reward employs fine-grained pairwise comparisons between model responses, generating five potential outcomes: much better, slightly better, slightly worse, much worse, or a tie. Unlike previous evaluations that employed a single baseline model, we selected three baseline models at varying performance levels to ensure a comprehensive pairwise evaluation. Additionally, we propose a simple method to mitigate length bias, by converting outcomes of ``slightly better/worse'' to ``tie'' if the winner response exceeds the loser one by more than $K$ characters. WB-Score evaluates the quality of model outputs individually, making it a fast and cost-efficient evaluation metric. WildBench results demonstrate a strong correlation with the human-voted Elo ratings from Chatbot Arena on hard tasks. Specifically, WB-Reward achieves a Pearson correlation of 0.98 with top-ranking models. Additionally, WB-Score reaches 0.95, surpassing both ArenaHard's 0.91 and AlpacaEval2.0's 0.89 for length-controlled win rates, as well as the 0.87 for regular win rates.
[ { "created": "Fri, 7 Jun 2024 09:15:44 GMT", "version": "v1" } ]
2024-06-10
[ [ "Lin", "Bill Yuchen", "" ], [ "Deng", "Yuntian", "" ], [ "Chandu", "Khyathi", "" ], [ "Brahman", "Faeze", "" ], [ "Ravichander", "Abhilasha", "" ], [ "Pyatkin", "Valentina", "" ], [ "Dziri", "Nouha", "" ], [ "Bras", "Ronan Le", "" ], [ "Choi", "Yejin", "" ] ]
We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversation logs. For automated evaluation with WildBench, we have developed two metrics, WB-Reward and WB-Score, which are computable using advanced LLMs such as GPT-4-turbo. WildBench evaluation uses task-specific checklists to evaluate model outputs systematically and provides structured explanations that justify the scores and comparisons, resulting in more reliable and interpretable automatic judgments. WB-Reward employs fine-grained pairwise comparisons between model responses, generating five potential outcomes: much better, slightly better, slightly worse, much worse, or a tie. Unlike previous evaluations that employed a single baseline model, we selected three baseline models at varying performance levels to ensure a comprehensive pairwise evaluation. Additionally, we propose a simple method to mitigate length bias, by converting outcomes of ``slightly better/worse'' to ``tie'' if the winner response exceeds the loser one by more than $K$ characters. WB-Score evaluates the quality of model outputs individually, making it a fast and cost-efficient evaluation metric. WildBench results demonstrate a strong correlation with the human-voted Elo ratings from Chatbot Arena on hard tasks. Specifically, WB-Reward achieves a Pearson correlation of 0.98 with top-ranking models. Additionally, WB-Score reaches 0.95, surpassing both ArenaHard's 0.91 and AlpacaEval2.0's 0.89 for length-controlled win rates, as well as the 0.87 for regular win rates.
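The length-bias mitigation described above is a simple post-processing rule on the judge's pairwise outcome. A minimal sketch (the function name and default threshold are illustrative assumptions, not taken from the WildBench codebase):

```python
def mitigate_length_bias(outcome, winner_len, loser_len, k=500):
    """Convert marginal wins to ties when the winner is much longer.

    outcome: one of "much better", "slightly better", "tie",
             "slightly worse", "much worse" (from the judge LLM).
    winner_len / loser_len: character counts of the winning and
    losing responses; k is the character-difference threshold.
    """
    marginal = outcome in ("slightly better", "slightly worse")
    if marginal and winner_len - loser_len > k:
        return "tie"  # the marginal win is attributed to verbosity
    return outcome
```

Only marginal outcomes are converted; decisive outcomes ("much better"/"much worse") are kept regardless of length, which matches the rule stated in the abstract.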
2403.08838
Rui Zhang
Rui Zhang, Hanyue Wu, Zhenzhong Yin, Zhu Xiao, Yong Xiong, and Kezhong Liu
Predictive Clustering of Vessel Behavior Based on Hierarchical Trajectory Representation
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vessel trajectory clustering, which aims to find similar trajectory patterns, has been widely leveraged in overwater applications. Most traditional methods use predefined rules and thresholds to identify discrete vessel behaviors. They aim for high-quality clustering and conduct clustering on entire sequences, whether the original trajectory or its sub-trajectories, failing to represent their evolution. To resolve this problem, we propose Predictive Clustering of Hierarchical Vessel Behavior (PC-HiV). PC-HiV first uses hierarchical representations to transform every trajectory into a behavioral sequence. Then, it predicts the evolution at each timestamp of the sequence based on the representations. By applying predictive clustering and latent encoding, PC-HiV improves clustering and prediction simultaneously. Experiments on real AIS datasets demonstrate PC-HiV's superiority over existing methods, showcasing its effectiveness in capturing behavioral evolution discrepancies between vessel types (tramp vs. liner) and within emission control areas. Results show that our method outperforms NN-Kmeans and Robust DAA by 3.9% and 6.4% in purity score, respectively.
[ { "created": "Wed, 13 Mar 2024 12:05:02 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 06:22:16 GMT", "version": "v2" } ]
2024-03-18
[ [ "Zhang", "Rui", "" ], [ "Wu", "Hanyue", "" ], [ "Yin", "Zhenzhong", "" ], [ "Xiao", "Zhu", "" ], [ "Xiong", "Yong", "" ], [ "Liu", "Kezhong", "" ] ]
Vessel trajectory clustering, which aims to find similar trajectory patterns, has been widely leveraged in overwater applications. Most traditional methods use predefined rules and thresholds to identify discrete vessel behaviors. They aim for high-quality clustering and conduct clustering on entire sequences, whether the original trajectory or its sub-trajectories, failing to represent their evolution. To resolve this problem, we propose Predictive Clustering of Hierarchical Vessel Behavior (PC-HiV). PC-HiV first uses hierarchical representations to transform every trajectory into a behavioral sequence. Then, it predicts the evolution at each timestamp of the sequence based on the representations. By applying predictive clustering and latent encoding, PC-HiV improves clustering and prediction simultaneously. Experiments on real AIS datasets demonstrate PC-HiV's superiority over existing methods, showcasing its effectiveness in capturing behavioral evolution discrepancies between vessel types (tramp vs. liner) and within emission control areas. Results show that our method outperforms NN-Kmeans and Robust DAA by 3.9% and 6.4% in purity score, respectively.
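The purity score reported above is a standard clustering-quality metric: each cluster is credited with the count of its most frequent ground-truth label, and the credits are summed and normalized. A minimal, library-free sketch (not the authors' evaluation code):

```python
from collections import Counter

def purity(cluster_ids, true_labels):
    """Purity = (1/N) * sum over clusters of the count of the
    majority ground-truth label within that cluster."""
    assert len(cluster_ids) == len(true_labels)
    members = {}
    for c, y in zip(cluster_ids, true_labels):
        members.setdefault(c, []).append(y)
    # For each cluster, count how many points carry its majority label.
    correct = sum(max(Counter(ys).values()) for ys in members.values())
    return correct / len(true_labels)
```

For example, the assignment `[0, 0, 1, 1]` against labels `['a', 'a', 'b', 'a']` yields purity 0.75: cluster 0 contributes 2 and cluster 1 contributes 1 out of 4 points.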
1811.03529
Mubariz Zaffar
Mubariz Zaffar, Shoaib Ehsan, Michael Milford and Klaus Mcdonald Maier
Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition
13 pages, 25 figures, 1 table
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a cognition-inspired, agnostic framework for building a map for Visual Place Recognition. The framework draws inspiration from human memorability, utilizes the traditional image entropy concept, and computes the static content in an image, thereby presenting a threefold criterion to assess the 'memorability' of an image for visual place recognition. A dataset named 'ESSEX3IN1' is created, composed of highly confusing images from indoor, outdoor, and natural scenes for analysis. When used in conjunction with state-of-the-art visual place recognition methods, the proposed framework provides a significant performance boost to these techniques, as evidenced by results on ESSEX3IN1 and other public datasets.
[ { "created": "Thu, 8 Nov 2018 16:18:50 GMT", "version": "v1" }, { "created": "Thu, 21 Mar 2019 16:21:49 GMT", "version": "v2" } ]
2019-03-22
[ [ "Zaffar", "Mubariz", "" ], [ "Ehsan", "Shoaib", "" ], [ "Milford", "Michael", "" ], [ "Maier", "Klaus Mcdonald", "" ] ]
This paper presents a cognition-inspired, agnostic framework for building a map for Visual Place Recognition. The framework draws inspiration from human memorability, utilizes the traditional image entropy concept, and computes the static content in an image, thereby presenting a threefold criterion to assess the 'memorability' of an image for visual place recognition. A dataset named 'ESSEX3IN1' is created, composed of highly confusing images from indoor, outdoor, and natural scenes for analysis. When used in conjunction with state-of-the-art visual place recognition methods, the proposed framework provides a significant performance boost to these techniques, as evidenced by results on ESSEX3IN1 and other public datasets.
1906.02037
Nan Wang
Yiyi Tao, Yiling Jia, Nan Wang, Hongning Wang
The FacT: Taming Latent Factor Models for Explainability with Factorization Trees
In proceedings of SIGIR'19
null
null
null
cs.IR cs.LG stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Latent factor models have achieved great success in personalized recommendations, but they are also notoriously difficult to explain. In this work, we integrate regression trees to guide the learning of latent factor models for recommendation, and use the learnt tree structure to explain the resulting latent factors. Specifically, we build regression trees on users and items respectively with user-generated reviews, and associate a latent profile with each node on the trees to represent users and items. As the regression trees grow, the latent factors are gradually refined under the regularization imposed by the tree structure. As a result, we are able to track the creation of latent profiles by looking into the path of each factor on the regression trees, which thus serves as an explanation for the resulting recommendations. Extensive experiments on two large collections of Amazon and Yelp reviews demonstrate the advantage of our model over several competitive baseline algorithms. Besides, our extensive user study also confirms the practical value of the explainable recommendations generated by our model.
[ { "created": "Mon, 3 Jun 2019 20:31:57 GMT", "version": "v1" } ]
2019-06-06
[ [ "Tao", "Yiyi", "" ], [ "Jia", "Yiling", "" ], [ "Wang", "Nan", "" ], [ "Wang", "Hongning", "" ] ]
Latent factor models have achieved great success in personalized recommendations, but they are also notoriously difficult to explain. In this work, we integrate regression trees to guide the learning of latent factor models for recommendation, and use the learnt tree structure to explain the resulting latent factors. Specifically, we build regression trees on users and items respectively with user-generated reviews, and associate a latent profile with each node on the trees to represent users and items. As the regression trees grow, the latent factors are gradually refined under the regularization imposed by the tree structure. As a result, we are able to track the creation of latent profiles by looking into the path of each factor on the regression trees, which thus serves as an explanation for the resulting recommendations. Extensive experiments on two large collections of Amazon and Yelp reviews demonstrate the advantage of our model over several competitive baseline algorithms. Besides, our extensive user study also confirms the practical value of the explainable recommendations generated by our model.
2207.09608
No\"elle Rakotondravony
No\"elle Rakotondravony, Yiren Ding, and Lane Harrison
Probablement, Wahrscheinlich, Likely ? A Cross-Language Study of How People Verbalize Probabilities in Icon Array Visualizations
11 pages, 10 figures, conference paper
IEEE Transactions on Visualization and Computer Graphics 2023
10.1109/TVCG.2022.3209367
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visualizations today are used across a wide range of languages and cultures. Yet the extent to which language impacts how we reason about data and visualizations remains unclear. In this paper, we explore the intersection of visualization and language through a cross-language study on estimative probability tasks with icon-array visualizations. Across Arabic, English, French, German, and Mandarin, n = 50 participants per language both chose probability expressions - e.g. likely, probable - to describe icon-array visualizations (Vis-to-Expression), and drew icon-array visualizations to match a given expression (Expression-to-Vis). Results suggest that there is no clear one-to-one mapping of probability expressions and associated visual ranges between languages. Several translated expressions fell significantly above or below the range of the corresponding English expressions. Compared to other languages, French and German respondents appear to exhibit high levels of consistency between the visualizations they drew and the words they chose. Participants across languages used similar words when describing scenarios above 80% chance, with more variance in expressions targeting mid-range and lower values. We discuss how these results suggest potential differences in the expressiveness of language as it relates to visualization interpretation and design goals, as well as practical implications for translation efforts and future studies at the intersection of languages, culture, and visualization. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/g5d4r/.
[ { "created": "Wed, 20 Jul 2022 01:13:51 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2022 23:52:08 GMT", "version": "v2" }, { "created": "Mon, 2 Oct 2023 21:41:46 GMT", "version": "v3" } ]
2023-10-04
[ [ "Rakotondravony", "Noëlle", "" ], [ "Ding", "Yiren", "" ], [ "Harrison", "Lane", "" ] ]
Visualizations today are used across a wide range of languages and cultures. Yet the extent to which language impacts how we reason about data and visualizations remains unclear. In this paper, we explore the intersection of visualization and language through a cross-language study on estimative probability tasks with icon-array visualizations. Across Arabic, English, French, German, and Mandarin, n = 50 participants per language both chose probability expressions - e.g. likely, probable - to describe icon-array visualizations (Vis-to-Expression), and drew icon-array visualizations to match a given expression (Expression-to-Vis). Results suggest that there is no clear one-to-one mapping of probability expressions and associated visual ranges between languages. Several translated expressions fell significantly above or below the range of the corresponding English expressions. Compared to other languages, French and German respondents appear to exhibit high levels of consistency between the visualizations they drew and the words they chose. Participants across languages used similar words when describing scenarios above 80% chance, with more variance in expressions targeting mid-range and lower values. We discuss how these results suggest potential differences in the expressiveness of language as it relates to visualization interpretation and design goals, as well as practical implications for translation efforts and future studies at the intersection of languages, culture, and visualization. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/g5d4r/.
2208.09635
Li Fan
Li Fan and Jianchang Liu
Mobile Robot Navigation in Complex Polygonal Workspaces Using Conformal Navigation Transformations
arXiv admin note: substantial text overlap with arXiv:2208.06876
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a novel transformation, termed the conformal navigation transformation, to achieve collision-free navigation of a robot in a workspace populated with arbitrary polygonal obstacles. The properties of the conformal navigation transformation in the polygonal workspace are investigated in this work, as well as its capability to provide a solution to the navigation problem. The definition of the navigation function is generalized to accommodate non-smooth obstacle boundaries. Based on the proposed transformation and the generalized navigation function, a provably correct feedback controller is derived for the automatic guidance and motion control of the kinematic mobile robot. Moreover, an iterative method is proposed to construct the conformal navigation transformation in a multi-connected polygonal workspace, which transforms the multi-connected problem into multiple single-connected problems to achieve fast convergence. In addition to the analytic guarantees, the simulation study verifies the effectiveness of the proposed methodology in a workspace with non-trivial polygonal obstacles.
[ { "created": "Sat, 20 Aug 2022 08:29:49 GMT", "version": "v1" } ]
2022-08-23
[ [ "Fan", "Li", "" ], [ "Liu", "Jianchang", "" ] ]
This work proposes a novel transformation, termed the conformal navigation transformation, to achieve collision-free navigation of a robot in a workspace populated with arbitrary polygonal obstacles. The properties of the conformal navigation transformation in the polygonal workspace are investigated in this work, as well as its capability to provide a solution to the navigation problem. The definition of the navigation function is generalized to accommodate non-smooth obstacle boundaries. Based on the proposed transformation and the generalized navigation function, a provably correct feedback controller is derived for the automatic guidance and motion control of the kinematic mobile robot. Moreover, an iterative method is proposed to construct the conformal navigation transformation in a multi-connected polygonal workspace, which transforms the multi-connected problem into multiple single-connected problems to achieve fast convergence. In addition to the analytic guarantees, the simulation study verifies the effectiveness of the proposed methodology in a workspace with non-trivial polygonal obstacles.
2001.07295
Clayton Morrison
Adarsh Pyarelal and Marco A. Valenzuela-Escarcega and Rebecca Sharp and Paul D. Hein, Jon Stephens, Pratik Bhandari, HeuiChan Lim, Saumya Debray, Clayton T. Morrison
AutoMATES: Automated Model Assembly from Text, Equations, and Software
8 pages, 6 figures, accepted to Modeling the World's Systems 2019
null
null
null
cs.AI cs.MM cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models of complicated systems can be represented in different ways - in scientific papers, they are represented using natural language text as well as equations. But to be of real use, they must also be implemented as software, thus making code a third form of representing models. We introduce the AutoMATES project, which aims to build semantically-rich unified representations of models from scientific code and publications to facilitate the integration of computational models from different domains and allow for modeling large, complicated systems that span multiple domains and levels of abstraction.
[ { "created": "Tue, 21 Jan 2020 00:33:40 GMT", "version": "v1" } ]
2020-01-22
[ [ "Pyarelal", "Adarsh", "" ], [ "Valenzuela-Escarcega", "Marco A.", "" ], [ "Sharp", "Rebecca", "" ], [ "Hein", "Paul D.", "" ], [ "Stephens", "Jon", "" ], [ "Bhandari", "Pratik", "" ], [ "Lim", "HeuiChan", "" ], [ "Debray", "Saumya", "" ], [ "Morrison", "Clayton T.", "" ] ]
Models of complicated systems can be represented in different ways - in scientific papers, they are represented using natural language text as well as equations. But to be of real use, they must also be implemented as software, thus making code a third form of representing models. We introduce the AutoMATES project, which aims to build semantically-rich unified representations of models from scientific code and publications to facilitate the integration of computational models from different domains and allow for modeling large, complicated systems that span multiple domains and levels of abstraction.
2109.04300
Ruoxi Shi
Ruoxi Shi, Borui Yang, Yangzhou Jiang, Chenglong Zhao, Bingbing Ni
Energy Attack: On Transferring Adversarial Examples
Under Review for AAAI-22
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose Energy Attack, a transfer-based black-box $L_\infty$-adversarial attack. The attack is parameter-free and does not require gradient approximation. In particular, we first obtain white-box adversarial perturbations of a surrogate model and divide these perturbations into small patches. Then we extract the unit component vectors and eigenvalues of these patches with principal component analysis (PCA). Based on the eigenvalues, we can model the energy distribution of adversarial perturbations. We then perform black-box attacks by sampling from the perturbation patches according to their energy distribution, and tiling the sampled patches to form a full-size adversarial perturbation. This can be done without access to victim models. Extensive experiments demonstrate that the proposed Energy Attack achieves state-of-the-art performance in black-box attacks on various models and several datasets. Moreover, the extracted distribution is able to transfer among different model architectures and different datasets, and is therefore intrinsic to vision architectures.
[ { "created": "Thu, 9 Sep 2021 14:23:48 GMT", "version": "v1" } ]
2021-09-10
[ [ "Shi", "Ruoxi", "" ], [ "Yang", "Borui", "" ], [ "Jiang", "Yangzhou", "" ], [ "Zhao", "Chenglong", "" ], [ "Ni", "Bingbing", "" ] ]
In this work we propose Energy Attack, a transfer-based black-box $L_\infty$-adversarial attack. The attack is parameter-free and does not require gradient approximation. In particular, we first obtain white-box adversarial perturbations of a surrogate model and divide these perturbations into small patches. Then we extract the unit component vectors and eigenvalues of these patches with principal component analysis (PCA). Based on the eigenvalues, we can model the energy distribution of adversarial perturbations. We then perform black-box attacks by sampling from the perturbation patches according to their energy distribution, and tiling the sampled patches to form a full-size adversarial perturbation. This can be done without access to victim models. Extensive experiments demonstrate that the proposed Energy Attack achieves state-of-the-art performance in black-box attacks on various models and several datasets. Moreover, the extracted distribution is able to transfer among different model architectures and different datasets, and is therefore intrinsic to vision architectures.
2407.15199
Ilya Ilyankou
Jingwei Guo, Meihui Wang, Ilya Ilyankou, Natchapon Jongwiriyanurak, Xiaowei Gao, Nicola Christie, James Haworth
Multiple Object Detection and Tracking in Panoramic Videos for Cycling Safety Analysis
null
null
null
null
cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
Panoramic cycling videos can record 360{\deg} views around the cyclists. Thus, it is essential to conduct automatic road user analysis on them using computer vision models to provide data for studies on cycling safety. However, features of panoramic data such as severe distortions, a large number of small objects, and boundary continuity pose great challenges to existing CV models, including poor performance and evaluation methods that are no longer applicable. In addition, due to the lack of data with annotations, it is not easy to re-train the models. In response to these problems, the project proposed and implemented a three-step methodology: (1) improve the prediction performance of the pre-trained object detection models on panoramic data by projecting the original image into 4 perspective sub-images; (2) introduce support for boundary continuity and category information into DeepSORT, a commonly used multiple object tracking model, and set an improved detection model as its detector; (3) using the tracking results, develop an application for detecting the overtaking behaviour of the surrounding vehicles. Evaluated on the panoramic cycling dataset built by the project, the proposed methodology improves the average precision of YOLO v5m6 and Faster RCNN-FPN under any input resolution setting. In addition, it raises MOTA and IDF1 of DeepSORT by 7.6\% and 9.7\% respectively. When detecting the overtakes in the test videos, it achieves an F-score of 0.88. The code is available on GitHub at github.com/cuppp1998/360_object_tracking to ensure the reproducibility and further improvements of results.
[ { "created": "Sun, 21 Jul 2024 15:37:55 GMT", "version": "v1" } ]
2024-07-23
[ [ "Guo", "Jingwei", "" ], [ "Wang", "Meihui", "" ], [ "Ilyankou", "Ilya", "" ], [ "Jongwiriyanurak", "Natchapon", "" ], [ "Gao", "Xiaowei", "" ], [ "Christie", "Nicola", "" ], [ "Haworth", "James", "" ] ]
Panoramic cycling videos can record 360{\deg} views around the cyclists. Thus, it is essential to conduct automatic road user analysis on them using computer vision models to provide data for studies on cycling safety. However, features of panoramic data such as severe distortions, a large number of small objects, and boundary continuity pose great challenges to existing CV models, including poor performance and evaluation methods that are no longer applicable. In addition, due to the lack of data with annotations, it is not easy to re-train the models. In response to these problems, the project proposed and implemented a three-step methodology: (1) improve the prediction performance of the pre-trained object detection models on panoramic data by projecting the original image into 4 perspective sub-images; (2) introduce support for boundary continuity and category information into DeepSORT, a commonly used multiple object tracking model, and set an improved detection model as its detector; (3) using the tracking results, develop an application for detecting the overtaking behaviour of the surrounding vehicles. Evaluated on the panoramic cycling dataset built by the project, the proposed methodology improves the average precision of YOLO v5m6 and Faster RCNN-FPN under any input resolution setting. In addition, it raises MOTA and IDF1 of DeepSORT by 7.6\% and 9.7\% respectively. When detecting the overtakes in the test videos, it achieves an F-score of 0.88. The code is available on GitHub at github.com/cuppp1998/360_object_tracking to ensure the reproducibility and further improvements of results.
2208.10694
Jinpeng Li
Penghua Zhai, Enwei Zhu, Baolian Qi, Xin Wei, Jinpeng Li
Spiral Contrastive Learning: An Efficient 3D Representation Learning Method for Unannotated CT Lesions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computed tomography (CT) samples with pathological annotations are difficult to obtain. As a result, computer-aided diagnosis (CAD) algorithms are trained on small datasets (e.g., LIDC-IDRI with 1,018 samples), limiting their accuracy and reliability. In the past five years, several works have tailored unsupervised representations of CT lesions via two-dimensional (2D) and three-dimensional (3D) self-supervised learning (SSL) algorithms. The 2D algorithms have difficulty capturing 3D information, and existing 3D algorithms are computationally heavy. Lightweight 3D SSL remains the boundary to explore. In this paper, we propose spiral contrastive learning (SCL), which yields 3D representations in a computationally efficient manner. SCL first transforms 3D lesions to the 2D plane using an information-preserving spiral transformation, and then learns transformation-invariant features using 2D contrastive learning. For the augmentation, we consider natural image augmentations and medical image augmentations. We evaluate SCL by training a classification head upon the embedding layer. Experimental results show that SCL achieves state-of-the-art accuracy on LIDC-IDRI (89.72%), LNDb (82.09%) and TianChi (90.16%) for unsupervised representation learning. With 10% annotated data for fine-tuning, the performance of SCL is comparable to that of supervised learning algorithms (85.75% vs. 85.03% on LIDC-IDRI, 78.20% vs. 73.44% on LNDb and 87.85% vs. 83.34% on TianChi, respectively). Meanwhile, SCL reduces the computational effort by 66.98% compared to other 3D SSL algorithms, demonstrating the efficiency of the proposed method in unsupervised pre-training.
[ { "created": "Tue, 23 Aug 2022 02:31:03 GMT", "version": "v1" } ]
2022-08-24
[ [ "Zhai", "Penghua", "" ], [ "Zhu", "Enwei", "" ], [ "Qi", "Baolian", "" ], [ "Wei", "Xin", "" ], [ "Li", "Jinpeng", "" ] ]
Computed tomography (CT) samples with pathological annotations are difficult to obtain. As a result, computer-aided diagnosis (CAD) algorithms are trained on small datasets (e.g., LIDC-IDRI with 1,018 samples), limiting their accuracy and reliability. In the past five years, several works have tailored unsupervised representations of CT lesions via two-dimensional (2D) and three-dimensional (3D) self-supervised learning (SSL) algorithms. The 2D algorithms have difficulty capturing 3D information, and existing 3D algorithms are computationally heavy. Lightweight 3D SSL remains the boundary to explore. In this paper, we propose spiral contrastive learning (SCL), which yields 3D representations in a computationally efficient manner. SCL first transforms 3D lesions to the 2D plane using an information-preserving spiral transformation, and then learns transformation-invariant features using 2D contrastive learning. For the augmentation, we consider natural image augmentations and medical image augmentations. We evaluate SCL by training a classification head upon the embedding layer. Experimental results show that SCL achieves state-of-the-art accuracy on LIDC-IDRI (89.72%), LNDb (82.09%) and TianChi (90.16%) for unsupervised representation learning. With 10% annotated data for fine-tuning, the performance of SCL is comparable to that of supervised learning algorithms (85.75% vs. 85.03% on LIDC-IDRI, 78.20% vs. 73.44% on LNDb and 87.85% vs. 83.34% on TianChi, respectively). Meanwhile, SCL reduces the computational effort by 66.98% compared to other 3D SSL algorithms, demonstrating the efficiency of the proposed method in unsupervised pre-training.
2009.01385
Zohreh Azizi
Zohreh Azizi, Xuejing Lei, and C.-C Jay Kuo
Noise-Aware Texture-Preserving Low-Light Enhancement
Accepted by IEEE VCIP 2020. The final version will appear in IEEE VCIP 2020
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple and effective low-light image enhancement method based on a noise-aware texture-preserving retinex model is proposed in this work. The new method, called NATLE, attempts to strike a balance between noise removal and natural texture preservation through a low-complexity solution. Its cost function includes an estimated piece-wise smooth illumination map and a noise-free texture-preserving reflectance map. Afterwards, illumination is adjusted to form the enhanced image together with the reflectance map. Extensive experiments are conducted on common low-light image enhancement datasets to demonstrate the superior performance of NATLE.
[ { "created": "Wed, 2 Sep 2020 23:30:03 GMT", "version": "v1" } ]
2020-09-04
[ [ "Azizi", "Zohreh", "" ], [ "Lei", "Xuejing", "" ], [ "Kuo", "C. -C Jay", "" ] ]
A simple and effective low-light image enhancement method based on a noise-aware texture-preserving retinex model is proposed in this work. The new method, called NATLE, attempts to strike a balance between noise removal and natural texture preservation through a low-complexity solution. Its cost function includes an estimated piece-wise smooth illumination map and a noise-free texture-preserving reflectance map. Afterwards, illumination is adjusted to form the enhanced image together with the reflectance map. Extensive experiments are conducted on common low-light image enhancement datasets to demonstrate the superior performance of NATLE.
1908.01463
Mohammadamin Baniasadi
Mohammadamin Baniasadi, and Ertem Tuncel
Minimum Energy Analysis for Robust Gaussian Joint Source-Channel Coding with a Square-Law Profile
Distortion-noise profile, fidelity-quality profile, energy-distortion tradeoff, energy-limited transmission, joint source-channel coding
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distortion-noise profile is a function indicating the maximum allowed source distortion value for each noise level in the channel. In this paper, the minimum energy required to achieve a distortion-noise profile is studied for Gaussian sources which are transmitted robustly over Gaussian channels. We provide improved lower and upper bounds for the minimum energy behavior of the square-law profile using a family of lower bounds and our proposed coding scheme.
[ { "created": "Mon, 5 Aug 2019 04:32:34 GMT", "version": "v1" }, { "created": "Wed, 27 Nov 2019 18:21:16 GMT", "version": "v2" } ]
2019-11-28
[ [ "Baniasadi", "Mohammadamin", "" ], [ "Tuncel", "Ertem", "" ] ]
A distortion-noise profile is a function indicating the maximum allowed source distortion value for each noise level in the channel. In this paper, the minimum energy required to achieve a distortion-noise profile is studied for Gaussian sources which are transmitted robustly over Gaussian channels. We provide improved lower and upper bounds for the minimum energy behavior of the square-law profile using a family of lower bounds and our proposed coding scheme.
1712.05512
Saptarshi Sengupta
Saptarshi Sengupta, Sanchita Basak, Richard Alan Peters II
Data Clustering using a Hybrid of Fuzzy C-Means and Quantum-behaved Particle Swarm Optimization
6 pages, 6 figures, 6 tables
null
10.1109/CCWC.2018.8301693
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fuzzy clustering has become a widely used data mining technique and plays an important role in grouping, traversing and selectively using data for user specified applications. The deterministic Fuzzy C-Means (FCM) algorithm may result in suboptimal solutions when applied to multidimensional data in real-world, time-constrained problems. In this paper the Quantum-behaved Particle Swarm Optimization (QPSO) with a fully connected topology is coupled with the Fuzzy C-Means Clustering algorithm and is tested on a suite of datasets from the UCI Machine Learning Repository. The global search ability of the QPSO algorithm helps in avoiding stagnation in local optima while the soft clustering approach of FCM helps to partition data based on membership probabilities. Clustering performance indices such as F-Measure, Accuracy, Quantization Error, Intercluster and Intracluster distances are reported for competitive techniques such as PSO K-Means, QPSO K-Means and QPSO FCM over all datasets considered. Experimental results indicate that QPSO FCM provides comparable and in most cases superior results when compared to the others.
[ { "created": "Fri, 15 Dec 2017 02:47:57 GMT", "version": "v1" } ]
2018-10-23
[ [ "Sengupta", "Saptarshi", "" ], [ "Basak", "Sanchita", "" ], [ "Peters", "Richard Alan", "II" ] ]
Fuzzy clustering has become a widely used data mining technique and plays an important role in grouping, traversing and selectively using data for user specified applications. The deterministic Fuzzy C-Means (FCM) algorithm may result in suboptimal solutions when applied to multidimensional data in real-world, time-constrained problems. In this paper the Quantum-behaved Particle Swarm Optimization (QPSO) with a fully connected topology is coupled with the Fuzzy C-Means Clustering algorithm and is tested on a suite of datasets from the UCI Machine Learning Repository. The global search ability of the QPSO algorithm helps in avoiding stagnation in local optima while the soft clustering approach of FCM helps to partition data based on membership probabilities. Clustering performance indices such as F-Measure, Accuracy, Quantization Error, Intercluster and Intracluster distances are reported for competitive techniques such as PSO K-Means, QPSO K-Means and QPSO FCM over all datasets considered. Experimental results indicate that QPSO FCM provides comparable and in most cases superior results when compared to the others.
1702.05664
Abhishek Kolagunda
Abhishek Kolagunda, Scott Sorensen, Philip Saponaro, Wayne Treible and Chandra Kambhamettu
Robust Shape Registration using Fuzzy Correspondences
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape registration is the process of aligning one 3D model to another. Most previous methods to align shapes with no known correspondences attempt to solve for both the transformation and correspondences iteratively. We present a shape registration approach that solves for the transformation using fuzzy correspondences to maximize the overlap between the given shape and the target shape. A coarse to fine approach with Levenberg-Marquardt method is used for optimization. Real and synthetic experiments show our approach is robust and outperforms other state of the art methods when point clouds are noisy, sparse, and have non-uniform density. Experiments show our method is more robust to initialization and can handle larger scale changes and rotation than other methods. We also show that the approach can be used for 2D-3D alignment via ray-point alignment.
[ { "created": "Sat, 18 Feb 2017 22:22:57 GMT", "version": "v1" } ]
2017-02-21
[ [ "Kolagunda", "Abhishek", "" ], [ "Sorensen", "Scott", "" ], [ "Saponaro", "Philip", "" ], [ "Treible", "Wayne", "" ], [ "Kambhamettu", "Chandra", "" ] ]
Shape registration is the process of aligning one 3D model to another. Most previous methods to align shapes with no known correspondences attempt to solve for both the transformation and correspondences iteratively. We present a shape registration approach that solves for the transformation using fuzzy correspondences to maximize the overlap between the given shape and the target shape. A coarse to fine approach with Levenberg-Marquardt method is used for optimization. Real and synthetic experiments show our approach is robust and outperforms other state of the art methods when point clouds are noisy, sparse, and have non-uniform density. Experiments show our method is more robust to initialization and can handle larger scale changes and rotation than other methods. We also show that the approach can be used for 2D-3D alignment via ray-point alignment.
2103.01216
Yan Gu
Yan Gu, Omar Obeya, Julian Shun
Parallel In-Place Algorithms: Theory and Practice
null
null
null
null
cs.DC cs.DS cs.PF
http://creativecommons.org/licenses/by/4.0/
Many parallel algorithms use at least linear auxiliary space in the size of the input to enable computations to be done independently without conflicts. Unfortunately, this extra space can be prohibitive for memory-limited machines, preventing large inputs from being processed. Therefore, it is desirable to design parallel in-place algorithms that use sublinear (or even polylogarithmic) auxiliary space. In this paper, we bridge the gap between theory and practice for parallel in-place (PIP) algorithms. We first define two computational models based on fork-join parallelism, which reflect modern parallel programming environments. We then introduce a variety of new parallel in-place algorithms that are simple and efficient, both in theory and in practice. Our algorithmic highlight is the Decomposable Property introduced in this paper, which enables existing non-in-place but highly-optimized parallel algorithms to be converted into parallel in-place algorithms. Using this property, we obtain algorithms for random permutation, list contraction, tree contraction, and merging that take linear work, $O(n^{1-\epsilon})$ auxiliary space, and $O(n^\epsilon\cdot\text{polylog}(n))$ span for $0<\epsilon<1$. We also present new parallel in-place algorithms for scan, filter, merge, connectivity, biconnectivity, and minimum spanning forest using other techniques. In addition to theoretical results, we present experimental results for implementations of many of our parallel in-place algorithms. We show that on a 72-core machine with two-way hyper-threading, the parallel in-place algorithms usually outperform existing parallel algorithms for the same problems that use linear auxiliary space, indicating that the theory developed in this paper indeed leads to practical benefits in terms of both space usage and running time.
[ { "created": "Mon, 1 Mar 2021 18:59:05 GMT", "version": "v1" } ]
2021-03-02
[ [ "Gu", "Yan", "" ], [ "Obeya", "Omar", "" ], [ "Shun", "Julian", "" ] ]
Many parallel algorithms use at least linear auxiliary space in the size of the input to enable computations to be done independently without conflicts. Unfortunately, this extra space can be prohibitive for memory-limited machines, preventing large inputs from being processed. Therefore, it is desirable to design parallel in-place algorithms that use sublinear (or even polylogarithmic) auxiliary space. In this paper, we bridge the gap between theory and practice for parallel in-place (PIP) algorithms. We first define two computational models based on fork-join parallelism, which reflect modern parallel programming environments. We then introduce a variety of new parallel in-place algorithms that are simple and efficient, both in theory and in practice. Our algorithmic highlight is the Decomposable Property introduced in this paper, which enables existing non-in-place but highly-optimized parallel algorithms to be converted into parallel in-place algorithms. Using this property, we obtain algorithms for random permutation, list contraction, tree contraction, and merging that take linear work, $O(n^{1-\epsilon})$ auxiliary space, and $O(n^\epsilon\cdot\text{polylog}(n))$ span for $0<\epsilon<1$. We also present new parallel in-place algorithms for scan, filter, merge, connectivity, biconnectivity, and minimum spanning forest using other techniques. In addition to theoretical results, we present experimental results for implementations of many of our parallel in-place algorithms. We show that on a 72-core machine with two-way hyper-threading, the parallel in-place algorithms usually outperform existing parallel algorithms for the same problems that use linear auxiliary space, indicating that the theory developed in this paper indeed leads to practical benefits in terms of both space usage and running time.
2406.15182
Shuang Wu
Yingying Fang, Shuang Wu, Zihao Jin, Caiwen Xu, Shiyi Wang, Simon Walsh, Guang Yang
DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation
MICCAI 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the field of medical imaging, particularly in tasks related to early disease detection and prognosis, understanding the reasoning behind AI model predictions is imperative for assessing their reliability. Conventional explanation methods encounter challenges in identifying decisive features in medical image classifications, especially when discriminative features are subtle or not immediately evident. To address this limitation, we propose an agent model capable of generating counterfactual images that prompt different decisions when plugged into a black box model. By employing this agent model, we can uncover influential image patterns that impact the black box model's final predictions. Through our methodology, we efficiently identify features that influence decisions of the deep black box. We validated our approach in the rigorous domain of medical prognosis tasks, showcasing its efficacy and potential to enhance the reliability of deep learning models in medical image classification compared to existing interpretation methods. The code will be publicly available at https://github.com/ayanglab/DiffExplainer.
[ { "created": "Fri, 21 Jun 2024 14:27:02 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2024 03:54:50 GMT", "version": "v2" } ]
2024-06-28
[ [ "Fang", "Yingying", "" ], [ "Wu", "Shuang", "" ], [ "Jin", "Zihao", "" ], [ "Xu", "Caiwen", "" ], [ "Wang", "Shiyi", "" ], [ "Walsh", "Simon", "" ], [ "Yang", "Guang", "" ] ]
In the field of medical imaging, particularly in tasks related to early disease detection and prognosis, understanding the reasoning behind AI model predictions is imperative for assessing their reliability. Conventional explanation methods encounter challenges in identifying decisive features in medical image classifications, especially when discriminative features are subtle or not immediately evident. To address this limitation, we propose an agent model capable of generating counterfactual images that prompt different decisions when plugged into a black box model. By employing this agent model, we can uncover influential image patterns that impact the black box model's final predictions. Through our methodology, we efficiently identify features that influence decisions of the deep black box. We validated our approach in the rigorous domain of medical prognosis tasks, showcasing its efficacy and potential to enhance the reliability of deep learning models in medical image classification compared to existing interpretation methods. The code will be publicly available at https://github.com/ayanglab/DiffExplainer.
1611.04928
Yong Cheng
Yong Cheng, Yang Liu, Qian Yang, Maosong Sun and Wei Xu
Neural Machine Translation with Pivot Languages
fix experiments and revise the paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While recent neural machine translation approaches have delivered state-of-the-art performance for resource-rich language pairs, they suffer from the data scarcity problem for resource-scarce language pairs. Although this problem can be alleviated by exploiting a pivot language to bridge the source and target languages, the source-to-pivot and pivot-to-target translation models are usually independently trained. In this work, we introduce a joint training algorithm for pivot-based neural machine translation. We propose three methods to connect the two models and enable them to interact with each other during training. Experiments on Europarl and WMT corpora show that joint training of source-to-pivot and pivot-to-target models leads to significant improvements over independent training across various languages.
[ { "created": "Tue, 15 Nov 2016 16:44:54 GMT", "version": "v1" }, { "created": "Tue, 21 Feb 2017 04:13:38 GMT", "version": "v2" } ]
2017-02-22
[ [ "Cheng", "Yong", "" ], [ "Liu", "Yang", "" ], [ "Yang", "Qian", "" ], [ "Sun", "Maosong", "" ], [ "Xu", "Wei", "" ] ]
While recent neural machine translation approaches have delivered state-of-the-art performance for resource-rich language pairs, they suffer from the data scarcity problem for resource-scarce language pairs. Although this problem can be alleviated by exploiting a pivot language to bridge the source and target languages, the source-to-pivot and pivot-to-target translation models are usually independently trained. In this work, we introduce a joint training algorithm for pivot-based neural machine translation. We propose three methods to connect the two models and enable them to interact with each other during training. Experiments on Europarl and WMT corpora show that joint training of source-to-pivot and pivot-to-target models leads to significant improvements over independent training across various languages.
2210.09925
Tegan Maharaj
Tegan Maharaj
Generalizing in the Real World with Representation Learning
PhD Thesis, Montreal Polytechnic
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine learning (ML) formalizes the problem of getting computers to learn from experience as optimization of performance according to some metric(s) on a set of data examples. This is in contrast to requiring behaviour specified in advance (e.g. by hard-coded rules). Formalization of this problem has enabled great progress in many applications with large real-world impact, including translation, speech recognition, self-driving cars, and drug discovery. But practical instantiations of this formalism make many assumptions - for example, that data are i.i.d.: independent and identically distributed - whose soundness is seldom investigated. And in making great progress in such a short time, the field has developed many norms and ad-hoc standards, focused on a relatively small range of problem settings. As applications of ML, particularly in artificial intelligence (AI) systems, become more pervasive in the real world, we need to critically examine these assumptions, norms, and problem settings, as well as the methods that have become de-facto standards. There is much we still do not understand about how and why deep networks trained with stochastic gradient descent are able to generalize as well as they do, why they fail when they do, and how they will perform on out-of-distribution data. In this thesis I cover some of my work towards better understanding deep net generalization, identify several ways assumptions and problem settings fail to generalize to the real world, and propose ways to address those failures in practice.
[ { "created": "Tue, 18 Oct 2022 15:11:09 GMT", "version": "v1" } ]
2022-10-19
[ [ "Maharaj", "Tegan", "" ] ]
Machine learning (ML) formalizes the problem of getting computers to learn from experience as optimization of performance according to some metric(s) on a set of data examples. This is in contrast to requiring behaviour specified in advance (e.g. by hard-coded rules). Formalization of this problem has enabled great progress in many applications with large real-world impact, including translation, speech recognition, self-driving cars, and drug discovery. But practical instantiations of this formalism make many assumptions - for example, that data are i.i.d.: independent and identically distributed - whose soundness is seldom investigated. And in making great progress in such a short time, the field has developed many norms and ad-hoc standards, focused on a relatively small range of problem settings. As applications of ML, particularly in artificial intelligence (AI) systems, become more pervasive in the real world, we need to critically examine these assumptions, norms, and problem settings, as well as the methods that have become de-facto standards. There is much we still do not understand about how and why deep networks trained with stochastic gradient descent are able to generalize as well as they do, why they fail when they do, and how they will perform on out-of-distribution data. In this thesis I cover some of my work towards better understanding deep net generalization, identify several ways assumptions and problem settings fail to generalize to the real world, and propose ways to address those failures in practice.
2102.01211
Stefano Arrigoni
Stefano Arrigoni, Francesco Braghin, Federico Cheli
MPC path-planner for autonomous driving solved by genetic algorithm technique
null
null
10.1080/00423114.2021.1999991
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Autonomous vehicle technology is expected to be disruptive for the automotive industry in the coming years. This paper proposes a novel real-time trajectory planner based on a Nonlinear Model Predictive Control (NMPC) algorithm. A nonlinear single track vehicle model with Pacejka's lateral tyre formulas has been implemented. The numerical solution of the NMPC problem is obtained by means of the implementation of a novel genetic algorithm strategy. Numerical results are discussed through simulations that show reasonable behavior of the proposed strategy in the presence of static or moving obstacles as well as in a wide range of road friction conditions. Moreover, a real-time implementation is made possible by the reported computational time analysis.
[ { "created": "Mon, 1 Feb 2021 22:24:26 GMT", "version": "v1" } ]
2022-02-15
[ [ "Arrigoni", "Stefano", "" ], [ "Braghin", "Francesco", "" ], [ "Cheli", "Federico", "" ] ]
Autonomous vehicle technology is expected to be disruptive for the automotive industry in the coming years. This paper proposes a novel real-time trajectory planner based on a Nonlinear Model Predictive Control (NMPC) algorithm. A nonlinear single track vehicle model with Pacejka's lateral tyre formulas has been implemented. The numerical solution of the NMPC problem is obtained by means of the implementation of a novel genetic algorithm strategy. Numerical results are discussed through simulations that show a reasonable behavior of the proposed strategy in the presence of static or moving obstacles as well as in a wide range of road friction conditions. Moreover, a real-time implementation is made possible by the reported computational time analysis.
2203.02177
Zheng Lian
Zheng Lian, Lan Chen, Licai Sun, Bin Liu, Jianhua Tao
GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation
null
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversations have become a critical data format on social media platforms. Understanding conversations in terms of emotion, content, and other aspects also attracts increasing attention from researchers due to its widespread application in human-computer interaction. In real-world environments, we often encounter the problem of incomplete modalities, which has become a core issue of conversation understanding. To address this problem, researchers propose various methods. However, existing approaches are mainly designed for individual utterances rather than conversational data, which cannot fully exploit temporal and speaker information in conversations. To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Complete Network (GCNet)", filling the gap of existing works. Our GCNet contains two well-designed graph neural network-based modules, "Speaker GNN" and "Temporal GNN", to capture temporal and speaker dependencies. To make full use of complete and incomplete data, we jointly optimize classification and reconstruction tasks in an end-to-end manner. To verify the effectiveness of our method, we conduct experiments on three benchmark conversational datasets. Experimental results demonstrate that our GCNet is superior to existing state-of-the-art approaches in incomplete multimodal learning. Code is available at https://github.com/zeroQiaoba/GCNet.
[ { "created": "Fri, 4 Mar 2022 08:13:18 GMT", "version": "v1" }, { "created": "Wed, 4 Jan 2023 02:16:09 GMT", "version": "v2" } ]
2023-01-05
[ [ "Lian", "Zheng", "" ], [ "Chen", "Lan", "" ], [ "Sun", "Licai", "" ], [ "Liu", "Bin", "" ], [ "Tao", "Jianhua", "" ] ]
Conversations have become a critical data format on social media platforms. Understanding conversations in terms of emotion, content, and other aspects also attracts increasing attention from researchers due to its widespread application in human-computer interaction. In real-world environments, we often encounter the problem of incomplete modalities, which has become a core issue of conversation understanding. To address this problem, researchers propose various methods. However, existing approaches are mainly designed for individual utterances rather than conversational data, which cannot fully exploit temporal and speaker information in conversations. To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Complete Network (GCNet)", filling the gap of existing works. Our GCNet contains two well-designed graph neural network-based modules, "Speaker GNN" and "Temporal GNN", to capture temporal and speaker dependencies. To make full use of complete and incomplete data, we jointly optimize classification and reconstruction tasks in an end-to-end manner. To verify the effectiveness of our method, we conduct experiments on three benchmark conversational datasets. Experimental results demonstrate that our GCNet is superior to existing state-of-the-art approaches in incomplete multimodal learning. Code is available at https://github.com/zeroQiaoba/GCNet.
2301.12714
Hanlin Zhu
Hanlin Zhu, Paria Rashidinejad and Jiantao Jiao
Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
24 pages, 3 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose A-Crab (Actor-Critic Regularized by Average Bellman error), a new practical algorithm for offline reinforcement learning (RL) in complex environments with insufficient data coverage. Our algorithm combines the marginalized importance sampling framework with the actor-critic paradigm, where the critic returns evaluations of the actor (policy) that are pessimistic relative to the offline data and have a small average (importance-weighted) Bellman error. Compared to existing methods, our algorithm simultaneously offers a number of advantages: (1) It achieves the optimal statistical rate of $1/\sqrt{N}$ -- where $N$ is the size of offline dataset -- in converging to the best policy covered in the offline dataset, even when combined with general function approximators. (2) It relies on a weaker average notion of policy coverage (compared to the $\ell_\infty$ single-policy concentrability) that exploits the structure of policy visitations. (3) It outperforms the data-collection behavior policy over a wide range of specific hyperparameters. We provide both theoretical analysis and experimental results to validate the effectiveness of our proposed algorithm.
[ { "created": "Mon, 30 Jan 2023 07:53:53 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 08:03:14 GMT", "version": "v2" } ]
2023-10-10
[ [ "Zhu", "Hanlin", "" ], [ "Rashidinejad", "Paria", "" ], [ "Jiao", "Jiantao", "" ] ]
We propose A-Crab (Actor-Critic Regularized by Average Bellman error), a new practical algorithm for offline reinforcement learning (RL) in complex environments with insufficient data coverage. Our algorithm combines the marginalized importance sampling framework with the actor-critic paradigm, where the critic returns evaluations of the actor (policy) that are pessimistic relative to the offline data and have a small average (importance-weighted) Bellman error. Compared to existing methods, our algorithm simultaneously offers a number of advantages: (1) It achieves the optimal statistical rate of $1/\sqrt{N}$ -- where $N$ is the size of offline dataset -- in converging to the best policy covered in the offline dataset, even when combined with general function approximators. (2) It relies on a weaker average notion of policy coverage (compared to the $\ell_\infty$ single-policy concentrability) that exploits the structure of policy visitations. (3) It outperforms the data-collection behavior policy over a wide range of specific hyperparameters. We provide both theoretical analysis and experimental results to validate the effectiveness of our proposed algorithm.
2310.16436
Ge Zheng
Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, Sibei Yang
DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
24 pages, 13 figures, to be published in NeurIPS 2023
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-standing goal of AI systems is to perform complex multimodal reasoning like humans. Recently, large language models (LLMs) have made remarkable strides in such multi-step reasoning on the language modality solely by leveraging the chain of thought (CoT) to mimic human thinking. However, the transfer of these advancements to multimodal contexts introduces heightened challenges, including but not limited to the impractical need for labor-intensive annotation and the limitations in terms of flexibility, generalizability, and explainability. To evoke CoT reasoning in multimodality, this work first conducts an in-depth analysis of these challenges posed by multimodality and presents two key insights: "keeping critical thinking" and "letting everyone do their jobs" in multimodal CoT reasoning. Furthermore, this study proposes a novel DDCoT prompting that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning by first dividing the reasoning responsibility of LLMs into reasoning and recognition and then integrating the visual recognition capability of visual models into the joint reasoning process. The rationales generated by DDCoT not only improve the reasoning abilities of both large and small language models in zero-shot prompting and fine-tuning learning, significantly outperforming state-of-the-art methods but also exhibit impressive generalizability and explainability.
[ { "created": "Wed, 25 Oct 2023 08:03:10 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2023 04:16:52 GMT", "version": "v2" } ]
2023-10-27
[ [ "Zheng", "Ge", "" ], [ "Yang", "Bin", "" ], [ "Tang", "Jiajin", "" ], [ "Zhou", "Hong-Yu", "" ], [ "Yang", "Sibei", "" ] ]
A long-standing goal of AI systems is to perform complex multimodal reasoning like humans. Recently, large language models (LLMs) have made remarkable strides in such multi-step reasoning on the language modality solely by leveraging the chain of thought (CoT) to mimic human thinking. However, the transfer of these advancements to multimodal contexts introduces heightened challenges, including but not limited to the impractical need for labor-intensive annotation and the limitations in terms of flexibility, generalizability, and explainability. To evoke CoT reasoning in multimodality, this work first conducts an in-depth analysis of these challenges posed by multimodality and presents two key insights: "keeping critical thinking" and "letting everyone do their jobs" in multimodal CoT reasoning. Furthermore, this study proposes a novel DDCoT prompting that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning by first dividing the reasoning responsibility of LLMs into reasoning and recognition and then integrating the visual recognition capability of visual models into the joint reasoning process. The rationales generated by DDCoT not only improve the reasoning abilities of both large and small language models in zero-shot prompting and fine-tuning learning, significantly outperforming state-of-the-art methods but also exhibit impressive generalizability and explainability.
2005.00895
Henry Cabral Mr
Henry C. Nunes, Roben C. Lunardi, Avelin F. Zorzo, Regio A. Michelin and Salil S. Kanhere
Context-based smart contracts for appendable-block blockchains
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, blockchain proposals are being adopted to solve security issues, such as data integrity, resilience, and non-repudiation. To improve certain aspects, e.g., energy consumption and latency, of traditional blockchains, different architectures, algorithms, and data management methods have been recently proposed. For example, appendable-block blockchain uses a different data structure designed to reduce latency in block and transaction insertion. It is especially applicable in domains such as Internet of Things (IoT), where both latency and energy are key concerns. However, the lack of some features available in other blockchains, such as Smart Contracts, limits the application of this model. To solve this, in this work, we propose the use of Smart Contracts in appendable-block blockchains through a new model called context-based appendable-block blockchain. This model also allows the execution of multiple smart contracts in parallel, featuring high performance in parallel computing scenarios. Furthermore, we present an implementation of the context-based appendable-block blockchain using an Ethereum Virtual Machine (EVM). Finally, we execute this implementation on four different testbeds. The results demonstrated a performance improvement for the parallel processing of smart contracts when using the proposed model.
[ { "created": "Sat, 2 May 2020 18:13:53 GMT", "version": "v1" } ]
2020-05-05
[ [ "Nunes", "Henry C.", "" ], [ "Lunardi", "Roben C.", "" ], [ "Zorzo", "Avelin F.", "" ], [ "Michelin", "Regio A.", "" ], [ "Kanhere", "Salil S.", "" ] ]
Currently, blockchain proposals are being adopted to solve security issues, such as data integrity, resilience, and non-repudiation. To improve certain aspects, e.g., energy consumption and latency, of traditional blockchains, different architectures, algorithms, and data management methods have been recently proposed. For example, appendable-block blockchain uses a different data structure designed to reduce latency in block and transaction insertion. It is especially applicable in domains such as Internet of Things (IoT), where both latency and energy are key concerns. However, the lack of some features available in other blockchains, such as Smart Contracts, limits the application of this model. To solve this, in this work, we propose the use of Smart Contracts in appendable-block blockchains through a new model called context-based appendable-block blockchain. This model also allows the execution of multiple smart contracts in parallel, featuring high performance in parallel computing scenarios. Furthermore, we present an implementation of the context-based appendable-block blockchain using an Ethereum Virtual Machine (EVM). Finally, we execute this implementation on four different testbeds. The results demonstrated a performance improvement for the parallel processing of smart contracts when using the proposed model.
2310.07579
Martin Pawelczyk
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
In-Context Unlearning: Language Models as Few Shot Unlearners
Accepted at ICML 2024
null
null
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
Machine unlearning, the study of efficiently removing the impact of specific training instances on a model, has garnered increased attention in recent years due to regulatory guidelines such as the \emph{Right to be Forgotten}. Achieving precise unlearning typically involves fully retraining the model and is computationally infeasible in the case of very large models such as Large Language Models (LLMs). To this end, recent work has proposed several algorithms which approximate the removal of training data without retraining the model. These algorithms crucially rely on access to the model parameters in order to update them, an assumption that may not hold in practice due to computational constraints or having only query access to the LLMs. In this work, we propose a new class of unlearning methods for LLMs called ``In-Context Unlearning.'' This method unlearns instances from the model by simply providing specific kinds of inputs in context, without the need to update model parameters. To unlearn specific training instances, we present these instances to the LLMs at inference time along with labels that differ from their ground truth. Our experimental results demonstrate that in-context unlearning performs on par with, or in some cases outperforms, other state-of-the-art methods that require access to model parameters, effectively removing the influence of specific instances on the model while preserving test accuracy.
[ { "created": "Wed, 11 Oct 2023 15:19:31 GMT", "version": "v1" }, { "created": "Thu, 12 Oct 2023 14:15:24 GMT", "version": "v2" }, { "created": "Tue, 4 Jun 2024 12:35:56 GMT", "version": "v3" }, { "created": "Thu, 6 Jun 2024 06:31:08 GMT", "version": "v4" } ]
2024-06-07
[ [ "Pawelczyk", "Martin", "" ], [ "Neel", "Seth", "" ], [ "Lakkaraju", "Himabindu", "" ] ]
Machine unlearning, the study of efficiently removing the impact of specific training instances on a model, has garnered increased attention in recent years due to regulatory guidelines such as the \emph{Right to be Forgotten}. Achieving precise unlearning typically involves fully retraining the model and is computationally infeasible in the case of very large models such as Large Language Models (LLMs). To this end, recent work has proposed several algorithms which approximate the removal of training data without retraining the model. These algorithms crucially rely on access to the model parameters in order to update them, an assumption that may not hold in practice due to computational constraints or having only query access to the LLMs. In this work, we propose a new class of unlearning methods for LLMs called ``In-Context Unlearning.'' This method unlearns instances from the model by simply providing specific kinds of inputs in context, without the need to update model parameters. To unlearn specific training instances, we present these instances to the LLMs at inference time along with labels that differ from their ground truth. Our experimental results demonstrate that in-context unlearning performs on par with, or in some cases outperforms, other state-of-the-art methods that require access to model parameters, effectively removing the influence of specific instances on the model while preserving test accuracy.
2001.11229
Monika Trimoska
Monika Trimoska and Sorina Ionica and Gilles Dequen
Parity (XOR) Reasoning for the Index Calculus Attack
18 pages
In: Simonis H. (eds) Principles and Practice of Constraint Programming. CP 2020. Lecture Notes in Computer Science, vol 12333. Springer, Cham
10.1007/978-3-030-58475-7_45
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptographic problems can often be reduced to solving Boolean polynomial systems, whose equivalent logical formulas can be treated using SAT solvers. Given the algebraic nature of the problem, the use of the logical XOR operator is common in SAT-based cryptanalysis. Recent works have focused on advanced techniques for handling parity (XOR) constraints, such as the Gaussian Elimination technique. First, we propose an original XOR-reasoning SAT solver, named WDSat (Weil Descent SAT solving), dedicated to a specific cryptographic problem. Secondly, we show that in some cases Gaussian Elimination on SAT instances does not work as well as Gaussian Elimination on algebraic systems. We demonstrate how this oversight is fixed in our solver, which is adapted to read instances in algebraic normal form (ANF). Finally, we propose a novel preprocessing technique based on the Minimal Vertex Cover Problem in graph theory. This preprocessing technique is, within the framework of multivariate Boolean polynomial systems, used as a DLL branching selection rule that leads to quick linearization of the underlying algebraic system. Our benchmarks use a model obtained from cryptographic instances for which a significant speedup is achieved using the findings in this paper. We further explain how our preprocessing technique can be used as an assessment of the security of a cryptographic system.
[ { "created": "Thu, 30 Jan 2020 09:27:14 GMT", "version": "v1" }, { "created": "Thu, 10 Sep 2020 09:17:53 GMT", "version": "v2" }, { "created": "Fri, 18 Dec 2020 14:46:12 GMT", "version": "v3" } ]
2020-12-21
[ [ "Trimoska", "Monika", "" ], [ "Ionica", "Sorina", "" ], [ "Dequen", "Gilles", "" ] ]
Cryptographic problems can often be reduced to solving Boolean polynomial systems, whose equivalent logical formulas can be treated using SAT solvers. Given the algebraic nature of the problem, the use of the logical XOR operator is common in SAT-based cryptanalysis. Recent works have focused on advanced techniques for handling parity (XOR) constraints, such as the Gaussian Elimination technique. First, we propose an original XOR-reasoning SAT solver, named WDSat (Weil Descent SAT solving), dedicated to a specific cryptographic problem. Secondly, we show that in some cases Gaussian Elimination on SAT instances does not work as well as Gaussian Elimination on algebraic systems. We demonstrate how this oversight is fixed in our solver, which is adapted to read instances in algebraic normal form (ANF). Finally, we propose a novel preprocessing technique based on the Minimal Vertex Cover Problem in graph theory. This preprocessing technique is, within the framework of multivariate Boolean polynomial systems, used as a DLL branching selection rule that leads to quick linearization of the underlying algebraic system. Our benchmarks use a model obtained from cryptographic instances for which a significant speedup is achieved using the findings in this paper. We further explain how our preprocessing technique can be used as an assessment of the security of a cryptographic system.
1609.09546
Wenjun Mei
Wenjun Mei, Noah E. Friedkin, Kyle Lewis, Francesco Bullo
Dynamic Models of Appraisal Networks Explaining Collective Learning
A preliminary version has been accepted by the 53rd IEEE Conference on Decision and Control. The journal version has been submitted to IEEE Transactions on Automatic Control
null
null
null
cs.SI cs.MA cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes models of the learning process in teams of individuals who collectively execute a sequence of tasks and whose actions are determined by individual skill levels and networks of interpersonal appraisals and influence. The closely-related proposed models have increasing complexity, starting with a centralized manager-based assignment and learning model, and finishing with a social model of interpersonal appraisal, assignments, learning, and influences. We show how rational optimal behavior arises along the task sequence for each model, and discuss conditions of suboptimality. Our models are grounded in replicator dynamics from evolutionary games, influence networks from mathematical sociology, and transactive memory systems from organization science.
[ { "created": "Thu, 29 Sep 2016 23:16:24 GMT", "version": "v1" } ]
2016-10-03
[ [ "Mei", "Wenjun", "" ], [ "Friedkin", "Noah E.", "" ], [ "Lewis", "Kyle", "" ], [ "Bullo", "Francesco", "" ] ]
This paper proposes models of the learning process in teams of individuals who collectively execute a sequence of tasks and whose actions are determined by individual skill levels and networks of interpersonal appraisals and influence. The closely-related proposed models have increasing complexity, starting with a centralized manager-based assignment and learning model, and finishing with a social model of interpersonal appraisal, assignments, learning, and influences. We show how rational optimal behavior arises along the task sequence for each model, and discuss conditions of suboptimality. Our models are grounded in replicator dynamics from evolutionary games, influence networks from mathematical sociology, and transactive memory systems from organization science.
2010.05584
Oliviero Riganelli
Oliviero Riganelli, Simone Paolo Mottadelli, Claudio Rota, Daniela Micucci, Leonardo Mariani
Data Loss Detector: Automatically Revealing Data Loss Bugs in Android Apps
for associated video presentation, see https://youtu.be/s6XZ7F8L3nY for associated slides, see https://www.slideshare.net/OlivieroRiganelli/oliviero-riganelli-data-loss-detector-automatically-revealing-data-loss-bugs-in-android-apps . In Proc. of the International Symposium on Software Testing and Analysis (ISSTA 2020)
null
10.1145/3395363.3397379
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Android apps must work correctly even if their execution is interrupted by external events. For instance, an app must work properly even if a phone call is received, or after its layout is redrawn because the smartphone has been rotated. Since these events may require destroying, when the execution is interrupted, and recreating, when the execution is resumed, the foreground activity of the app, the only way to prevent the loss of state information is saving and restoring it. This behavior must be explicitly implemented by app developers, who often fail to implement it properly, releasing apps affected by data loss problems, that is, apps that may lose state information when their execution is interrupted. Although several techniques can be used to automatically generate test cases for Android apps, the obtained test cases seldom include the interactions and the checks necessary to exercise and reveal data loss faults. To address this problem, this paper presents Data Loss Detector (DLD), a test case generation technique that integrates an exploration strategy, data-loss-revealing actions, and two customized oracle strategies for the detection of data loss failures. DLD has been able to reveal 75% of the faults in a benchmark of 54 Android app releases affected by 110 known data loss faults. DLD also revealed unknown data loss problems, outperforming competing approaches.
[ { "created": "Mon, 12 Oct 2020 10:19:31 GMT", "version": "v1" } ]
2020-10-13
[ [ "Riganelli", "Oliviero", "" ], [ "Mottadelli", "Simone Paolo", "" ], [ "Rota", "Claudio", "" ], [ "Micucci", "Daniela", "" ], [ "Mariani", "Leonardo", "" ] ]
Android apps must work correctly even if their execution is interrupted by external events. For instance, an app must work properly even if a phone call is received, or after its layout is redrawn because the smartphone has been rotated. Since these events may require destroying, when the execution is interrupted, and recreating, when the execution is resumed, the foreground activity of the app, the only way to prevent the loss of state information is saving and restoring it. This behavior must be explicitly implemented by app developers, who often fail to implement it properly, releasing apps affected by data loss problems, that is, apps that may lose state information when their execution is interrupted. Although several techniques can be used to automatically generate test cases for Android apps, the obtained test cases seldom include the interactions and the checks necessary to exercise and reveal data loss faults. To address this problem, this paper presents Data Loss Detector (DLD), a test case generation technique that integrates an exploration strategy, data-loss-revealing actions, and two customized oracle strategies for the detection of data loss failures. DLD has been able to reveal 75% of the faults in a benchmark of 54 Android app releases affected by 110 known data loss faults. DLD also revealed unknown data loss problems, outperforming competing approaches.
1709.01584
Jill-J\^enn Vie
Jill-J\^enn Vie, Florian Yger, Ryan Lahfa, Basile Clement, K\'evin Cocchi, Thomas Chalumeau and Hisashi Kashima
Using Posters to Recommend Anime and Mangas in a Cold-Start Scenario
6 pages, 3 figures, 1 table, accepted at the MANPU 2017 workshop, co-located with ICDAR 2017 in Kyoto on November 10, 2017
null
null
null
cs.IR cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Item cold-start is a classical issue in recommender systems that affects anime and manga recommendations as well. This problem can be framed as follows: how to predict whether a user will like a manga that received few ratings from the community? Content-based techniques can alleviate this issue but require extra information, that is usually expensive to gather. In this paper, we use a deep learning technique, Illustration2Vec, to easily extract tag information from the manga and anime posters (e.g., sword, or ponytail). We propose BALSE (Blended Alternate Least Squares with Explanation), a new model for collaborative filtering, that benefits from this extra information to recommend mangas. We show, using real data from an online manga recommender system called Mangaki, that our model substantially improves the quality of recommendations, especially for less-known manga, and is able to provide an interpretation of the taste of the users.
[ { "created": "Sun, 3 Sep 2017 16:19:36 GMT", "version": "v1" }, { "created": "Thu, 7 Sep 2017 06:48:31 GMT", "version": "v2" } ]
2017-09-08
[ [ "Vie", "Jill-Jênn", "" ], [ "Yger", "Florian", "" ], [ "Lahfa", "Ryan", "" ], [ "Clement", "Basile", "" ], [ "Cocchi", "Kévin", "" ], [ "Chalumeau", "Thomas", "" ], [ "Kashima", "Hisashi", "" ] ]
Item cold-start is a classical issue in recommender systems that affects anime and manga recommendations as well. This problem can be framed as follows: how to predict whether a user will like a manga that received few ratings from the community? Content-based techniques can alleviate this issue but require extra information, that is usually expensive to gather. In this paper, we use a deep learning technique, Illustration2Vec, to easily extract tag information from the manga and anime posters (e.g., sword, or ponytail). We propose BALSE (Blended Alternate Least Squares with Explanation), a new model for collaborative filtering, that benefits from this extra information to recommend mangas. We show, using real data from an online manga recommender system called Mangaki, that our model substantially improves the quality of recommendations, especially for less-known manga, and is able to provide an interpretation of the taste of the users.
2006.03679
Milan Straka
Jan Haji\v{c}, Eduard Bej\v{c}ek, Jaroslava Hlav\'a\v{c}ov\'a, Marie Mikulov\'a, Milan Straka, Jan \v{S}t\v{e}p\'anek, Barbora \v{S}t\v{e}p\'ankov\'a
Prague Dependency Treebank -- Consolidated 1.0
Accepted at LREC 2020 (Proceedings of Language Resources and Evaluation, Marseille, France)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a richly annotated and genre-diversified language resource, the Prague Dependency Treebank-Consolidated 1.0 (PDT-C 1.0), the purpose of which is - as it has always been the case for the family of the Prague Dependency Treebanks - to serve both as training data for various types of NLP tasks and as a resource for linguistically-oriented research. PDT-C 1.0 contains four different datasets of Czech, uniformly annotated using the standard PDT scheme (albeit not everything is annotated manually, as we describe in detail here). The texts come from different sources: daily newspaper articles, Czech translation of the Wall Street Journal, transcribed dialogs and a small amount of user-generated, short, often non-standard language segments typed into a web translator. Altogether, the treebank contains around 180,000 sentences with their morphological, surface and deep syntactic annotation. The diversity of the texts and annotations should serve NLP applications well; the treebank is also an invaluable resource for linguistic research, including comparative studies regarding texts of different genres. The corpus is publicly and freely available.
[ { "created": "Fri, 5 Jun 2020 20:52:55 GMT", "version": "v1" } ]
2020-06-09
[ [ "Hajič", "Jan", "" ], [ "Bejček", "Eduard", "" ], [ "Hlaváčová", "Jaroslava", "" ], [ "Mikulová", "Marie", "" ], [ "Straka", "Milan", "" ], [ "Štěpánek", "Jan", "" ], [ "Štěpánková", "Barbora", "" ] ]
We present a richly annotated and genre-diversified language resource, the Prague Dependency Treebank-Consolidated 1.0 (PDT-C 1.0), the purpose of which is - as it has always been the case for the family of the Prague Dependency Treebanks - to serve both as training data for various types of NLP tasks and as a resource for linguistically-oriented research. PDT-C 1.0 contains four different datasets of Czech, uniformly annotated using the standard PDT scheme (albeit not everything is annotated manually, as we describe in detail here). The texts come from different sources: daily newspaper articles, Czech translation of the Wall Street Journal, transcribed dialogs and a small amount of user-generated, short, often non-standard language segments typed into a web translator. Altogether, the treebank contains around 180,000 sentences with their morphological, surface and deep syntactic annotation. The diversity of the texts and annotations should serve NLP applications well; the treebank is also an invaluable resource for linguistic research, including comparative studies regarding texts of different genres. The corpus is publicly and freely available.
1804.02338
Nathan Sime
Paul Houston and Nathan Sime
Automatic symbolic computation for discontinuous Galerkin finite element methods
null
null
null
null
cs.NA cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The implementation of discontinuous Galerkin finite element methods (DGFEMs) represents a very challenging computational task, particularly for systems of coupled nonlinear PDEs, including multiphysics problems, whose parameters may consist of power series or functionals of the solution variables. Thereby, the exploitation of symbolic algebra to express a given DGFEM approximation of a PDE problem within a high level language, whose syntax closely resembles the mathematical definition, is an invaluable tool. Indeed, this then facilitates the automatic assembly of the resulting system of (nonlinear) equations, as well as the computation of Fr\'echet derivative(s) of the DGFEM scheme, needed, for example, within a Newton-type solver. However, even exploiting symbolic algebra, the discretisation of coupled systems of PDEs can still be extremely verbose and hard to debug. Thereby, in this article we develop a further layer of abstraction by designing a class structure for the automatic computation of DGFEM formulations. This work has been implemented within the FEniCS package, based on exploiting the Unified Form Language. Numerical examples are presented which highlight the simplicity of implementation of DGFEMs for the numerical approximation of a range of PDE problems.
[ { "created": "Fri, 6 Apr 2018 16:09:06 GMT", "version": "v1" } ]
2018-04-09
[ [ "Houston", "Paul", "" ], [ "Sime", "Nathan", "" ] ]
The implementation of discontinuous Galerkin finite element methods (DGFEMs) represents a very challenging computational task, particularly for systems of coupled nonlinear PDEs, including multiphysics problems, whose parameters may consist of power series or functionals of the solution variables. Therefore, the exploitation of symbolic algebra to express a given DGFEM approximation of a PDE problem within a high-level language, whose syntax closely resembles the mathematical definition, is an invaluable tool. Indeed, this facilitates the automatic assembly of the resulting system of (nonlinear) equations, as well as the computation of Fr\'echet derivative(s) of the DGFEM scheme, needed, for example, within a Newton-type solver. However, even when exploiting symbolic algebra, the discretisation of coupled systems of PDEs can still be extremely verbose and hard to debug. In this article we therefore develop a further layer of abstraction by designing a class structure for the automatic computation of DGFEM formulations. This work has been implemented within the FEniCS package, based on exploiting the Unified Form Language. Numerical examples are presented which highlight the simplicity of implementation of DGFEMs for the numerical approximation of a range of PDE problems.
1301.0601
Christian R. Shelton
Christian R. Shelton
Reinforcement Learning with Partially Known World Dynamics
Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002)
null
null
UAI-P-2002-PG-461-468
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning would enjoy better success on real-world problems if domain knowledge could be imparted to the algorithm by the modelers. Most problems have both hidden state and unknown dynamics. Partially observable Markov decision processes (POMDPs) allow for the modeling of both. Unfortunately, they do not provide a natural framework in which to specify knowledge about the domain dynamics. The designer must either admit to knowing nothing about the dynamics or completely specify the dynamics (thereby turning it into a planning problem). We propose a new framework called a partially known Markov decision process (PKMDP) which allows the designer to specify known dynamics while still leaving portions of the environment s dynamics unknown.The model represents NOT ONLY the environment dynamics but also the agents knowledge of the dynamics. We present a reinforcement learning algorithm for this model based on importance sampling. The algorithm incorporates planning based on the known dynamics and learning about the unknown dynamics. Our results clearly demonstrate the ability to add domain knowledge and the resulting benefits for learning.
[ { "created": "Wed, 12 Dec 2012 15:58:25 GMT", "version": "v1" } ]
2013-01-07
[ [ "Shelton", "Christian R.", "" ] ]
Reinforcement learning would enjoy better success on real-world problems if domain knowledge could be imparted to the algorithm by the modelers. Most problems have both hidden state and unknown dynamics. Partially observable Markov decision processes (POMDPs) allow for the modeling of both. Unfortunately, they do not provide a natural framework in which to specify knowledge about the domain dynamics. The designer must either admit to knowing nothing about the dynamics or completely specify the dynamics (thereby turning it into a planning problem). We propose a new framework called a partially known Markov decision process (PKMDP) which allows the designer to specify known dynamics while still leaving portions of the environment's dynamics unknown. The model represents not only the environment dynamics but also the agent's knowledge of the dynamics. We present a reinforcement learning algorithm for this model based on importance sampling. The algorithm incorporates planning based on the known dynamics and learning about the unknown dynamics. Our results clearly demonstrate the ability to add domain knowledge and the resulting benefits for learning.
cs/0212019
Dr. Joerg D. Becker
Joerg D. Becker
Thinking, Learning, and Autonomous Problem Solving
9 pages, 4 figures
null
null
null
cs.NE
null
Ever increasing computational power will require methods for automatic programming. We present an alternative to genetic programming, based on a general model of thinking and learning. The advantage is that evolution takes place in the space of constructs and can thus exploit the mathematical structures of this space. The model is formalized, and a macro language is presented which allows for a formal yet intuitive description of the problem under consideration. A prototype has been developed to implement the scheme in PERL. This method will lead to a concentration on the analysis of problems, to a more rapid prototyping, to the treatment of new problem classes, and to the investigation of philosophical problems. We see fields of application in nonlinear differential equations, pattern recognition, robotics, model building, and animated pictures.
[ { "created": "Tue, 10 Dec 2002 15:18:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Becker", "Joerg D.", "" ] ]
Ever-increasing computational power will require methods for automatic programming. We present an alternative to genetic programming, based on a general model of thinking and learning. The advantage is that evolution takes place in the space of constructs and can thus exploit the mathematical structures of this space. The model is formalized, and a macro language is presented which allows for a formal yet intuitive description of the problem under consideration. A prototype has been developed to implement the scheme in PERL. This method will lead to a concentration on the analysis of problems, to more rapid prototyping, to the treatment of new problem classes, and to the investigation of philosophical problems. We see fields of application in nonlinear differential equations, pattern recognition, robotics, model building, and animated pictures.
1911.06501
Rob Alexander
Heather Hawkins and Rob Alexander
Situation Coverage Testing for a Simulated Autonomous Car -- an Initial Case Study
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is hard to test autonomous robot (AR) software because of the range and diversity of external situations (terrain, obstacles, humans, peer robots) that AR must deal with. Common measures of testing adequacy may not address this diversity. Explicit situation coverage has been proposed as a solution, but there has been little empirical study of its effectiveness. In this paper, we describe an implementation of situation coverage for testing a simple simulated autonomous road vehicle, and evaluate its ability to find seeded faults compared to a random test generation approach. In our experiments, the performance of the two methods is similar, with situation coverage having a very slight advantage. We conclude that situation coverage probably does not have a significant benefit over random generation for the type of simple, research-grade AR software used here. It will likely be valuable when applied to more complex and mature software.
[ { "created": "Fri, 15 Nov 2019 08:02:19 GMT", "version": "v1" } ]
2019-11-18
[ [ "Hawkins", "Heather", "" ], [ "Alexander", "Rob", "" ] ]
It is hard to test autonomous robot (AR) software because of the range and diversity of external situations (terrain, obstacles, humans, peer robots) that AR must deal with. Common measures of testing adequacy may not address this diversity. Explicit situation coverage has been proposed as a solution, but there has been little empirical study of its effectiveness. In this paper, we describe an implementation of situation coverage for testing a simple simulated autonomous road vehicle, and evaluate its ability to find seeded faults compared to a random test generation approach. In our experiments, the performance of the two methods is similar, with situation coverage having a very slight advantage. We conclude that situation coverage probably does not have a significant benefit over random generation for the type of simple, research-grade AR software used here. It will likely be valuable when applied to more complex and mature software.
2406.01587
Yupeng Zheng
Yupeng Zheng, Zebin Xing, Qichao Zhang, Bu Jin, Pengfei Li, Yuhang Zheng, Zhongpu Xia, Kun Zhan, Xianpeng Lang, Yaran Chen, Dongbin Zhao
PlanAgent: A Multi-modal Large Language Agent for Closed-loop Vehicle Motion Planning
This work has been submitted to the IEEE for possible publication
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle motion planning is an essential component of autonomous driving technology. Current rule-based vehicle motion planning methods perform satisfactorily in common scenarios but struggle to generalize to long-tailed situations. Meanwhile, learning-based methods have yet to achieve superior performance over rule-based approaches in large-scale closed-loop scenarios. To address these issues, we propose PlanAgent, the first mid-to-mid planning system based on a Multi-modal Large Language Model (MLLM). MLLM is used as a cognitive agent to introduce human-like knowledge, interpretability, and common-sense reasoning into the closed-loop planning. Specifically, PlanAgent leverages the power of MLLM through three core modules. First, an Environment Transformation module constructs a Bird's Eye View (BEV) map and a lane-graph-based textual description from the environment as inputs. Second, a Reasoning Engine module introduces a hierarchical chain-of-thought from scene understanding to lateral and longitudinal motion instructions, culminating in planner code generation. Last, a Reflection module is integrated to simulate and evaluate the generated planner for reducing MLLM's uncertainty. PlanAgent is endowed with the common-sense reasoning and generalization capability of MLLM, which empowers it to effectively tackle both common and complex long-tailed scenarios. Our proposed PlanAgent is evaluated on the large-scale and challenging nuPlan benchmarks. A comprehensive set of experiments convincingly demonstrates that PlanAgent outperforms the existing state-of-the-art in the closed-loop motion planning task. Codes will be soon released.
[ { "created": "Mon, 3 Jun 2024 17:59:27 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 07:48:11 GMT", "version": "v2" } ]
2024-06-05
[ [ "Zheng", "Yupeng", "" ], [ "Xing", "Zebin", "" ], [ "Zhang", "Qichao", "" ], [ "Jin", "Bu", "" ], [ "Li", "Pengfei", "" ], [ "Zheng", "Yuhang", "" ], [ "Xia", "Zhongpu", "" ], [ "Zhan", "Kun", "" ], [ "Lang", "Xianpeng", "" ], [ "Chen", "Yaran", "" ], [ "Zhao", "Dongbin", "" ] ]
Vehicle motion planning is an essential component of autonomous driving technology. Current rule-based vehicle motion planning methods perform satisfactorily in common scenarios but struggle to generalize to long-tailed situations. Meanwhile, learning-based methods have yet to achieve superior performance over rule-based approaches in large-scale closed-loop scenarios. To address these issues, we propose PlanAgent, the first mid-to-mid planning system based on a Multi-modal Large Language Model (MLLM). The MLLM is used as a cognitive agent to introduce human-like knowledge, interpretability, and common-sense reasoning into closed-loop planning. Specifically, PlanAgent leverages the power of the MLLM through three core modules. First, an Environment Transformation module constructs a Bird's Eye View (BEV) map and a lane-graph-based textual description from the environment as inputs. Second, a Reasoning Engine module introduces a hierarchical chain-of-thought from scene understanding to lateral and longitudinal motion instructions, culminating in planner code generation. Last, a Reflection module is integrated to simulate and evaluate the generated planner, reducing the MLLM's uncertainty. PlanAgent is endowed with the common-sense reasoning and generalization capability of MLLMs, which empowers it to effectively tackle both common and complex long-tailed scenarios. Our proposed PlanAgent is evaluated on the large-scale and challenging nuPlan benchmarks. A comprehensive set of experiments convincingly demonstrates that PlanAgent outperforms the existing state-of-the-art in the closed-loop motion planning task. Code will be released soon.
0909.1640
Bela Genge
Magyari Attila, Genge Bela and Haller Piroska
Certificate-based Single Sign-On Mechanism for Multi-Platform Distributed Systems
null
Acta Universitatis Sapientiae - Electrical and Mechanical Engineering, Vol. 1, pp. 113-123, 2009 (ISSN 2065-5916)
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a certificate-based single sign-on mechanism in distributed systems. The proposed security protocols and authentication mechanisms are integrated in a middleware. The novelty of our middleware lies on the use of XPCOM components, this way we provide a different services that can be used on every platform where Mozilla is available. The componen based architecture of the implemented services allows using the authentication components separately.
[ { "created": "Wed, 9 Sep 2009 07:20:05 GMT", "version": "v1" } ]
2009-09-10
[ [ "Attila", "Magyari", "" ], [ "Bela", "Genge", "" ], [ "Piroska", "Haller", "" ] ]
We propose a certificate-based single sign-on mechanism for distributed systems. The proposed security protocols and authentication mechanisms are integrated into a middleware. The novelty of our middleware lies in the use of XPCOM components; this way we provide different services that can be used on every platform where Mozilla is available. The component-based architecture of the implemented services allows the authentication components to be used separately.
2009.07025
Aythami Morales
Alejandro Pe\~na and Ignacio Serna and Aythami Morales and Julian Fierrez
FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment
ACM Intl. Conf. on Multimodal Interaction (ICMI). arXiv admin note: substantial text overlap with arXiv:2004.07173
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator experiments over an automated recruitment testbed based on Curriculum Vitae: FairCVtest. The presence of decision-making algorithms in society is rapidly increasing nowadays, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data, and exploit it in combination to data biases in undesirable (unfair) ways. Aditionally, the demo includes a new algorithm (SensitiveNets) for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
[ { "created": "Sat, 12 Sep 2020 17:45:09 GMT", "version": "v1" } ]
2020-09-16
[ [ "Peña", "Alejandro", "" ], [ "Serna", "Ignacio", "" ], [ "Morales", "Aythami", "" ], [ "Fierrez", "Julian", "" ] ]
With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator experiments over an automated recruitment testbed based on Curriculum Vitae: FairCVtest. The presence of decision-making algorithms in society is rapidly increasing nowadays, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data and exploit it in combination with data biases in undesirable (unfair) ways. Additionally, the demo includes a new algorithm (SensitiveNets) for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
2305.04166
Nghia Hieu Nguyen
Doanh C. Bui, Nghia Hieu Nguyen, Khang Nguyen
UIT-OpenViIC: A Novel Benchmark for Evaluating Image Captioning in Vietnamese
10 pages, 7 figures, submitted to Elsevier
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Image Captioning is one of the vision-language tasks that still interest the research community worldwide in the 2020s. MS-COCO Caption benchmark is commonly used to evaluate the performance of advanced captioning models, although it was published in 2015. Recent captioning models trained on the MS-COCO Caption dataset only have good performance in language patterns of English; they do not have such good performance in contexts captured in Vietnam or fluently caption images using Vietnamese. To contribute to the low-resources research community as in Vietnam, we introduce a novel image captioning dataset in Vietnamese, the Open-domain Vietnamese Image Captioning dataset (UIT-OpenViIC). The introduced dataset includes complex scenes captured in Vietnam and manually annotated by Vietnamese under strict rules and supervision. In this paper, we present in more detail the dataset creation process. From preliminary analysis, we show that our dataset is challenging to recent state-of-the-art (SOTA) Transformer-based baselines, which performed well on the MS COCO dataset. Then, the modest results prove that UIT-OpenViIC has room to grow, which can be one of the standard benchmarks in Vietnamese for the research community to evaluate their captioning models. Furthermore, we present a CAMO approach that effectively enhances the image representation ability by a multi-level encoder output fusion mechanism, which helps improve the quality of generated captions compared to previous captioning models.
[ { "created": "Sun, 7 May 2023 02:48:47 GMT", "version": "v1" }, { "created": "Tue, 9 May 2023 12:46:06 GMT", "version": "v2" } ]
2023-05-10
[ [ "Bui", "Doanh C.", "" ], [ "Nguyen", "Nghia Hieu", "" ], [ "Nguyen", "Khang", "" ] ]
Image captioning is one of the vision-language tasks that still interest the research community worldwide in the 2020s. The MS-COCO Caption benchmark is commonly used to evaluate the performance of advanced captioning models, although it was published in 2015. Recent captioning models trained on the MS-COCO Caption dataset perform well only on English language patterns; they do not perform as well in contexts captured in Vietnam, nor can they fluently caption images in Vietnamese. To contribute to low-resource research communities such as Vietnam's, we introduce a novel image captioning dataset in Vietnamese, the Open-domain Vietnamese Image Captioning dataset (UIT-OpenViIC). The introduced dataset includes complex scenes captured in Vietnam and is manually annotated by Vietnamese speakers under strict rules and supervision. In this paper, we present the dataset creation process in more detail. From preliminary analysis, we show that our dataset is challenging for recent state-of-the-art (SOTA) Transformer-based baselines, which performed well on the MS-COCO dataset. These modest results prove that UIT-OpenViIC has room to grow and can serve as one of the standard Vietnamese benchmarks for the research community to evaluate captioning models. Furthermore, we present a CAMO approach that effectively enhances image representation ability via a multi-level encoder output fusion mechanism, which helps improve the quality of generated captions compared to previous captioning models.
2010.10477
Jeremy Cohen
Jeremy Cohen and Mark Woodbridge
RSEs in Research? RSEs in IT?: Finding a suitable home for RSEs
Submitted to/accepted by the Research Software Engineers in HPC (RSE-HPC-2020) workshop in conjunction with SC20. Original submitted/accepted version
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The term Research Software Engineer (RSE) was first used in the UK research community in 2012 to refer to individuals based in research environments who focus on the development of software to support and undertake research. Since then, the term has gained wide adoption and many RSE groups and teams have been set up at institutions within the UK and around the world. There is no "manual" for establishing an RSE group! These groups are often set up in an ad hoc manner based on what is practical within the context of each institution. Some of these groups are based in a central IT environment, others are based in research groups in academic departments. Is one option better than another? What are the pros and cons of different options? In this position paper we look at some arguments for where RSE teams fit best within academic institutions and research organisations and consider whether there is an ideal "home" for RSEs.
[ { "created": "Tue, 20 Oct 2020 17:35:13 GMT", "version": "v1" } ]
2020-10-21
[ [ "Cohen", "Jeremy", "" ], [ "Woodbridge", "Mark", "" ] ]
The term Research Software Engineer (RSE) was first used in the UK research community in 2012 to refer to individuals based in research environments who focus on the development of software to support and undertake research. Since then, the term has gained wide adoption and many RSE groups and teams have been set up at institutions within the UK and around the world. There is no "manual" for establishing an RSE group! These groups are often set up in an ad hoc manner based on what is practical within the context of each institution. Some of these groups are based in a central IT environment, others are based in research groups in academic departments. Is one option better than another? What are the pros and cons of different options? In this position paper we look at some arguments for where RSE teams fit best within academic institutions and research organisations and consider whether there is an ideal "home" for RSEs.
2302.05328
Jingnan Zheng
An Zhang, Jingnan Zheng, Xiang Wang, Yancheng Yuan, Tat-Seng Chua
Invariant Collaborative Filtering to Popularity Distribution Shift
null
2023 TheWebConf
10.1145/3543507.3583461
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative Filtering (CF) models, despite their great success, suffer from severe performance drops due to popularity distribution shifts, where these changes are ubiquitous and inevitable in real-world scenarios. Unfortunately, most leading popularity debiasing strategies, rather than tackling the vulnerability of CF models to varying popularity distributions, require prior knowledge of the test distribution to identify the degree of bias and further learn the popularity-entangled representations to mitigate the bias. Consequently, these models result in significant performance benefits in the target test set, while dramatically deviating the recommendation from users' true interests without knowing the popularity distribution in advance. In this work, we propose a novel learning framework, Invariant Collaborative Filtering (InvCF), to discover disentangled representations that faithfully reveal the latent preference and popularity semantics without making any assumption about the popularity distribution. At its core is the distillation of unbiased preference representations (i.e., user preference on item property), which are invariant to the change of popularity semantics, while filtering out the popularity feature that is unstable or outdated. Extensive experiments on five benchmark datasets and four evaluation settings (i.e., synthetic long-tail, unbiased, temporal split, and out-of-distribution evaluations) demonstrate that InvCF outperforms the state-of-the-art baselines in terms of popularity generalization ability on real recommendations. Visualization studies shed light on the advantages of InvCF for disentangled representation learning. Our codes are available at https://github.com/anzhang314/InvCF.
[ { "created": "Fri, 10 Feb 2023 15:45:59 GMT", "version": "v1" }, { "created": "Mon, 13 Feb 2023 11:00:47 GMT", "version": "v2" }, { "created": "Thu, 18 May 2023 04:01:30 GMT", "version": "v3" } ]
2023-05-19
[ [ "Zhang", "An", "" ], [ "Zheng", "Jingnan", "" ], [ "Wang", "Xiang", "" ], [ "Yuan", "Yancheng", "" ], [ "Chua", "Tat-Seng", "" ] ]
Collaborative Filtering (CF) models, despite their great success, suffer from severe performance drops due to popularity distribution shifts, where these changes are ubiquitous and inevitable in real-world scenarios. Unfortunately, most leading popularity debiasing strategies, rather than tackling the vulnerability of CF models to varying popularity distributions, require prior knowledge of the test distribution to identify the degree of bias and further learn the popularity-entangled representations to mitigate the bias. Consequently, these models result in significant performance benefits in the target test set, while dramatically deviating the recommendation from users' true interests without knowing the popularity distribution in advance. In this work, we propose a novel learning framework, Invariant Collaborative Filtering (InvCF), to discover disentangled representations that faithfully reveal the latent preference and popularity semantics without making any assumption about the popularity distribution. At its core is the distillation of unbiased preference representations (i.e., user preference on item property), which are invariant to the change of popularity semantics, while filtering out the popularity feature that is unstable or outdated. Extensive experiments on five benchmark datasets and four evaluation settings (i.e., synthetic long-tail, unbiased, temporal split, and out-of-distribution evaluations) demonstrate that InvCF outperforms the state-of-the-art baselines in terms of popularity generalization ability on real recommendations. Visualization studies shed light on the advantages of InvCF for disentangled representation learning. Our codes are available at https://github.com/anzhang314/InvCF.
2404.08127
Markus Roland Ernst
Markus R. Ernst, Francisco M. L\'opez, Arthur Aubret, Roland W. Fleming, Jochen Triesch
Self-Supervised Learning of Color Constancy
7 pages, 5 figures, submitted to the IEEE International Conference on Development and Learning (ICDL 2024)
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Color constancy (CC) describes the ability of the visual system to perceive an object as having a relatively constant color despite changes in lighting conditions. While CC and its limitations have been carefully characterized in humans, it is still unclear how the visual system acquires this ability during development. Here, we present a first study showing that CC develops in a neural network trained in a self-supervised manner through an invariance learning objective. During learning, objects are presented under changing illuminations, while the network aims to map subsequent views of the same object onto close-by latent representations. This gives rise to representations that are largely invariant to the illumination conditions, offering a plausible example of how CC could emerge during human cognitive development via a form of self-supervised learning.
[ { "created": "Thu, 11 Apr 2024 21:07:38 GMT", "version": "v1" } ]
2024-04-15
[ [ "Ernst", "Markus R.", "" ], [ "López", "Francisco M.", "" ], [ "Aubret", "Arthur", "" ], [ "Fleming", "Roland W.", "" ], [ "Triesch", "Jochen", "" ] ]
Color constancy (CC) describes the ability of the visual system to perceive an object as having a relatively constant color despite changes in lighting conditions. While CC and its limitations have been carefully characterized in humans, it is still unclear how the visual system acquires this ability during development. Here, we present a first study showing that CC develops in a neural network trained in a self-supervised manner through an invariance learning objective. During learning, objects are presented under changing illuminations, while the network aims to map subsequent views of the same object onto close-by latent representations. This gives rise to representations that are largely invariant to the illumination conditions, offering a plausible example of how CC could emerge during human cognitive development via a form of self-supervised learning.
cs/9909014
Joseph Y. Halpern
Joseph Y. Halpern and Richard A. Shore
Reasoning About Common Knowledge with Infinitely Many Agents
Preliminary version appears in 14th IEEE Symposium on Logic in Computer Science, 1999. This is the full version
null
null
null
cs.LO cs.AI
null
Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even with the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.
[ { "created": "Tue, 21 Sep 1999 20:43:46 GMT", "version": "v1" } ]
2007-05-23
[ [ "Halpern", "Joseph Y.", "" ], [ "Shore", "Richard A.", "" ] ]
Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.
2101.08394
Ehsan Hamzei
Ehsan Hamzei, Stephan Winter and Martin Tomko
Templates of generic geographic information for answering where-questions
27 pages, has supplementary material. International Journal of Geographical Information Science (2021)
null
10.1080/13658816.2020.1869977
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
In everyday communication, where-questions are answered by place descriptions. To answer where-questions automatically, computers should be able to generate relevant place descriptions that satisfy inquirers' information needs. Human-generated answers to where-questions are constructed based on a few anchor places that characterize the location of inquired places. The challenge for automatically generating such relevant responses stems from selecting relevant anchor places. In this paper, we present templates that allow us to characterize the human-generated answers and to imitate their structure. These templates are patterns of generic geographic information derived and encoded from the largest available machine comprehension dataset, MS MARCO v2.1. In our approach, the toponyms in the questions and answers of the dataset are encoded into sequences of generic information. Next, sequence prediction methods are used to model the relation between the generic information in the questions and their answers. Finally, we evaluate the performance of predicting templates for answers to where-questions.
[ { "created": "Thu, 21 Jan 2021 01:47:02 GMT", "version": "v1" } ]
2022-03-24
[ [ "Hamzei", "Ehsan", "" ], [ "Winter", "Stephan", "" ], [ "Tomko", "Martin", "" ] ]
In everyday communication, where-questions are answered by place descriptions. To answer where-questions automatically, computers should be able to generate relevant place descriptions that satisfy inquirers' information needs. Human-generated answers to where-questions are constructed based on a few anchor places that characterize the location of inquired places. The challenge for automatically generating such relevant responses stems from selecting relevant anchor places. In this paper, we present templates that allow us to characterize the human-generated answers and to imitate their structure. These templates are patterns of generic geographic information derived and encoded from the largest available machine comprehension dataset, MS MARCO v2.1. In our approach, the toponyms in the questions and answers of the dataset are encoded into sequences of generic information. Next, sequence prediction methods are used to model the relation between the generic information in the questions and their answers. Finally, we evaluate the performance of predicting templates for answers to where-questions.
2308.06054
Ciaran Eising
Ken Power, Shailendra Deva, Ting Wang, Julius Li, Ciar\'an Eising
Hardware Accelerators in Autonomous Driving
null
Proceedings of the Irish Machine Vision and Image Processing Conference 2023
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Computing platforms in autonomous vehicles record large amounts of data from many sensors, process the data through machine learning models, and make decisions to ensure the vehicle's safe operation. Fast, accurate, and reliable decision-making is critical. Traditional computer processors lack the power and flexibility needed for the perception and machine vision demands of advanced autonomous driving tasks. Hardware accelerators are special-purpose coprocessors that help autonomous vehicles meet performance requirements for higher levels of autonomy. This paper provides an overview of ML accelerators with examples of their use for machine vision in autonomous vehicles. We offer recommendations for researchers and practitioners and highlight a trajectory for ongoing and future research in this emerging field.
[ { "created": "Fri, 11 Aug 2023 10:07:33 GMT", "version": "v1" } ]
2023-08-14
[ [ "Power", "Ken", "" ], [ "Deva", "Shailendra", "" ], [ "Wang", "Ting", "" ], [ "Li", "Julius", "" ], [ "Eising", "Ciarán", "" ] ]
Computing platforms in autonomous vehicles record large amounts of data from many sensors, process the data through machine learning models, and make decisions to ensure the vehicle's safe operation. Fast, accurate, and reliable decision-making is critical. Traditional computer processors lack the power and flexibility needed for the perception and machine vision demands of advanced autonomous driving tasks. Hardware accelerators are special-purpose coprocessors that help autonomous vehicles meet performance requirements for higher levels of autonomy. This paper provides an overview of ML accelerators with examples of their use for machine vision in autonomous vehicles. We offer recommendations for researchers and practitioners and highlight a trajectory for ongoing and future research in this emerging field.
2012.07717
Lorenzo Porzi
Lorenzo Porzi, Samuel Rota Bul\`o, Peter Kontschieder
Improving Panoptic Segmentation at All Scales
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crop-based training strategies decouple training resolution from GPU memory consumption, allowing the use of large-capacity panoptic segmentation networks on multi-megapixel images. Using crops, however, can introduce a bias towards truncating or missing large objects. To address this, we propose a novel crop-aware bounding box regression loss (CABB loss), which promotes predictions to be consistent with the visible parts of the cropped objects, while not over-penalizing them for extending outside of the crop. We further introduce a novel data sampling and augmentation strategy which improves generalization across scales by counteracting the imbalanced distribution of object sizes. Combining these two contributions with a carefully designed, top-down panoptic segmentation architecture, we obtain new state-of-the-art results on the challenging Mapillary Vistas (MVD), Indian Driving and Cityscapes datasets, surpassing the previously best approach on MVD by +4.5% PQ and +5.2% mAP.
[ { "created": "Mon, 14 Dec 2020 17:11:00 GMT", "version": "v1" }, { "created": "Tue, 23 Mar 2021 13:31:57 GMT", "version": "v2" } ]
2021-03-24
[ [ "Porzi", "Lorenzo", "" ], [ "Bulò", "Samuel Rota", "" ], [ "Kontschieder", "Peter", "" ] ]
Crop-based training strategies decouple training resolution from GPU memory consumption, allowing the use of large-capacity panoptic segmentation networks on multi-megapixel images. Using crops, however, can introduce a bias towards truncating or missing large objects. To address this, we propose a novel crop-aware bounding box regression loss (CABB loss), which promotes predictions to be consistent with the visible parts of the cropped objects, while not over-penalizing them for extending outside of the crop. We further introduce a novel data sampling and augmentation strategy which improves generalization across scales by counteracting the imbalanced distribution of object sizes. Combining these two contributions with a carefully designed, top-down panoptic segmentation architecture, we obtain new state-of-the-art results on the challenging Mapillary Vistas (MVD), Indian Driving and Cityscapes datasets, surpassing the previously best approach on MVD by +4.5% PQ and +5.2% mAP.
2208.01242
Akond Rahman PhD
Akond Rahman and Chris Parnin
Detecting and Characterizing Propagation of Security Weaknesses in Puppet-based Infrastructure Management
14 pages, currently under review
null
null
null
cs.CR cs.SE
http://creativecommons.org/licenses/by/4.0/
Despite being beneficial for managing computing infrastructure automatically, Puppet manifests are susceptible to security weaknesses, e.g., hard-coded secrets and use of weak cryptography algorithms. Adequate mitigation of security weaknesses in Puppet manifests is thus necessary to secure computing infrastructure that is managed with Puppet manifests. A characterization of how security weaknesses propagate and affect Puppet-based infrastructure management can inform practitioners of the relevance of the detected security weaknesses, as well as help them take necessary actions for mitigation. To that end, we conduct an empirical study with 17,629 Puppet manifests mined from 336 open source repositories. We construct Taint Tracker for Puppet Manifests (TaintPup), for which we observe 2.4 times more precision compared to that of a state-of-the-art security static analysis tool. TaintPup leverages Puppet-specific information flow analysis, which we use to characterize the propagation of security weaknesses. From our empirical study, we observe security weaknesses to propagate into 4,457 resources, i.e., Puppet-specific code elements used to manage infrastructure. A single instance of a security weakness can propagate into as many as 35 distinct resources. We observe security weaknesses to propagate into 7 categories of resources, which include resources used to manage continuous integration servers and network controllers. According to our survey with 24 practitioners, propagation of security weaknesses into data storage-related resources is rated to have the most severe impact for Puppet-based infrastructure management.
[ { "created": "Tue, 2 Aug 2022 04:21:52 GMT", "version": "v1" } ]
2022-08-03
[ [ "Rahman", "Akond", "" ], [ "Parnin", "Chris", "" ] ]
Despite being beneficial for managing computing infrastructure automatically, Puppet manifests are susceptible to security weaknesses, e.g., hard-coded secrets and use of weak cryptography algorithms. Adequate mitigation of security weaknesses in Puppet manifests is thus necessary to secure computing infrastructure that is managed with Puppet manifests. A characterization of how security weaknesses propagate and affect Puppet-based infrastructure management can inform practitioners of the relevance of the detected security weaknesses, as well as help them take necessary actions for mitigation. To that end, we conduct an empirical study with 17,629 Puppet manifests mined from 336 open source repositories. We construct Taint Tracker for Puppet Manifests (TaintPup), for which we observe 2.4 times more precision compared to that of a state-of-the-art security static analysis tool. TaintPup leverages Puppet-specific information flow analysis, which we use to characterize the propagation of security weaknesses. From our empirical study, we observe security weaknesses to propagate into 4,457 resources, i.e., Puppet-specific code elements used to manage infrastructure. A single instance of a security weakness can propagate into as many as 35 distinct resources. We observe security weaknesses to propagate into 7 categories of resources, which include resources used to manage continuous integration servers and network controllers. According to our survey with 24 practitioners, propagation of security weaknesses into data storage-related resources is rated to have the most severe impact for Puppet-based infrastructure management.
2304.04718
Jin Xu
Jin Xu, Yangning Li, Xiangjin Xie, Yinghui Li, Niu Hu, Haitao Zheng, Yong Jiang
Investigating Graph Structure Information for Entity Alignment with Dangling Cases
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity alignment (EA) aims to discover the equivalent entities in different knowledge graphs (KGs), which play an important role in knowledge engineering. Recently, EA with dangling entities has been proposed as a more realistic setting, which assumes that not all entities have corresponding equivalent entities. In this paper, we focus on this setting. Some work has explored this problem by leveraging translation APIs, pre-trained word embeddings, and other off-the-shelf tools. However, these approaches over-rely on the side information (e.g., entity names), and fail to work when the side information is absent. On the contrary, they still insufficiently exploit the most fundamental graph structure information in KG. To improve the exploitation of the structural information, we propose a novel entity alignment framework called Weakly-Optimal Graph Contrastive Learning (WOGCL), which is refined on three dimensions: (i) Model. We propose a novel Gated Graph Attention Network to capture local and global graph structure similarity. (ii) Training. Two learning objectives: contrastive learning and optimal transport learning are designed to obtain distinguishable entity representations via the optimal transport plan. (iii) Inference. In the inference phase, a PageRank-based method is proposed to calculate higher-order structural similarity. Extensive experiments on two dangling benchmarks demonstrate that our WOGCL outperforms the current state-of-the-art methods with pure structural information in both traditional (relaxed) and dangling (consolidated) settings. The code will be public soon.
[ { "created": "Mon, 10 Apr 2023 17:24:43 GMT", "version": "v1" } ]
2023-04-11
[ [ "Xu", "Jin", "" ], [ "Li", "Yangning", "" ], [ "Xie", "Xiangjin", "" ], [ "Li", "Yinghui", "" ], [ "Hu", "Niu", "" ], [ "Zheng", "Haitao", "" ], [ "Jiang", "Yong", "" ] ]
Entity alignment (EA) aims to discover the equivalent entities in different knowledge graphs (KGs), which play an important role in knowledge engineering. Recently, EA with dangling entities has been proposed as a more realistic setting, which assumes that not all entities have corresponding equivalent entities. In this paper, we focus on this setting. Some work has explored this problem by leveraging translation APIs, pre-trained word embeddings, and other off-the-shelf tools. However, these approaches over-rely on the side information (e.g., entity names), and fail to work when the side information is absent. On the contrary, they still insufficiently exploit the most fundamental graph structure information in KG. To improve the exploitation of the structural information, we propose a novel entity alignment framework called Weakly-Optimal Graph Contrastive Learning (WOGCL), which is refined on three dimensions: (i) Model. We propose a novel Gated Graph Attention Network to capture local and global graph structure similarity. (ii) Training. Two learning objectives: contrastive learning and optimal transport learning are designed to obtain distinguishable entity representations via the optimal transport plan. (iii) Inference. In the inference phase, a PageRank-based method is proposed to calculate higher-order structural similarity. Extensive experiments on two dangling benchmarks demonstrate that our WOGCL outperforms the current state-of-the-art methods with pure structural information in both traditional (relaxed) and dangling (consolidated) settings. The code will be public soon.
1810.02481
Mostafa Zaman Chowdhury
Mostafa Zaman Chowdhury, Mohd. Noor Islam, Young Min Seo, Young Ki Lee, Sang Bum Kang, Sun Woong Choi, and Yeong Min Jang
Characterizing QoS Parameters and Application of Soft-QoS Scheme for 3G Wireless Networks
International Conference on Advanced Communication Technology (ICACT), Feb. 2008, Korea, pp. 60-94
null
10.1109/ICACT.2008.4493867
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In wireless communication systems, Quality of Service (QoS) is one of the most important issues from both the users' and operators' points of view. Not all QoS-related parameters are equally important for all users and applications. The satisfaction level of different users also does not depend on the same QoS parameters. In this paper, we discuss the QoS parameters and then propose a priority order of QoS parameters based on protocol layers and service applications. We present the relation among the QoS parameters that influence the performance of other QoS parameters and, finally, we demonstrate the numerical analysis results for our proposed soft-QoS scheme to reduce the dropped call rate, which is the most important QoS parameter for all types of services.
[ { "created": "Fri, 5 Oct 2018 01:17:38 GMT", "version": "v1" } ]
2018-10-08
[ [ "Chowdhury", "Mostafa Zaman", "" ], [ "Islam", "Mohd. Noor", "" ], [ "Seo", "Young Min", "" ], [ "Lee", "Young Ki", "" ], [ "Kang", "Sang Bum", "" ], [ "Choi", "Sun Woong", "" ], [ "Jang", "Yeong Min", "" ] ]
In wireless communication systems, Quality of Service (QoS) is one of the most important issues from both the users' and operators' points of view. Not all QoS-related parameters are equally important for all users and applications. The satisfaction level of different users also does not depend on the same QoS parameters. In this paper, we discuss the QoS parameters and then propose a priority order of QoS parameters based on protocol layers and service applications. We present the relation among the QoS parameters that influence the performance of other QoS parameters and, finally, we demonstrate the numerical analysis results for our proposed soft-QoS scheme to reduce the dropped call rate, which is the most important QoS parameter for all types of services.
2211.07591
Justus-Jonas Erker
Justus-Jonas Erker, Stefan Schaffer, Gerasimos Spanakis
Imagination is All You Need! Curved Contrastive Learning for Abstract Sequence Modeling Utilized on Long Short-Term Dialogue Planning
Accepted in ACL 2023 Findings
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the curvature of space-time (Einstein, 1921), we introduce Curved Contrastive Learning (CCL), a novel representation learning technique for learning the relative turn distance between utterance pairs in multi-turn dialogues. The resulting bi-encoder models can guide transformers as a response ranking model towards a goal in a zero-shot fashion by projecting the goal utterance and the corresponding reply candidates into a latent space. Here the cosine similarity indicates the distance/reachability of a candidate utterance toward the corresponding goal. Furthermore, we explore how these forward-entailing language representations can be utilized for assessing the likelihood of sequences by the entailment strength i.e. through the cosine similarity of its individual members (encoded separately) as an emergent property in the curved space. These non-local properties allow us to imagine the likelihood of future patterns in dialogues, specifically by ordering/identifying future goal utterances that are multiple turns away, given a dialogue context. As part of our analysis, we investigate characteristics that make conversations (un)plannable and find strong evidence of planning capability over multiple turns (in 61.56% over 3 turns) in conversations from the DailyDialog (Li et al., 2017) dataset. Finally, we show how we achieve higher efficiency in sequence modeling tasks compared to previous work thanks to our relativistic approach, where only the last utterance needs to be encoded and computed during inference.
[ { "created": "Mon, 14 Nov 2022 18:16:48 GMT", "version": "v1" }, { "created": "Mon, 26 Jun 2023 18:05:48 GMT", "version": "v2" } ]
2023-06-28
[ [ "Erker", "Justus-Jonas", "" ], [ "Schaffer", "Stefan", "" ], [ "Spanakis", "Gerasimos", "" ] ]
Inspired by the curvature of space-time (Einstein, 1921), we introduce Curved Contrastive Learning (CCL), a novel representation learning technique for learning the relative turn distance between utterance pairs in multi-turn dialogues. The resulting bi-encoder models can guide transformers as a response ranking model towards a goal in a zero-shot fashion by projecting the goal utterance and the corresponding reply candidates into a latent space. Here the cosine similarity indicates the distance/reachability of a candidate utterance toward the corresponding goal. Furthermore, we explore how these forward-entailing language representations can be utilized for assessing the likelihood of sequences by the entailment strength i.e. through the cosine similarity of its individual members (encoded separately) as an emergent property in the curved space. These non-local properties allow us to imagine the likelihood of future patterns in dialogues, specifically by ordering/identifying future goal utterances that are multiple turns away, given a dialogue context. As part of our analysis, we investigate characteristics that make conversations (un)plannable and find strong evidence of planning capability over multiple turns (in 61.56% over 3 turns) in conversations from the DailyDialog (Li et al., 2017) dataset. Finally, we show how we achieve higher efficiency in sequence modeling tasks compared to previous work thanks to our relativistic approach, where only the last utterance needs to be encoded and computed during inference.
2107.13045
Alexander Dallmann
Alexander Dallmann, Daniel Zoller, Andreas Hotho
A Case Study on Sampling Strategies for Evaluating Neural Sequential Item Recommendation Models
null
null
10.1145/3460231.3475943
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the present time, sequential item recommendation models are compared by calculating metrics on a small item subset (target set) to speed up computation. The target set contains the relevant item and a set of negative items that are sampled from the full item set. Two well-known strategies to sample negative items are uniform random sampling and sampling by popularity to better approximate the item frequency distribution in the dataset. Most recently published papers on sequential item recommendation rely on sampling by popularity to compare the evaluated models. However, recent work has already shown that an evaluation with uniform random sampling may not be consistent with the full ranking, that is, the model ranking obtained by evaluating a metric using the full item set as target set, which raises the question whether the ranking obtained by sampling by popularity is equal to the full ranking. In this work, we re-evaluate current state-of-the-art sequential recommender models from the point of view of whether these sampling strategies have an impact on the final ranking of the models. We therefore train four recently proposed sequential recommendation models on five widely known datasets. For each dataset and model, we employ three evaluation strategies. First, we compute the full model ranking. Then we evaluate all models on a target set sampled by the two different sampling strategies, uniform random sampling and sampling by popularity with the commonly used target set size of 100, compute the model ranking for each strategy and compare them with each other. Additionally, we vary the size of the sampled target set. Overall, we find that both sampling strategies can produce inconsistent rankings compared with the full ranking of the models. Furthermore, both sampling by popularity and uniform random sampling do not consistently produce the same ranking ...
[ { "created": "Tue, 27 Jul 2021 19:06:03 GMT", "version": "v1" } ]
2021-07-29
[ [ "Dallmann", "Alexander", "" ], [ "Zoller", "Daniel", "" ], [ "Hotho", "Andreas", "" ] ]
At the present time, sequential item recommendation models are compared by calculating metrics on a small item subset (target set) to speed up computation. The target set contains the relevant item and a set of negative items that are sampled from the full item set. Two well-known strategies to sample negative items are uniform random sampling and sampling by popularity to better approximate the item frequency distribution in the dataset. Most recently published papers on sequential item recommendation rely on sampling by popularity to compare the evaluated models. However, recent work has already shown that an evaluation with uniform random sampling may not be consistent with the full ranking, that is, the model ranking obtained by evaluating a metric using the full item set as target set, which raises the question whether the ranking obtained by sampling by popularity is equal to the full ranking. In this work, we re-evaluate current state-of-the-art sequential recommender models from the point of view of whether these sampling strategies have an impact on the final ranking of the models. We therefore train four recently proposed sequential recommendation models on five widely known datasets. For each dataset and model, we employ three evaluation strategies. First, we compute the full model ranking. Then we evaluate all models on a target set sampled by the two different sampling strategies, uniform random sampling and sampling by popularity with the commonly used target set size of 100, compute the model ranking for each strategy and compare them with each other. Additionally, we vary the size of the sampled target set. Overall, we find that both sampling strategies can produce inconsistent rankings compared with the full ranking of the models. Furthermore, both sampling by popularity and uniform random sampling do not consistently produce the same ranking ...
2308.13858
Jiuyu Liu
Jiuyu Liu and Yi Ma and Rahim Tafazolli
A Spatially Non-stationary Fading Channel Model for Simulation and (Semi-) Analytical Study of ELAA-MIMO
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel spatially non-stationary fading channel model is proposed for multiple-input multiple-output (MIMO) systems with an extremely-large aperture service-array (ELAA). The proposed model incorporates three key factors that cause the channel spatial non-stationarity: 1) link-wise path-loss; 2) shadowing effect; 3) line-of-sight (LoS)/non-LoS state. With appropriate parameter configurations, the proposed model can be used to generate computer-simulated channel data that matches the published measurement data from practical ELAA-MIMO channels. Given such appealing results, the proposed fading channel model is employed to study the cumulative distribution function (CDF) of ELAA-MIMO channel capacity. For all of our studied scenarios, it is unveiled that the ELAA-MIMO channel capacity obeys the skew-normal distribution. Moreover, the channel capacity is also found close to the Gaussian or Weibull distribution, depending on users' geo-location and distribution. More specifically, for single-user equivalent scenarios or multiuser scenarios with short user-to-ELAA distances (e.g., 1 m), the channel capacity is close to the Gaussian distribution; and for others, it is close to the Weibull distribution. Finally, the proposed channel model is also employed to study the impact of channel spatial non-stationarity on linear MIMO receivers through computer simulations. The proposed fading channel model is available at https://github.com/ELAA-MIMO/non-stationary-fading-channel-model.
[ { "created": "Sat, 26 Aug 2023 12:26:44 GMT", "version": "v1" } ]
2023-08-29
[ [ "Liu", "Jiuyu", "" ], [ "Ma", "Yi", "" ], [ "Tafazolli", "Rahim", "" ] ]
In this paper, a novel spatially non-stationary fading channel model is proposed for multiple-input multiple-output (MIMO) systems with an extremely-large aperture service-array (ELAA). The proposed model incorporates three key factors that cause the channel spatial non-stationarity: 1) link-wise path-loss; 2) shadowing effect; 3) line-of-sight (LoS)/non-LoS state. With appropriate parameter configurations, the proposed model can be used to generate computer-simulated channel data that matches the published measurement data from practical ELAA-MIMO channels. Given such appealing results, the proposed fading channel model is employed to study the cumulative distribution function (CDF) of ELAA-MIMO channel capacity. For all of our studied scenarios, it is unveiled that the ELAA-MIMO channel capacity obeys the skew-normal distribution. Moreover, the channel capacity is also found close to the Gaussian or Weibull distribution, depending on users' geo-location and distribution. More specifically, for single-user equivalent scenarios or multiuser scenarios with short user-to-ELAA distances (e.g., 1 m), the channel capacity is close to the Gaussian distribution; and for others, it is close to the Weibull distribution. Finally, the proposed channel model is also employed to study the impact of channel spatial non-stationarity on linear MIMO receivers through computer simulations. The proposed fading channel model is available at https://github.com/ELAA-MIMO/non-stationary-fading-channel-model.
2305.03292
Yuchen Shi
Yuchen Shi, Zheqi Zhu, Pingyi Fan, Khaled B. Letaief and Chenghui Peng
FedNC: A Secure and Efficient Federated Learning Method with Network Coding
null
null
null
null
cs.LG cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) is a promising distributed learning mechanism which still faces two major challenges, namely privacy breaches and system efficiency. In this work, we reconceptualize the FL system from the perspective of network information theory, and formulate an original FL communication framework, FedNC, which is inspired by Network Coding (NC). The main idea of FedNC is mixing the information of the local models by making random linear combinations of the original parameters, before uploading for further aggregation. Due to the benefits of the coding scheme, both theoretical and experimental analysis indicate that FedNC improves the performance of traditional FL in several important ways, including security, efficiency, and robustness. To the best of our knowledge, this is the first framework where NC is introduced in FL. As FL continues to evolve within practical network frameworks, more variants can be further designed based on FedNC.
[ { "created": "Fri, 5 May 2023 05:47:40 GMT", "version": "v1" }, { "created": "Tue, 16 May 2023 14:37:52 GMT", "version": "v2" }, { "created": "Tue, 9 Jan 2024 03:20:48 GMT", "version": "v3" } ]
2024-01-10
[ [ "Shi", "Yuchen", "" ], [ "Zhu", "Zheqi", "" ], [ "Fan", "Pingyi", "" ], [ "Letaief", "Khaled B.", "" ], [ "Peng", "Chenghui", "" ] ]
Federated Learning (FL) is a promising distributed learning mechanism which still faces two major challenges, namely privacy breaches and system efficiency. In this work, we reconceptualize the FL system from the perspective of network information theory, and formulate an original FL communication framework, FedNC, which is inspired by Network Coding (NC). The main idea of FedNC is mixing the information of the local models by making random linear combinations of the original parameters, before uploading for further aggregation. Due to the benefits of the coding scheme, both theoretical and experimental analysis indicate that FedNC improves the performance of traditional FL in several important ways, including security, efficiency, and robustness. To the best of our knowledge, this is the first framework where NC is introduced in FL. As FL continues to evolve within practical network frameworks, more variants can be further designed based on FedNC.
1306.0543
Misha Denil
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando de Freitas
Predicting Parameters in Deep Learning
null
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
[ { "created": "Mon, 3 Jun 2013 19:16:26 GMT", "version": "v1" }, { "created": "Mon, 27 Oct 2014 11:49:08 GMT", "version": "v2" } ]
2014-10-28
[ [ "Denil", "Misha", "" ], [ "Shakibi", "Babak", "" ], [ "Dinh", "Laurent", "" ], [ "Ranzato", "Marc'Aurelio", "" ], [ "de Freitas", "Nando", "" ] ]
We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
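The redundancy claim can be illustrated with a toy low-rank weight matrix: given a low-dimensional basis (which the paper learns from data) and a few observed weights per feature, the remaining weights follow by least squares. This is a simplified sketch under the assumption that the basis is known, not the authors' kernel-based predictor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "weight matrix" with redundancy: rank 2, shape (8, 6).
U_true = rng.standard_normal((8, 2))
V_true = rng.standard_normal((2, 6))
W = U_true @ V_true

# Observe only the first 3 rows of each column ("a few weight values
# per feature"); assume the basis U is given (the paper learns it).
U = U_true
obs = W[:3]
V_hat = np.linalg.lstsq(U[:3], obs, rcond=None)[0]  # infer coefficients
W_hat = U @ V_hat                                   # predict all 8x6 weights
err = np.abs(W_hat - W).max()                       # near machine precision
```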
1802.00303
Thomas Gibson
Thomas H. Gibson, Lawrence Mitchell, David A. Ham, Colin J. Cotter
Slate: extending Firedrake's domain-specific abstraction to hybridized solvers for geoscience and beyond
Revisions for submission to GMD
Geoscientific Model Development 13:735-761 (2020)
10.5194/gmd-13-735-2020
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within the finite element community, discontinuous Galerkin (DG) and mixed finite element methods have become increasingly popular in simulating geophysical flows. However, robust and efficient solvers for the resulting saddle-point and elliptic systems arising from these discretizations continue to be an ongoing challenge. One possible approach for addressing this issue is to employ a method known as hybridization, where the discrete equations are transformed such that classic static condensation and local post-processing methods can be employed. However, it is challenging to implement hybridization as performant parallel code within complex models, whilst maintaining separation of concerns between applications scientists and software experts. In this paper, we introduce a domain-specific abstraction within the Firedrake finite element library that permits the rapid execution of these hybridization techniques within a code-generating framework. The resulting framework composes naturally with Firedrake's solver environment, allowing for the implementation of hybridization and static condensation as runtime-configurable preconditioners via the Python interface to PETSc, petsc4py. We provide examples derived from second order elliptic problems and geophysical fluid dynamics. In addition, we demonstrate that hybridization shows great promise for improving the performance of solvers for mixed finite element discretizations of equations related to large-scale geophysical flows.
[ { "created": "Thu, 1 Feb 2018 14:37:13 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2018 19:24:55 GMT", "version": "v2" }, { "created": "Fri, 6 Jul 2018 14:14:24 GMT", "version": "v3" }, { "created": "Mon, 1 Apr 2019 19:34:09 GMT", "version": "v4" } ]
2020-08-26
[ [ "Gibson", "Thomas H.", "" ], [ "Mitchell", "Lawrence", "" ], [ "Ham", "David A.", "" ], [ "Cotter", "Colin J.", "" ] ]
Within the finite element community, discontinuous Galerkin (DG) and mixed finite element methods have become increasingly popular in simulating geophysical flows. However, robust and efficient solvers for the resulting saddle-point and elliptic systems arising from these discretizations continue to be an ongoing challenge. One possible approach for addressing this issue is to employ a method known as hybridization, where the discrete equations are transformed such that classic static condensation and local post-processing methods can be employed. However, it is challenging to implement hybridization as performant parallel code within complex models, whilst maintaining separation of concerns between applications scientists and software experts. In this paper, we introduce a domain-specific abstraction within the Firedrake finite element library that permits the rapid execution of these hybridization techniques within a code-generating framework. The resulting framework composes naturally with Firedrake's solver environment, allowing for the implementation of hybridization and static condensation as runtime-configurable preconditioners via the Python interface to PETSc, petsc4py. We provide examples derived from second order elliptic problems and geophysical fluid dynamics. In addition, we demonstrate that hybridization shows great promise for improving the performance of solvers for mixed finite element discretizations of equations related to large-scale geophysical flows.
1606.03180
Yoshihiko Kakutani
Yoshihiko Kakutani
Calculi for Intuitionistic Normal Modal Logic
null
null
null
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a call-by-name and a call-by-value term calculus, both of which have a Curry-Howard correspondence to the box fragment of the intuitionistic modal logic IK. Strong normalization and confluence of the calculi are shown. Moreover, we define a CPS transformation from the call-by-value calculus to the call-by-name calculus, and show its soundness and completeness.
[ { "created": "Fri, 10 Jun 2016 05:19:27 GMT", "version": "v1" } ]
2016-06-17
[ [ "Kakutani", "Yoshihiko", "" ] ]
This paper provides a call-by-name and a call-by-value term calculus, both of which have a Curry-Howard correspondence to the box fragment of the intuitionistic modal logic IK. Strong normalization and confluence of the calculi are shown. Moreover, we define a CPS transformation from the call-by-value calculus to the call-by-name calculus, and show its soundness and completeness.
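For readers unfamiliar with CPS transformations, the core idea on the pure lambda-calculus fragment (Plotkin's call-by-value CPS) can be sketched as follows; the paper's transform additionally handles the modal box fragment, which this toy omits. Terms are encoded as tuples, and all names here are illustrative:

```python
# Plotkin's call-by-value CPS transform for the pure lambda calculus.
# Terms: ('var', x), ('lam', x, body), ('app', f, a).
fresh = iter(f"k{i}" for i in range(10**6))  # fresh-name supply

def cps(term):
    k = next(fresh)
    if term[0] == 'var':                 # [[x]] = \k. k x
        return ('lam', k, ('app', ('var', k), term))
    if term[0] == 'lam':                 # [[\x.M]] = \k. k (\x.[[M]])
        _, x, body = term
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    _, f, a = term                       # [[M N]] = \k. [[M]] (\m. [[N]] (\n. m n k))
    m, n = next(fresh), next(fresh)
    return ('lam', k,
            ('app', cps(f),
             ('lam', m,
              ('app', cps(a),
               ('lam', n,
                ('app', ('app', ('var', m), ('var', n)), ('var', k)))))))

t = cps(('var', 'x'))  # \k0. k0 x
```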
1811.08973
Siddharth Karamcheti
Siddharth Karamcheti, Gideon Mann, and David Rosenberg
Improving Grey-Box Fuzzing by Modeling Program Behavior
5 pages, 3 figures
null
null
null
cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grey-box fuzzers such as American Fuzzy Lop (AFL) are popular tools for finding bugs and potential vulnerabilities in programs. While these fuzzers have been able to find vulnerabilities in many widely used programs, they are not efficient; of the millions of inputs executed by AFL in a typical fuzzing run, only a handful discover unseen behavior or trigger a crash. The remaining inputs are redundant, exhibiting behavior that has already been observed. Here, we present an approach to increase the efficiency of fuzzers like AFL by applying machine learning to directly model how programs behave. We learn a forward prediction model that maps program inputs to execution traces, training on the thousands of inputs collected during standard fuzzing. This learned model guides exploration by focusing on fuzzing inputs on which our model is the most uncertain (measured via the entropy of the predicted execution trace distribution). By focusing on executing inputs our learned model is unsure about, and ignoring any input whose behavior our model is certain about, we show that we can significantly limit wasteful execution. Through testing our approach on a set of binaries released as part of the DARPA Cyber Grand Challenge, we show that our approach is able to find a set of inputs that result in more code coverage and discovered crashes than baseline fuzzers with significantly fewer executions.
[ { "created": "Wed, 21 Nov 2018 23:34:57 GMT", "version": "v1" } ]
2018-11-26
[ [ "Karamcheti", "Siddharth", "" ], [ "Mann", "Gideon", "" ], [ "Rosenberg", "David", "" ] ]
Grey-box fuzzers such as American Fuzzy Lop (AFL) are popular tools for finding bugs and potential vulnerabilities in programs. While these fuzzers have been able to find vulnerabilities in many widely used programs, they are not efficient; of the millions of inputs executed by AFL in a typical fuzzing run, only a handful discover unseen behavior or trigger a crash. The remaining inputs are redundant, exhibiting behavior that has already been observed. Here, we present an approach to increase the efficiency of fuzzers like AFL by applying machine learning to directly model how programs behave. We learn a forward prediction model that maps program inputs to execution traces, training on the thousands of inputs collected during standard fuzzing. This learned model guides exploration by focusing on fuzzing inputs on which our model is the most uncertain (measured via the entropy of the predicted execution trace distribution). By focusing on executing inputs our learned model is unsure about, and ignoring any input whose behavior our model is certain about, we show that we can significantly limit wasteful execution. Through testing our approach on a set of binaries released as part of the DARPA Cyber Grand Challenge, we show that our approach is able to find a set of inputs that result in more code coverage and discovered crashes than baseline fuzzers with significantly fewer executions.
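The uncertainty-driven scheduling step — score each candidate input by the entropy of its predicted execution trace and fuzz the most uncertain first — can be sketched directly. This treats the trace as independent per-branch Bernoulli predictions, a simplifying assumption for illustration; the function and input names are hypothetical:

```python
import numpy as np

def trace_entropy(probs):
    """Shannon entropy (bits) of a predicted execution-trace
    distribution, one Bernoulli probability per coverage branch."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)).sum())

def rank_inputs(candidates):
    """Order (name, predicted_probs) pairs so the inputs the model is
    most uncertain about are fuzzed first."""
    return sorted(candidates, key=lambda kv: trace_entropy(kv[1]), reverse=True)

candidates = {
    "input_a": [0.99, 0.01, 0.98],  # model is confident -> low entropy
    "input_b": [0.50, 0.60, 0.40],  # model is unsure    -> high entropy
}
order = [name for name, _ in rank_inputs(candidates.items())]
```

Inputs whose predicted behavior is near-certain (like `input_a`) are deprioritized, which is how wasteful executions are avoided.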
2106.07464
Stassa Patsantzis
Stassa Patsantzis and Stephen H. Muggleton
Meta-Interpretive Learning as Metarule Specialisation
29 pages. Submitted to the Machine Learning Journal Special Issue on Learning and Reasoning on June 1st, 2021. Revised and resubmitted on 16/09/21. Revised again and resubmitted on 09/12/2021. Accepted for publication in January 2022
null
null
null
cs.LG cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Meta-Interpretive Learning (MIL) the metarules, second-order datalog clauses acting as inductive bias, are manually defined by the user. In this work we show that second-order metarules for MIL can be learned by MIL. We define a generality ordering of metarules by $\theta$-subsumption and show that user-defined \emph{sort metarules} are derivable by specialisation of the most-general \emph{matrix metarules} in a language class; and that these matrix metarules are in turn derivable by specialisation of third-order \emph{punch metarules} with variables quantified over the set of atoms and for which only an upper bound on their number of literals need be user-defined. We show that the cardinality of a metarule language is polynomial in the number of literals in punch metarules. We re-frame MIL as metarule specialisation by resolution. We modify the MIL metarule specialisation operator to return new metarules rather than first-order clauses and prove the correctness of the new operator. We implement the new operator as TOIL, a sub-system of the MIL system Louise. Our experiments show that as user-defined sort metarules are progressively replaced by sort metarules learned by TOIL, Louise's predictive accuracy and training times are maintained. We conclude that automatically derived metarules can replace user-defined metarules.
[ { "created": "Wed, 9 Jun 2021 16:01:27 GMT", "version": "v1" }, { "created": "Sat, 11 Sep 2021 14:46:57 GMT", "version": "v2" }, { "created": "Thu, 16 Sep 2021 15:56:00 GMT", "version": "v3" }, { "created": "Wed, 3 Nov 2021 15:33:17 GMT", "version": "v4" }, { "created": "Wed, 8 Dec 2021 23:18:34 GMT", "version": "v5" }, { "created": "Fri, 11 Feb 2022 16:42:48 GMT", "version": "v6" } ]
2022-02-14
[ [ "Patsantzis", "Stassa", "" ], [ "Muggleton", "Stephen H.", "" ] ]
In Meta-Interpretive Learning (MIL) the metarules, second-order datalog clauses acting as inductive bias, are manually defined by the user. In this work we show that second-order metarules for MIL can be learned by MIL. We define a generality ordering of metarules by $\theta$-subsumption and show that user-defined \emph{sort metarules} are derivable by specialisation of the most-general \emph{matrix metarules} in a language class; and that these matrix metarules are in turn derivable by specialisation of third-order \emph{punch metarules} with variables quantified over the set of atoms and for which only an upper bound on their number of literals need be user-defined. We show that the cardinality of a metarule language is polynomial in the number of literals in punch metarules. We re-frame MIL as metarule specialisation by resolution. We modify the MIL metarule specialisation operator to return new metarules rather than first-order clauses and prove the correctness of the new operator. We implement the new operator as TOIL, a sub-system of the MIL system Louise. Our experiments show that as user-defined sort metarules are progressively replaced by sort metarules learned by TOIL, Louise's predictive accuracy and training times are maintained. We conclude that automatically derived metarules can replace user-defined metarules.
2210.11061
Pedro Miguel Sanchez Sanchez
Pedro Miguel S\'anchez S\'anchez, Alberto Huertas Celdr\'an, Enrique Tom\'as Mart\'inez Beltr\'an, Daniel Demeter, G\'er\^ome Bovet, Gregorio Mart\'inez P\'erez, Burkhard Stiller
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Federated learning (FL) allows participants to collaboratively train machine and deep learning models while protecting data privacy. However, the FL paradigm still presents drawbacks affecting its trustworthiness since malicious participants could launch adversarial attacks against the training process. Related work has studied the robustness of horizontal FL scenarios under different attacks. However, there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Thus, this work proposes three decentralized FL architectures, one for horizontal and two for vertical scenarios, namely HoriChain, VertiChain, and VertiComb. These architectures present different neural networks and training protocols suitable for horizontal and vertical scenarios. Then, a decentralized, privacy-preserving, and federated use case with non-IID data to classify handwritten digits is deployed to evaluate the performance of the three architectures. Finally, a set of experiments computes and compares the robustness of the proposed architectures when they are affected by data poisoning attacks based on image watermarks and by gradient poisoning adversarial attacks. The experiments show that even though particular configurations of both attacks can destroy the classification performance of the architectures, HoriChain is the most robust one.
[ { "created": "Thu, 20 Oct 2022 07:33:17 GMT", "version": "v1" } ]
2022-10-21
[ [ "Sánchez", "Pedro Miguel Sánchez", "" ], [ "Celdrán", "Alberto Huertas", "" ], [ "Beltrán", "Enrique Tomás Martínez", "" ], [ "Demeter", "Daniel", "" ], [ "Bovet", "Gérôme", "" ], [ "Pérez", "Gregorio Martínez", "" ], [ "Stiller", "Burkhard", "" ] ]
Federated learning (FL) allows participants to collaboratively train machine and deep learning models while protecting data privacy. However, the FL paradigm still presents drawbacks affecting its trustworthiness since malicious participants could launch adversarial attacks against the training process. Related work has studied the robustness of horizontal FL scenarios under different attacks. However, there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Thus, this work proposes three decentralized FL architectures, one for horizontal and two for vertical scenarios, namely HoriChain, VertiChain, and VertiComb. These architectures present different neural networks and training protocols suitable for horizontal and vertical scenarios. Then, a decentralized, privacy-preserving, and federated use case with non-IID data to classify handwritten digits is deployed to evaluate the performance of the three architectures. Finally, a set of experiments computes and compares the robustness of the proposed architectures when they are affected by data poisoning attacks based on image watermarks and by gradient poisoning adversarial attacks. The experiments show that even though particular configurations of both attacks can destroy the classification performance of the architectures, HoriChain is the most robust one.
2002.10394
Gr\'egoire Jauvion
Gr\'egoire Jauvion, Thibaut Cassard, Boris Quennehen, David Lissmyr
DeepPlume: Very High Resolution Real-Time Air Quality Mapping
8 pages, 8 figures
null
null
null
cs.LG cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an engine able to predict jointly the real-time concentrations of the main pollutants harming people's health: nitrogen dioxide (NO2), ozone (O3) and particulate matter (PM2.5 and PM10, which are respectively the particles whose sizes are below 2.5 um and 10 um). The engine covers a large part of the world and is fed with real-time official station measurements, atmospheric models' forecasts, land cover data, road networks and traffic estimates to produce predictions with a very high resolution, in the range of a few dozen meters. This resolution makes the engine suited to innovative applications like street-level air quality mapping or air quality adjusted routing. Plume Labs has deployed a similar prediction engine to build several products aiming at providing air quality data to individuals and businesses. For the sake of clarity and reproducibility, the engine presented here has been built specifically for this paper and differs quite significantly from the one used in Plume Labs' products. A major difference is in the data sources feeding the engine: in particular, this prediction engine does not include mobile sensor measurements.
[ { "created": "Fri, 14 Feb 2020 14:05:45 GMT", "version": "v1" } ]
2020-02-25
[ [ "Jauvion", "Grégoire", "" ], [ "Cassard", "Thibaut", "" ], [ "Quennehen", "Boris", "" ], [ "Lissmyr", "David", "" ] ]
This paper presents an engine able to predict jointly the real-time concentrations of the main pollutants harming people's health: nitrogen dioxide (NO2), ozone (O3) and particulate matter (PM2.5 and PM10, which are respectively the particles whose sizes are below 2.5 um and 10 um). The engine covers a large part of the world and is fed with real-time official station measurements, atmospheric models' forecasts, land cover data, road networks and traffic estimates to produce predictions with a very high resolution, in the range of a few dozen meters. This resolution makes the engine suited to innovative applications like street-level air quality mapping or air quality adjusted routing. Plume Labs has deployed a similar prediction engine to build several products aiming at providing air quality data to individuals and businesses. For the sake of clarity and reproducibility, the engine presented here has been built specifically for this paper and differs quite significantly from the one used in Plume Labs' products. A major difference is in the data sources feeding the engine: in particular, this prediction engine does not include mobile sensor measurements.
2404.02923
Mehdi Jabbari Zideh
Mehdi Jabbari Zideh, Mohammad Reza Khalghani, and Sarika Khushalani Solanki
An Unsupervised Adversarial Autoencoder for Cyber Attack Detection in Power Distribution Grids
null
null
null
null
cs.CR cs.AI cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Detection of cyber attacks in smart power distribution grids with unbalanced configurations poses challenges due to the inherent nonlinear nature of these uncertain and stochastic systems. This stems from the intermittent generation of distributed energy resources (DERs) and from load variations. Moreover, the unknown behavior of cyber attacks, especially false data injection attacks (FDIAs) in distribution grids with complex temporal correlations, together with the limited amount of labeled data, increases the vulnerability of the grids and poses a high risk to their secure and reliable operation. To address these challenges, this paper proposes an unsupervised adversarial autoencoder (AAE) model to detect FDIAs in unbalanced power distribution grids integrated with DERs, i.e., PV systems and wind generation. The proposed method utilizes long short-term memory (LSTM) in the structure of the autoencoder to capture the temporal dependencies in the time-series measurements and leverages the power of generative adversarial networks (GANs) for better reconstruction of the input data. The advantage of the proposed data-driven model is that it can detect anomalous points for the system operation without reliance on abstract models or mathematical representations. To evaluate the efficacy of the approach, it is tested on IEEE 13-bus and 123-bus systems with historical meteorological data (wind speed, ambient temperature, and solar irradiance) as well as historical real-world load data under three types of data falsification functions. The comparison of the detection results of the proposed model with other unsupervised learning methods verifies its superior performance in detecting cyber attacks in unbalanced power distribution grids.
[ { "created": "Sun, 31 Mar 2024 01:20:01 GMT", "version": "v1" } ]
2024-04-05
[ [ "Zideh", "Mehdi Jabbari", "" ], [ "Khalghani", "Mohammad Reza", "" ], [ "Solanki", "Sarika Khushalani", "" ] ]
Detection of cyber attacks in smart power distribution grids with unbalanced configurations poses challenges due to the inherent nonlinear nature of these uncertain and stochastic systems. This stems from the intermittent generation of distributed energy resources (DERs) and from load variations. Moreover, the unknown behavior of cyber attacks, especially false data injection attacks (FDIAs) in distribution grids with complex temporal correlations, together with the limited amount of labeled data, increases the vulnerability of the grids and poses a high risk to their secure and reliable operation. To address these challenges, this paper proposes an unsupervised adversarial autoencoder (AAE) model to detect FDIAs in unbalanced power distribution grids integrated with DERs, i.e., PV systems and wind generation. The proposed method utilizes long short-term memory (LSTM) in the structure of the autoencoder to capture the temporal dependencies in the time-series measurements and leverages the power of generative adversarial networks (GANs) for better reconstruction of the input data. The advantage of the proposed data-driven model is that it can detect anomalous points for the system operation without reliance on abstract models or mathematical representations. To evaluate the efficacy of the approach, it is tested on IEEE 13-bus and 123-bus systems with historical meteorological data (wind speed, ambient temperature, and solar irradiance) as well as historical real-world load data under three types of data falsification functions. The comparison of the detection results of the proposed model with other unsupervised learning methods verifies its superior performance in detecting cyber attacks in unbalanced power distribution grids.
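The paper's detector is an LSTM-based adversarial autoencoder, but the underlying decision rule — flag a sample whose reconstruction error, under a model fitted to normal data, exceeds a threshold — can be sketched with a linear (PCA) stand-in for the autoencoder. All names, the 1-D subspace, and the injected bias are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal measurements live near a 1-D subspace plus small noise;
# a false data injection pushes a sample off that subspace.
normal = rng.standard_normal((200, 1)) @ np.array([[1.0, 0.5, 2.0]])
normal += 0.01 * rng.standard_normal(normal.shape)

mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
basis = Vt[:1]                           # linear "encoder/decoder"

def recon_error(x):
    z = (x - mu) @ basis.T               # encode to the latent space
    return float(np.linalg.norm((x - mu) - z @ basis))  # decode, residual

tau = max(recon_error(x) for x in normal)  # threshold from clean data

attacked = np.array([1.0, 0.5, 2.0]) + np.array([0.0, 3.0, 0.0])  # injected bias
is_attack = recon_error(attacked) > tau
```

Replacing the PCA projection with a trained LSTM autoencoder (plus the GAN regularizer on the latent code) yields the detector described in the abstract; the thresholding logic stays the same.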
2004.10247
Larissa Shimomura
Larissa C. Shimomura (Eindhoven University of Technology), George Fletcher (Eindhoven University of Technology), Nikolay Yakovets (Eindhoven University of Technology)
GGDs: Graph Generating Dependencies
5 pages
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
We propose Graph Generating Dependencies (GGDs), a new class of dependencies for property graphs. Extending the expressivity of state-of-the-art constraint languages, GGDs can express both tuple- and equality-generating dependencies on property graphs, both of which find broad application in graph data management. We provide the formal definition of GGDs, analyze the validation problem for GGDs, and demonstrate the practical utility of GGDs.
[ { "created": "Tue, 21 Apr 2020 19:20:02 GMT", "version": "v1" }, { "created": "Tue, 26 May 2020 08:54:20 GMT", "version": "v2" }, { "created": "Mon, 15 Jun 2020 13:48:46 GMT", "version": "v3" } ]
2020-06-16
[ [ "Shimomura", "Larissa C.", "", "Eindhoven University of Technology" ], [ "Fletcher", "George", "", "Eindhoven University of Technology" ], [ "Yakovets", "Nikolay", "", "Eindhoven\n University of Technology" ] ]
We propose Graph Generating Dependencies (GGDs), a new class of dependencies for property graphs. Extending the expressivity of state-of-the-art constraint languages, GGDs can express both tuple- and equality-generating dependencies on property graphs, both of which find broad application in graph data management. We provide the formal definition of GGDs, analyze the validation problem for GGDs, and demonstrate the practical utility of GGDs.
1812.07172
Risto Vuorio
Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim
Toward Multimodal Model-Agnostic Meta-Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
[ { "created": "Tue, 18 Dec 2018 05:08:54 GMT", "version": "v1" } ]
2018-12-19
[ [ "Vuorio", "Risto", "" ], [ "Sun", "Shao-Hua", "" ], [ "Hu", "Hexiang", "" ], [ "Lim", "Joseph J.", "" ] ]
Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
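The modulation idea — turning a single shared initialization into a task-specific one via a learned, task-conditioned transformation of the prior — can be sketched with a scale-and-shift (FiLM-style) modulation. The specific form of modulation and all names here are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

theta = rng.standard_normal(4)        # meta-learned prior (shared init)
W_gamma = rng.standard_normal((4, 2)) # learned modulation networks,
W_beta = rng.standard_normal((4, 2))  # here reduced to linear maps

def modulate(theta, task_embedding):
    """Produce a task-specific starting point by scaling and shifting
    the shared prior according to the identified task mode."""
    gamma = W_gamma @ task_embedding
    beta = W_beta @ task_embedding
    return gamma * theta + beta

# Two task modes yield two distinct initializations from one prior,
# from which ordinary MAML-style gradient updates would proceed.
init_a = modulate(theta, np.array([1.0, 0.0]))
init_b = modulate(theta, np.array([0.0, 1.0]))
different = not np.allclose(init_a, init_b)
```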