Dataset schema (field: type, min-max of the per-field length statistic):

id: string, length 9-10
submitter: string, length 1-64
authors: string, length 4-20.7k
title: string, length 4-246
comments: string, length 1-523
journal-ref: string, length 4-404
doi: string, length 11-153
report-no: string, length 2-254
categories: string, length 5-98
license: string class, 9 distinct values
orig_abstract: string, length 14-3.35k
versions: list, length 1-60
update_date: string, length 10-10
authors_parsed: list, length 1-1.35k
abstract: string, length 11-3.34k
id: 1404.3840
submitter: Chaochao Lu
authors: Chaochao Lu, Xiaoou Tang
title: Surpassing Human-Level Face Verification Performance on LFW with GaussianFace
comments: Appearing in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15), Oral Presentation
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalize to unseen domain. Specifically, the accuracy of our algorithm achieves an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.
versions: [ { "created": "Tue, 15 Apr 2014 07:51:23 GMT", "version": "v1" }, { "created": "Mon, 16 Jun 2014 14:37:38 GMT", "version": "v2" }, { "created": "Sat, 20 Dec 2014 03:37:36 GMT", "version": "v3" } ]
update_date: 2014-12-23
authors_parsed: [ [ "Lu", "Chaochao", "" ], [ "Tang", "Xiaoou", "" ] ]
abstract: Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on the Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalizing to an unseen domain. Specifically, our algorithm achieves an impressive accuracy of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, human-level performance in face verification (97.53%) on LFW is surpassed.
id: 2401.00910
submitter: Saravanabalagi Ramachandran
authors: Saravanabalagi Ramachandran and Nathaniel Cibik and Ganesh Sistu and John McDonald
title: WoodScape Motion Segmentation for Autonomous Driving -- CVPR 2023 OmniCV Workshop Challenge
comments: CVPR 2023 OmniCV Workshop Challenge
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Motion segmentation is a complex yet indispensable task in autonomous driving. The challenges introduced by the ego-motion of the cameras, radial distortion in fisheye lenses, and the need for temporal consistency make the task more complicated, rendering traditional and standard Convolutional Neural Network (CNN) approaches less effective. The consequent laborious data labeling, representation of diverse and uncommon scenarios, and extensive data capture requirements underscore the imperative of synthetic data for improving machine learning model performance. To this end, we employ the PD-WoodScape synthetic dataset developed by Parallel Domain, alongside the WoodScape fisheye dataset. Thus, we present the WoodScape fisheye motion segmentation challenge for autonomous driving, held as part of the CVPR 2023 Workshop on Omnidirectional Computer Vision (OmniCV). As one of the first competitions focused on fisheye motion segmentation, we aim to explore and evaluate the potential and impact of utilizing synthetic data in this domain. In this paper, we provide a detailed analysis on the competition which attracted the participation of 112 global teams and a total of 234 submissions. This study delineates the complexities inherent in the task of motion segmentation, emphasizes the significance of fisheye datasets, articulate the necessity for synthetic datasets and the resultant domain gap they engender, outlining the foundational blueprint for devising successful solutions. Subsequently, we delve into the details of the baseline experiments and winning methods evaluating their qualitative and quantitative results, providing with useful insights.
versions: [ { "created": "Sun, 31 Dec 2023 23:53:50 GMT", "version": "v1" }, { "created": "Tue, 16 Jan 2024 16:28:58 GMT", "version": "v2" } ]
update_date: 2024-01-17
authors_parsed: [ [ "Ramachandran", "Saravanabalagi", "" ], [ "Cibik", "Nathaniel", "" ], [ "Sistu", "Ganesh", "" ], [ "McDonald", "John", "" ] ]
abstract: Motion segmentation is a complex yet indispensable task in autonomous driving. The challenges introduced by the ego-motion of the cameras, radial distortion in fisheye lenses, and the need for temporal consistency make the task more complicated, rendering traditional and standard Convolutional Neural Network (CNN) approaches less effective. The consequent laborious data labeling, representation of diverse and uncommon scenarios, and extensive data capture requirements underscore the imperative of synthetic data for improving machine learning model performance. To this end, we employ the PD-WoodScape synthetic dataset developed by Parallel Domain, alongside the WoodScape fisheye dataset. Thus, we present the WoodScape fisheye motion segmentation challenge for autonomous driving, held as part of the CVPR 2023 Workshop on Omnidirectional Computer Vision (OmniCV). As one of the first competitions focused on fisheye motion segmentation, we aim to explore and evaluate the potential and impact of utilizing synthetic data in this domain. In this paper, we provide a detailed analysis of the competition, which attracted the participation of 112 global teams and a total of 234 submissions. This study delineates the complexities inherent in the task of motion segmentation, emphasizes the significance of fisheye datasets, articulates the necessity for synthetic datasets and the resultant domain gap they engender, and outlines the foundational blueprint for devising successful solutions. Subsequently, we delve into the details of the baseline experiments and the winning methods, evaluate their qualitative and quantitative results, and provide useful insights.
id: 2406.09804
submitter: Arne Symons
authors: Steven Colleman, Arne Symons, Victor J.B. Jung, Marian Verhelst
title: Optimizing Layer-Fused Scheduling of Transformer Networks on Multi-accelerator Platforms
comments: Accepted to ISQED2024
journal-ref: null
doi: null
report-no: null
categories: cs.AR
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: The impact of transformer networks is booming, yet, they come with significant computational complexity. It is therefore essential to understand how to optimally map and execute these networks on modern neural processor hardware. So far, literature on transformer scheduling optimization has been focusing on deployment on GPU and specific ASICs. This work enables extensive hardware/mapping exploration by extending the DSE framework Stream towards support for transformers across a wide variety of hardware architectures and different execution schedules. After validation, we explore the optimal schedule for transformer layers/attention heads and investigate whether layer fusion is beneficial to improve latency, energy or memory requirements. Our study shows that the memory requirements for active feature data can be drastically reduced, by adapting the execution schedule based on the size of the input of the attention head.
versions: [ { "created": "Fri, 14 Jun 2024 07:56:37 GMT", "version": "v1" } ]
update_date: 2024-06-17
authors_parsed: [ [ "Colleman", "Steven", "" ], [ "Symons", "Arne", "" ], [ "Jung", "Victor J. B.", "" ], [ "Verhelst", "Marian", "" ] ]
abstract: The impact of transformer networks is booming, yet they come with significant computational complexity. It is therefore essential to understand how to optimally map and execute these networks on modern neural processor hardware. So far, literature on transformer scheduling optimization has focused on deployment on GPUs and specific ASICs. This work enables extensive hardware/mapping exploration by extending the DSE framework Stream towards support for transformers across a wide variety of hardware architectures and different execution schedules. After validation, we explore the optimal schedule for transformer layers/attention heads and investigate whether layer fusion is beneficial to improve latency, energy or memory requirements. Our study shows that the memory requirements for active feature data can be drastically reduced by adapting the execution schedule based on the size of the input of the attention head.
id: 2004.00055
submitter: Simon DeDeo
authors: Scott Viteri and Simon DeDeo
title: Epistemic Phase Transitions in Mathematical Proofs
comments: 22 pages, 5 figures. Matches published version. Supplementary information available at https://www.sciencedirect.com/science/article/pii/S0010027722001081
journal-ref: Cognition, 225, 105120 (2022)
doi: 10.1016/j.cognition.2022.105120
report-no: null
categories: cs.SC cs.AI math.HO physics.soc-ph q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Mathematical proofs are both paradigms of certainty and some of the most explicitly-justified arguments that we have in the cultural record. Their very explicitness, however, leads to a paradox, because the probability of error grows exponentially as the argument expands. When a mathematician encounters a proof, how does she come to believe it? Here we show that, under a cognitively-plausible belief formation mechanism combining deductive and abductive reasoning, belief in mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly-propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st Century mathematics, along with five hand-constructed cases including Euclid, Apollonius, Hernstein's Topics in Algebra, and Andrew Wiles's proof of Fermat's Last Theorem. Our results bear both on recent work in the history and philosophy of mathematics on how we understand proofs, and on a question, basic to cognitive science, of how we justify complex beliefs.
versions: [ { "created": "Tue, 31 Mar 2020 18:39:56 GMT", "version": "v1" }, { "created": "Tue, 12 Apr 2022 15:25:22 GMT", "version": "v2" } ]
update_date: 2022-04-13
authors_parsed: [ [ "Viteri", "Scott", "" ], [ "DeDeo", "Simon", "" ] ]
abstract: Mathematical proofs are both paradigms of certainty and some of the most explicitly-justified arguments that we have in the cultural record. Their very explicitness, however, leads to a paradox, because the probability of error grows exponentially as the argument expands. When a mathematician encounters a proof, how does she come to believe it? Here we show that, under a cognitively-plausible belief formation mechanism combining deductive and abductive reasoning, belief in mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly-propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st Century mathematics, along with five hand-constructed cases including Euclid, Apollonius, Herstein's Topics in Algebra, and Andrew Wiles's proof of Fermat's Last Theorem. Our results bear both on recent work in the history and philosophy of mathematics on how we understand proofs, and on a question, basic to cognitive science, of how we justify complex beliefs.
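The abstract's claim that "the probability of error grows exponentially as the argument expands" can be made concrete with a one-line model of my own (not taken from the paper): if each claim-to-claim step independently holds with probability 1 - eps, an n-step chain is error-free with probability (1 - eps)^n.

```python
# Illustrative arithmetic, not code from the paper: under independent
# per-step error rate eps, an n-step deductive chain survives intact
# with probability (1 - eps) ** n, which decays exponentially in n.
def chain_reliability(n_steps: int, eps: float) -> float:
    return (1.0 - eps) ** n_steps

print(chain_reliability(10, 0.01))    # ~0.904: short proofs stay credible
print(chain_reliability(1000, 0.01))  # ~4e-05: long ones, naively, do not
```

The paper's point is that belief formation nonetheless jumps to near-certainty, which this naive independence model cannot explain.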
id: 2404.04068
submitter: Filip Seitl
authors: Filip Seitl, Tomáš Kovářík, Soheyla Mirshahi, Jan Kryštůfek, Rastislav Dujava, Matúš Ondreička, Herbert Ullrich, Petr Gronat
title: Assessing the quality of information extraction
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Advances in large language models have notably enhanced the efficiency of information extraction from unstructured and semi-structured data sources. As these technologies become integral to various applications, establishing an objective measure for the quality of information extraction becomes imperative. However, the scarcity of labeled data presents significant challenges to this endeavor. In this paper, we introduce an automatic framework to assess the quality of the information extraction/retrieval and its completeness. The framework focuses on information extraction in the form of entity and its properties. We discuss how to handle the input/output size limitations of the large language models and analyze their performance when extracting the information. In particular, we introduce scores to evaluate the quality of the extraction and provide an extensive discussion on how to interpret them.
versions: [ { "created": "Fri, 5 Apr 2024 12:51:48 GMT", "version": "v1" }, { "created": "Wed, 22 May 2024 09:04:52 GMT", "version": "v2" } ]
update_date: 2024-05-24
authors_parsed: [ [ "Seitl", "Filip", "" ], [ "Kovářík", "Tomáš", "" ], [ "Mirshahi", "Soheyla", "" ], [ "Kryštůfek", "Jan", "" ], [ "Dujava", "Rastislav", "" ], [ "Ondreička", "Matúš", "" ], [ "Ullrich", "Herbert", "" ], [ "Gronat", "Petr", "" ] ]
abstract: Advances in large language models have notably enhanced the efficiency of information extraction from unstructured and semi-structured data sources. As these technologies become integral to various applications, establishing an objective measure for the quality of information extraction becomes imperative. However, the scarcity of labeled data presents significant challenges to this endeavor. In this paper, we introduce an automatic framework to assess the quality and completeness of information extraction/retrieval. The framework focuses on information extraction in the form of entities and their properties. We discuss how to handle the input/output size limitations of the large language models and analyze their performance when extracting information. In particular, we introduce scores to evaluate the quality of the extraction and provide an extensive discussion on how to interpret them.
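The abstract does not spell out its scores, so as a purely hypothetical illustration (the names and data below are mine, not the paper's): one natural completeness score for entity-and-properties extraction is the fraction of an entity's reference properties that the extractor recovered with the correct value.

```python
# Hypothetical illustration only; the paper's actual scores may differ.
# Completeness = fraction of reference properties recovered correctly.
def completeness(extracted: dict, reference: dict) -> float:
    if not reference:
        return 1.0  # nothing to recover
    hits = sum(1 for k, v in reference.items() if extracted.get(k) == v)
    return hits / len(reference)

reference = {"name": "ACME", "founded": "1999", "hq": "Prague"}
extracted = {"name": "ACME", "founded": "1999"}
print(completeness(extracted, reference))  # 2 of 3 properties -> 0.666...
```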
id: 1910.01713
submitter: Vadim Arzamasov
authors: Vadim Arzamasov and Klemens Böhm
title: REDS: Rule Extraction for Discovering Scenarios
comments: null
journal-ref: null
doi: 10.1145/3448016.3457301
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Scenario discovery is the process of finding areas of interest, known as scenarios, in data spaces resulting from simulations. For instance, one might search for conditions, i.e., inputs of the simulation model, where the system is unstable. Subgroup discovery methods are commonly used for scenario discovery. They find scenarios in the form of hyperboxes, which are easy to comprehend. Given a computational budget, results tend to get worse as the number of inputs of the simulation model and the cost of simulations increase. We propose a new procedure for scenario discovery from few simulations, dubbed REDS. A key ingredient is using an intermediate machine learning model to label data for subsequent use by conventional subgroup discovery methods. We provide statistical arguments why this is an improvement. In our experiments, REDS reduces the number of simulations required by 50--75\% on average, depending on the quality measure. It is also useful as a semi-supervised subgroup discovery method and for discovering better scenarios from third-party data, when a simulation model is not available.
versions: [ { "created": "Thu, 3 Oct 2019 20:40:18 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 11:13:58 GMT", "version": "v2" } ]
update_date: 2022-05-06
authors_parsed: [ [ "Arzamasov", "Vadim", "" ], [ "Böhm", "Klemens", "" ] ]
abstract: Scenario discovery is the process of finding areas of interest, known as scenarios, in data spaces resulting from simulations. For instance, one might search for conditions, i.e., inputs of the simulation model, where the system is unstable. Subgroup discovery methods are commonly used for scenario discovery. They find scenarios in the form of hyperboxes, which are easy to comprehend. Given a computational budget, results tend to get worse as the number of inputs of the simulation model and the cost of simulations increase. We propose a new procedure for scenario discovery from few simulations, dubbed REDS. A key ingredient is using an intermediate machine learning model to label data for subsequent use by conventional subgroup discovery methods. We provide statistical arguments why this is an improvement. In our experiments, REDS reduces the number of simulations required by 50--75\% on average, depending on the quality measure. It is also useful as a semi-supervised subgroup discovery method and for discovering better scenarios from third-party data, when a simulation model is not available.
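The two-stage idea in the abstract can be sketched end to end; everything below is my own toy simplification (the simulator, the 1-NN surrogate, and the one-dimensional box scan are stand-ins, not the REDS paper's components): run a limited budget of expensive simulations, fit a cheap surrogate on them, pseudo-label many cheap points, then hand those to an ordinary subgroup-discovery step.

```python
import random

def simulate(x):
    """Stand-in for an expensive simulation: 'unstable' iff x0 > 0.7."""
    return 1 if x[0] > 0.7 else 0

def fit_1nn(points, labels):
    """Cheap surrogate model: 1-nearest-neighbour classifier."""
    def predict(x):
        i = min(range(len(points)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(points[j], x)))
        return labels[i]
    return predict

random.seed(0)
budget = [(random.random(), random.random()) for _ in range(100)]  # costly runs
surrogate = fit_1nn(budget, [simulate(x) for x in budget])

many = [(random.random(), random.random()) for _ in range(2000)]   # cheap points
pseudo = [surrogate(x) for x in many]

def box_precision(t):
    inside = [y for x, y in zip(many, pseudo) if x[0] >= t]
    return sum(inside) / max(1, len(inside))

# Trivial subgroup discovery: the hyperbox {x0 >= t} with best pseudo-precision.
best_t = max((t / 100 for t in range(100)), key=box_precision)
print(best_t)  # an estimate of the true scenario boundary at x0 = 0.7
```

The point of the sketch: the subgroup-discovery step never calls the expensive simulator, only the surrogate's labels.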
id: 2206.07082
submitter: Yunwen Lei
authors: Yunwen Lei
title: Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems
comments: To appear in COLT 2023
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Stochastic optimization has found wide applications in minimizing objective functions in machine learning, which motivates a lot of theoretical studies to understand its practical success. Most of existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization is much lagging behind. This is especially the case for nonconvex and nonsmooth problems often encountered in practice. In this paper, we initialize a systematic stability and generalization analysis of stochastic optimization on nonconvex and nonsmooth problems. We introduce novel algorithmic stability measures and establish their quantitative connection on the gap between population gradients and empirical gradients, which is then further extended to study the gap between the Moreau envelope of the empirical risk and that of the population risk. To our knowledge, these quantitative connection between stability and generalization in terms of either gradients or Moreau envelopes have not been studied in the literature. We introduce a class of sampling-determined algorithms, for which we develop bounds for three stability measures. Finally, we apply these discussions to derive error bounds for stochastic gradient descent and its adaptive variant, where we show how to achieve an implicit regularization by tuning the step sizes and the number of iterations.
versions: [ { "created": "Tue, 14 Jun 2022 18:14:30 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2023 09:07:46 GMT", "version": "v2" }, { "created": "Tue, 18 Jul 2023 02:00:40 GMT", "version": "v3" } ]
update_date: 2023-07-19
authors_parsed: [ [ "Lei", "Yunwen", "" ] ]
abstract: Stochastic optimization has found wide applications in minimizing objective functions in machine learning, which motivates many theoretical studies to understand its practical success. Most existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization lags far behind. This is especially the case for nonconvex and nonsmooth problems often encountered in practice. In this paper, we initiate a systematic stability and generalization analysis of stochastic optimization on nonconvex and nonsmooth problems. We introduce novel algorithmic stability measures and establish their quantitative connection to the gap between population gradients and empirical gradients, which is then further extended to study the gap between the Moreau envelope of the empirical risk and that of the population risk. To our knowledge, these quantitative connections between stability and generalization in terms of either gradients or Moreau envelopes have not been studied in the literature. We introduce a class of sampling-determined algorithms, for which we develop bounds for three stability measures. Finally, we apply these results to derive error bounds for stochastic gradient descent and its adaptive variant, where we show how to achieve implicit regularization by tuning the step sizes and the number of iterations.
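For readers unfamiliar with the term, the Moreau envelope referred to in the abstract is the standard infimal-convolution smoothing of a possibly nonsmooth risk (the definition below is the textbook one, in my notation; the paper's conventions may differ slightly):

```latex
F_\lambda(\mathbf{w}) \;=\; \inf_{\mathbf{w}'} \Big\{ F(\mathbf{w}') + \tfrac{1}{2\lambda}\,\lVert \mathbf{w}' - \mathbf{w} \rVert_2^2 \Big\}, \qquad \lambda > 0.
```

The generalization gap the abstract studies is then measured between the Moreau envelope of the empirical risk and that of the population risk, a comparison that remains meaningful even when the risk itself is nonsmooth.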
id: 1711.00520
submitter: Yuxuan Wang
authors: Yuxuan Wang, RJ Skerry-Ryan, Ying Xiao, Daisy Stanton, Joel Shor, Eric Battenberg, Rob Clark, Rif A. Saurous
title: Uncovering Latent Style Factors for Expressive Speech Synthesis
comments: Submitted to NIPS ML4Audio workshop and ICASSP
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.SD
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Prosodic modeling is a core problem in speech synthesis. The key challenge is producing desirable prosody from textual input containing only phonetic information. In this preliminary study, we introduce the concept of "style tokens" in Tacotron, a recently proposed end-to-end neural speech synthesis model. Using style tokens, we aim to extract independent prosodic styles from training data. We show that without annotation data or an explicit supervision signal, our approach can automatically learn a variety of prosodic variations in a purely data-driven way. Importantly, each style token corresponds to a fixed style factor regardless of the given text sequence. As a result, we can control the prosodic style of synthetic speech in a somewhat predictable and globally consistent way.
versions: [ { "created": "Wed, 1 Nov 2017 19:40:00 GMT", "version": "v1" } ]
update_date: 2017-11-03
authors_parsed: [ [ "Wang", "Yuxuan", "" ], [ "Skerry-Ryan", "RJ", "" ], [ "Xiao", "Ying", "" ], [ "Stanton", "Daisy", "" ], [ "Shor", "Joel", "" ], [ "Battenberg", "Eric", "" ], [ "Clark", "Rob", "" ], [ "Saurous", "Rif A.", "" ] ]
abstract: Prosodic modeling is a core problem in speech synthesis. The key challenge is producing desirable prosody from textual input containing only phonetic information. In this preliminary study, we introduce the concept of "style tokens" in Tacotron, a recently proposed end-to-end neural speech synthesis model. Using style tokens, we aim to extract independent prosodic styles from training data. We show that without annotation data or an explicit supervision signal, our approach can automatically learn a variety of prosodic variations in a purely data-driven way. Importantly, each style token corresponds to a fixed style factor regardless of the given text sequence. As a result, we can control the prosodic style of synthetic speech in a somewhat predictable and globally consistent way.
id: 1504.01949
submitter: François Dross
authors: François Dross, Mickael Montassier and Alexandre Pinlou
title: A lower bound on the order of the largest induced forest in planar graphs with high girth
comments: 12 pages, 6 figures. arXiv admin note: text overlap with arXiv:1409.1348
journal-ref: null
doi: null
report-no: null
categories: cs.DM math.CO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We give here new upper bounds on the size of a smallest feedback vertex set in planar graphs with high girth. In particular, we prove that a planar graph with girth $g$ and size $m$ has a feedback vertex set of size at most $\frac{4m}{3g}$, improving the trivial bound of $\frac{2m}{g}$. We also prove that every $2$-connected graph with maximum degree $3$ and order $n$ has a feedback vertex set of size at most $\frac{n+2}{3}$.
versions: [ { "created": "Wed, 8 Apr 2015 13:16:17 GMT", "version": "v1" } ]
update_date: 2015-04-09
authors_parsed: [ [ "Dross", "François", "" ], [ "Montassier", "Mickael", "" ], [ "Pinlou", "Alexandre", "" ] ]
abstract: We give here new upper bounds on the size of a smallest feedback vertex set in planar graphs with high girth. In particular, we prove that a planar graph with girth $g$ and size $m$ has a feedback vertex set of size at most $\frac{4m}{3g}$, improving the trivial bound of $\frac{2m}{g}$. We also prove that every $2$-connected graph with maximum degree $3$ and order $n$ has a feedback vertex set of size at most $\frac{n+2}{3}$.
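The $\frac{4m}{3g}$ bound from the abstract is easy to sanity-check on a toy instance; the brute-force search below is my own illustration, not code from the paper:

```python
from itertools import combinations

def is_forest(vertices, edges):
    """Union-find cycle check: a graph is a forest iff no edge closes a cycle."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def min_fvs(vertices, edges):
    """Smallest vertex set whose removal leaves a forest (exhaustive search)."""
    for k in range(len(vertices) + 1):
        for removed in map(set, combinations(vertices, k)):
            kept = [e for e in edges if removed.isdisjoint(e)]
            if is_forest([v for v in vertices if v not in removed], kept):
                return k

# C_9 is planar with girth g = 9 and m = 9 edges; the bound gives 4*9/(3*9) = 4/3.
g = 9
cycle = [(i, (i + 1) % g) for i in range(g)]
fvs = min_fvs(list(range(g)), cycle)
print(fvs, 4 * g / (3 * g))  # 1 1.333... -> the bound holds here
```

A cycle needs exactly one vertex removed to become a forest, comfortably within the bound; the paper's contribution is proving this for all planar graphs of high girth.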
id: 2002.00837
submitter: Enyan Dai
authors: Enyan Dai, Yiwei Sun, Suhang Wang
title: Ginger Cannot Cure Cancer: Battling Fake Health News with a Comprehensive Data Repository
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SI cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Nowadays, Internet is a primary source of attaining health information. Massive fake health news which is spreading over the Internet, has become a severe threat to public health. Numerous studies and research works have been done in fake news detection domain, however, few of them are designed to cope with the challenges in health news. For instance, the development of explainable is required for fake health news detection. To mitigate these problems, we construct a comprehensive repository, FakeHealth, which includes news contents with rich features, news reviews with detailed explanations, social engagements and a user-user social network. Moreover, exploratory analyses are conducted to understand the characteristics of the datasets, analyze useful patterns and validate the quality of the datasets for health fake news detection. We also discuss the novel and potential future research directions for the health fake news detection.
versions: [ { "created": "Mon, 27 Jan 2020 17:27:58 GMT", "version": "v1" }, { "created": "Mon, 30 Mar 2020 06:08:08 GMT", "version": "v2" } ]
update_date: 2020-03-31
authors_parsed: [ [ "Dai", "Enyan", "" ], [ "Sun", "Yiwei", "" ], [ "Wang", "Suhang", "" ] ]
abstract: Nowadays, the Internet is a primary source of health information. Massive amounts of fake health news spreading over the Internet have become a severe threat to public health. Numerous studies and research works have been done in the fake news detection domain; however, few of them are designed to cope with the challenges of health news. For instance, the development of explainable methods is required for fake health news detection. To mitigate these problems, we construct a comprehensive repository, FakeHealth, which includes news contents with rich features, news reviews with detailed explanations, social engagements and a user-user social network. Moreover, exploratory analyses are conducted to understand the characteristics of the datasets, analyze useful patterns and validate the quality of the datasets for health fake news detection. We also discuss novel and potential future research directions for health fake news detection.
id: 1805.07256
submitter: Petr Svarny
authors: Petr Švarný and Matěj Hoffmann
title: Safety of human-robot interaction through tactile sensors and peripersonal space representations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Human-robot collaboration including close physical human-robot interaction (pHRI) is a current trend in industry and also science. The safety guidelines prescribe two modes of safety: (i) power and force limitation and (ii) speed and separation monitoring. We examine the potential of robots equipped with artificial sensitive skin and a protective safety zone around it (peripersonal space) to safe pHRI.
versions: [ { "created": "Fri, 18 May 2018 14:55:08 GMT", "version": "v1" } ]
update_date: 2018-05-21
authors_parsed: [ [ "Švarný", "Petr", "" ], [ "Hoffmann", "Matěj", "" ] ]
abstract: Human-robot collaboration, including close physical human-robot interaction (pHRI), is a current trend in both industry and science. The safety guidelines prescribe two modes of safety: (i) power and force limitation and (ii) speed and separation monitoring. We examine the potential of robots equipped with artificial sensitive skin and a protective safety zone around it (peripersonal space) for safe pHRI.
id: cs/0701166
submitter: Jim Gray
authors: Jim Gray, Catharine van Ingen
title: Empirical Measurements of Disk Failure Rates and Error Rates
comments: null
journal-ref: null
doi: null
report-no: MSR-TR-2005-166
categories: cs.DB cs.AR
license: null
orig_abstract: The SATA advertised bit error rate of one error in 10 terabytes is frightening. We moved 2 PB through low-cost hardware and saw five disk read error events, several controller failures, and many system reboots caused by security patches. We conclude that SATA uncorrectable read errors are not yet a dominant system-fault source - they happen, but are rare compared to other problems. We also conclude that UER (uncorrectable error rate) is not the relevant metric for our needs. When an uncorrectable read error happens, there are typically several damaged storage blocks (and many uncorrectable read errors.) Also, some uncorrectable read errors may be masked by the operating system. The more meaningful metric for data architects is Mean Time To Data Loss (MTTDL.)
versions: [ { "created": "Fri, 26 Jan 2007 00:29:02 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Gray", "Jim", "" ], [ "van Ingen", "Catharine", "" ] ]
abstract: The SATA advertised bit error rate of one error in 10 terabytes is frightening. We moved 2 PB through low-cost hardware and saw five disk read error events, several controller failures, and many system reboots caused by security patches. We conclude that SATA uncorrectable read errors are not yet a dominant system-fault source - they happen, but are rare compared to other problems. We also conclude that UER (uncorrectable error rate) is not the relevant metric for our needs. When an uncorrectable read error happens, there are typically several damaged storage blocks (and many uncorrectable read errors). Also, some uncorrectable read errors may be masked by the operating system. The more meaningful metric for data architects is Mean Time To Data Loss (MTTDL).
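The abstract's contrast is a simple back-of-envelope computation (my own arithmetic, not the paper's): at the advertised rate of one uncorrectable error per 10 TB read, moving 2 PB should have produced on the order of 200 error events, yet only five were observed.

```python
# Expected uncorrectable read errors at the advertised SATA rate.
TB = 10 ** 12                     # terabyte in bytes
bytes_moved = 2 * 1000 * TB       # 2 PB moved in the study
bytes_per_error = 10 * TB         # advertised: 1 uncorrectable error per 10 TB
expected_errors = bytes_moved / bytes_per_error
print(expected_errors)  # 200.0 expected at the advertised rate vs. 5 observed
```

The 40x gap between expectation and observation is exactly why the authors argue UER is the wrong metric for data architects.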
id: 1604.01200
submitter: Zhong-Yuan Zhang
authors: Zhong-Yuan Zhang and Yujie Gai and Yu-Fei Wang and Hui-Min Cheng and Xin Liu
title: On Equivalence of Likelihood Maximization of Stochastic Block Model and Constrained Nonnegative Matrix Factorization
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Community structures detection in complex network is important for understanding not only the topological structures of the network, but also the functions of it. Stochastic block model and nonnegative matrix factorization are two widely used methods for community detection, which are proposed from different perspectives. In this paper, the relations between them are studied. The logarithm of likelihood function for stochastic block model can be reformulated under the framework of nonnegative matrix factorization. Besides the model equivalence, the algorithms employed by the two methods are different. Preliminary numerical experiments are carried out to compare the behaviors of the algorithms.
versions: [ { "created": "Tue, 5 Apr 2016 09:47:34 GMT", "version": "v1" }, { "created": "Tue, 26 Apr 2016 14:51:32 GMT", "version": "v2" }, { "created": "Thu, 8 Dec 2016 14:40:41 GMT", "version": "v3" }, { "created": "Wed, 19 Apr 2017 08:17:29 GMT", "version": "v4" }, { "created": "Mon, 10 Jul 2017 14:01:52 GMT", "version": "v5" } ]
update_date: 2017-07-11
authors_parsed: [ [ "Zhang", "Zhong-Yuan", "" ], [ "Gai", "Yujie", "" ], [ "Wang", "Yu-Fei", "" ], [ "Cheng", "Hui-Min", "" ], [ "Liu", "Xin", "" ] ]
abstract: Community structure detection in complex networks is important for understanding not only the topological structure of the network, but also its functions. The stochastic block model and nonnegative matrix factorization are two widely used methods for community detection, proposed from different perspectives. In this paper, the relations between them are studied. The log-likelihood function of the stochastic block model can be reformulated under the framework of nonnegative matrix factorization. Besides the model equivalence, the algorithms employed by the two methods are different. Preliminary numerical experiments are carried out to compare the behaviors of the algorithms.
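One standard form of the correspondence the abstract alludes to can be sketched as follows (my own summary under a Poisson-model assumption; the paper's exact formulation may differ): with adjacency matrix $A$, nonnegative membership matrix $U$ and block matrix $B$,

```latex
\log P(A \mid U, B) \;=\; \sum_{i,j} \Big( A_{ij} \log (U B U^{\top})_{ij} \;-\; (U B U^{\top})_{ij} \Big) + \mathrm{const},
```

so maximizing the likelihood coincides, up to terms independent of $U$ and $B$, with minimizing the generalized KL divergence $D\big(A \,\|\, U B U^{\top}\big)$, i.e. a constrained nonnegative matrix factorization of $A$.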
1908.09961
Kien Do
Kien Do and Truyen Tran
Theory and Evaluation Metrics for Learning Disentangled Representations
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation. First, we characterize the concept "disentangled representations" used in supervised and unsupervised methods along three dimensions - informativeness, separability, and interpretability - which can be expressed and quantified explicitly using information-theoretic constructs. This helps explain the behaviors of several well-known disentanglement learning models. We then propose robust metrics for measuring informativeness, separability, and interpretability. Through a comprehensive suite of experiments, we show that our metrics correctly characterize the representations learned by different methods and are consistent with qualitative (visual) results. Thus, the metrics allow disentanglement learning methods to be compared on a fair ground. We also empirically uncovered new interesting properties of VAE-based methods and interpreted them with our formulation. These findings are promising and hopefully will encourage the design of more theoretically driven models for learning disentangled representations.
[ { "created": "Mon, 26 Aug 2019 23:55:11 GMT", "version": "v1" }, { "created": "Tue, 4 Feb 2020 21:08:15 GMT", "version": "v2" }, { "created": "Thu, 18 Mar 2021 22:59:04 GMT", "version": "v3" } ]
2021-03-22
[ [ "Do", "Kien", "" ], [ "Tran", "Truyen", "" ] ]
We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation. First, we characterize the concept "disentangled representations" used in supervised and unsupervised methods along three dimensions - informativeness, separability, and interpretability - which can be expressed and quantified explicitly using information-theoretic constructs. This helps explain the behaviors of several well-known disentanglement learning models. We then propose robust metrics for measuring informativeness, separability, and interpretability. Through a comprehensive suite of experiments, we show that our metrics correctly characterize the representations learned by different methods and are consistent with qualitative (visual) results. Thus, the metrics allow disentanglement learning methods to be compared on a fair ground. We also empirically uncovered new interesting properties of VAE-based methods and interpreted them with our formulation. These findings are promising and hopefully will encourage the design of more theoretically driven models for learning disentangled representations.
2210.12776
Boel Nelson
Boel Nelson, Elena Pagnin, Aslan Askarov
Metadata Privacy Beyond Tunneling for Instant Messaging
To appear at the 9th IEEE European Symposium on Security and Privacy
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Transport layer data leaks metadata unintentionally -- such as who communicates with whom. While tools for strong transport layer privacy exist, they have adoption obstacles, including performance overheads incompatible with mobile devices. We posit that by changing the objective of metadata privacy for $\textit{all traffic}$, we can open up a new design space for pragmatic approaches to transport layer privacy. As a first step in this direction, we propose using techniques from information flow control and present a principled approach to constructing formal models of systems with metadata privacy for $\textit{some}$, deniable, traffic. We prove that deniable traffic achieves metadata privacy against strong adversaries -- this constitutes the first bridging of information flow control and anonymous communication to our knowledge. Additionally, we show that existing state-of-the-art protocols can be extended to support metadata privacy, by designing a novel protocol for $\textit{deniable instant messaging}$ (DenIM), which is a variant of the Signal protocol. To show the efficacy of our approach, we implement and evaluate a proof-of-concept instant messaging system running DenIM on top of unmodified Signal. We empirically show that the DenIM on Signal can maintain low-latency for unmodified Signal traffic without breaking existing features, while at the same time supporting deniable Signal traffic.
[ { "created": "Sun, 23 Oct 2022 16:32:35 GMT", "version": "v1" }, { "created": "Thu, 25 May 2023 14:50:47 GMT", "version": "v2" }, { "created": "Wed, 6 Mar 2024 15:29:00 GMT", "version": "v3" } ]
2024-03-07
[ [ "Nelson", "Boel", "" ], [ "Pagnin", "Elena", "" ], [ "Askarov", "Aslan", "" ] ]
Transport layer data leaks metadata unintentionally -- such as who communicates with whom. While tools for strong transport layer privacy exist, they have adoption obstacles, including performance overheads incompatible with mobile devices. We posit that by changing the objective of metadata privacy for $\textit{all traffic}$, we can open up a new design space for pragmatic approaches to transport layer privacy. As a first step in this direction, we propose using techniques from information flow control and present a principled approach to constructing formal models of systems with metadata privacy for $\textit{some}$, deniable, traffic. We prove that deniable traffic achieves metadata privacy against strong adversaries -- this constitutes the first bridging of information flow control and anonymous communication to our knowledge. Additionally, we show that existing state-of-the-art protocols can be extended to support metadata privacy, by designing a novel protocol for $\textit{deniable instant messaging}$ (DenIM), which is a variant of the Signal protocol. To show the efficacy of our approach, we implement and evaluate a proof-of-concept instant messaging system running DenIM on top of unmodified Signal. We empirically show that the DenIM on Signal can maintain low-latency for unmodified Signal traffic without breaking existing features, while at the same time supporting deniable Signal traffic.
1908.01887
Yusuke Urakami
Yusuke Urakami, Alec Hodgkinson, Casey Carlin, Randall Leu, Luca Rigazio, Pieter Abbeel
DoorGym: A Scalable Door Opening Environment And Baseline Agent
Accepted to NeurIPS2019 Deep Reinforcement Learning Workshop. Full version
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to practically implement the door opening task, a policy ought to be robust to a wide distribution of door types and environment settings. Reinforcement Learning (RL) with Domain Randomization (DR) is a promising technique to enforce policy generalization; however, there are only a few accessible training environments that are inherently designed to train agents in domain randomized environments. We introduce DoorGym, an open-source door opening simulation framework designed to utilize domain randomization to train a stable policy. We intend for our environment to lie at the intersection of domain transfer, practical tasks, and realism. We also provide baseline Proximal Policy Optimization and Soft Actor-Critic implementations, which achieve success rates of up to 95% for opening various types of doors in this environment. Moreover, a real-world transfer experiment shows that the trained policy is able to work in the real world. Environment kit available here: https://github.com/PSVL/DoorGym/
[ { "created": "Mon, 5 Aug 2019 22:20:32 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2019 17:21:36 GMT", "version": "v2" }, { "created": "Wed, 13 May 2020 07:56:55 GMT", "version": "v3" }, { "created": "Tue, 24 May 2022 07:15:00 GMT", "version": "v4" } ]
2022-05-25
[ [ "Urakami", "Yusuke", "" ], [ "Hodgkinson", "Alec", "" ], [ "Carlin", "Casey", "" ], [ "Leu", "Randall", "" ], [ "Rigazio", "Luca", "" ], [ "Abbeel", "Pieter", "" ] ]
In order to practically implement the door opening task, a policy ought to be robust to a wide distribution of door types and environment settings. Reinforcement Learning (RL) with Domain Randomization (DR) is a promising technique to enforce policy generalization; however, there are only a few accessible training environments that are inherently designed to train agents in domain randomized environments. We introduce DoorGym, an open-source door opening simulation framework designed to utilize domain randomization to train a stable policy. We intend for our environment to lie at the intersection of domain transfer, practical tasks, and realism. We also provide baseline Proximal Policy Optimization and Soft Actor-Critic implementations, which achieve success rates of up to 95% for opening various types of doors in this environment. Moreover, a real-world transfer experiment shows that the trained policy is able to work in the real world. Environment kit available here: https://github.com/PSVL/DoorGym/
2002.11833
Jean Harb
Jean Harb, Tom Schaul, Doina Precup and Pierre-Luc Bacon
Policy Evaluation Networks
12 pages, 11 figures
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.
[ { "created": "Wed, 26 Feb 2020 23:00:27 GMT", "version": "v1" } ]
2020-02-28
[ [ "Harb", "Jean", "" ], [ "Schaul", "Tom", "" ], [ "Precup", "Doina", "" ], [ "Bacon", "Pierre-Luc", "" ] ]
Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.
2304.08993
Rui Li
Rui Li, Dong Gong, Wei Yin, Hao Chen, Yu Zhu, Kaixuan Wang, Xiaozhi Chen, Jinqiu Sun, Yanning Zhang
Learning to Fuse Monocular and Multi-view Cues for Multi-frame Depth Estimation in Dynamic Scenes
Accepted by CVPR 2023. Code and models are available at: https://github.com/ruili3/dynamic-multiframe-depth
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-frame depth estimation generally achieves high accuracy by relying on multi-view geometric consistency. When applied in dynamic scenes, e.g., autonomous driving, this consistency is usually violated in the dynamic areas, leading to corrupted estimations. Many multi-frame methods handle dynamic areas by identifying them with explicit masks and compensating the multi-view cues with monocular cues represented as local monocular depth or features. The improvements are limited due to the uncontrolled quality of the masks and the underutilized benefits of fusing the two types of cues. In this paper, we propose a novel method to learn to fuse the multi-view and monocular cues encoded as volumes without needing heuristically crafted masks. As unveiled in our analyses, the multi-view cues capture more accurate geometric information in static areas, and the monocular cues capture more useful contexts in dynamic areas. To let the geometric perception learned from multi-view cues in static areas propagate to the monocular representation in dynamic areas, and to let monocular cues enhance the representation of the multi-view cost volume, we propose a cross-cue fusion (CCF) module, which includes cross-cue attention (CCA) to encode the spatially non-local relative intra-relations from each source to enhance the representation of the other. Experiments on real-world datasets prove the significant effectiveness and generalization ability of the proposed method.
[ { "created": "Tue, 18 Apr 2023 13:55:24 GMT", "version": "v1" } ]
2023-04-19
[ [ "Li", "Rui", "" ], [ "Gong", "Dong", "" ], [ "Yin", "Wei", "" ], [ "Chen", "Hao", "" ], [ "Zhu", "Yu", "" ], [ "Wang", "Kaixuan", "" ], [ "Chen", "Xiaozhi", "" ], [ "Sun", "Jinqiu", "" ], [ "Zhang", "Yanning", "" ] ]
Multi-frame depth estimation generally achieves high accuracy by relying on multi-view geometric consistency. When applied in dynamic scenes, e.g., autonomous driving, this consistency is usually violated in the dynamic areas, leading to corrupted estimations. Many multi-frame methods handle dynamic areas by identifying them with explicit masks and compensating the multi-view cues with monocular cues represented as local monocular depth or features. The improvements are limited due to the uncontrolled quality of the masks and the underutilized benefits of fusing the two types of cues. In this paper, we propose a novel method to learn to fuse the multi-view and monocular cues encoded as volumes without needing heuristically crafted masks. As unveiled in our analyses, the multi-view cues capture more accurate geometric information in static areas, and the monocular cues capture more useful contexts in dynamic areas. To let the geometric perception learned from multi-view cues in static areas propagate to the monocular representation in dynamic areas, and to let monocular cues enhance the representation of the multi-view cost volume, we propose a cross-cue fusion (CCF) module, which includes cross-cue attention (CCA) to encode the spatially non-local relative intra-relations from each source to enhance the representation of the other. Experiments on real-world datasets prove the significant effectiveness and generalization ability of the proposed method.
0708.3019
Ananthanarayanan Chockalingam
D. Sreedhar, A. Chockalingam, B. Sundar Rajan
Single-Symbol ML Decodable Distributed STBCs for Partially-Coherent Cooperative Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Space-time block codes (STBCs) that are single-symbol decodable (SSD) in a co-located multiple antenna setting need not be SSD in a distributed cooperative communication setting. A relay network with N relays and a single source-destination pair is called a partially-coherent relay channel (PCRC) if the destination has perfect channel state information (CSI) of all the channels and the relays have only the phase information of the source-to-relay channels. In this paper, first, a new set of necessary and sufficient conditions for a STBC to be SSD for co-located multiple antenna communication is obtained. Then, this is extended to a set of necessary and sufficient conditions for a distributed STBC (DSTBC) to be SSD for a PCRC, by identifying the additional conditions. Using this, several SSD DSTBCs for PCRC are identified among the known classes of STBCs. It is proved that even if a SSD STBC for a co-located MIMO channel does not satisfy the additional conditions for the code to be SSD for a PCRC, single-symbol decoding of it in a PCRC gives full-diversity and only coding gain is lost. It is shown that when a DSTBC is SSD for a PCRC, then arbitrary coordinate interleaving of the in-phase and quadrature-phase components of the variables does not disturb its SSD property for PCRC. Finally, it is shown that the possibility of {\em channel phase compensation} operation at the relay nodes using partial CSI at the relays increases the possible rate of SSD DSTBCs from $\frac{2}{N}$ when the relays do not have CSI to 1/2, which is independent of N.
[ { "created": "Wed, 22 Aug 2007 13:58:36 GMT", "version": "v1" }, { "created": "Sat, 29 Nov 2008 04:34:11 GMT", "version": "v2" } ]
2008-11-29
[ [ "Sreedhar", "D.", "" ], [ "Chockalingam", "A.", "" ], [ "Rajan", "B. Sundar", "" ] ]
Space-time block codes (STBCs) that are single-symbol decodable (SSD) in a co-located multiple antenna setting need not be SSD in a distributed cooperative communication setting. A relay network with N relays and a single source-destination pair is called a partially-coherent relay channel (PCRC) if the destination has perfect channel state information (CSI) of all the channels and the relays have only the phase information of the source-to-relay channels. In this paper, first, a new set of necessary and sufficient conditions for a STBC to be SSD for co-located multiple antenna communication is obtained. Then, this is extended to a set of necessary and sufficient conditions for a distributed STBC (DSTBC) to be SSD for a PCRC, by identifying the additional conditions. Using this, several SSD DSTBCs for PCRC are identified among the known classes of STBCs. It is proved that even if a SSD STBC for a co-located MIMO channel does not satisfy the additional conditions for the code to be SSD for a PCRC, single-symbol decoding of it in a PCRC gives full-diversity and only coding gain is lost. It is shown that when a DSTBC is SSD for a PCRC, then arbitrary coordinate interleaving of the in-phase and quadrature-phase components of the variables does not disturb its SSD property for PCRC. Finally, it is shown that the possibility of {\em channel phase compensation} operation at the relay nodes using partial CSI at the relays increases the possible rate of SSD DSTBCs from $\frac{2}{N}$ when the relays do not have CSI to 1/2, which is independent of N.
2102.12621
Ba Dung Le Dr
Ba Dung Le and Tanveer Zia
Discrete Distribution Estimation with Local Differential Privacy: A Comparative Analysis
Accepted for publication to SPT-IoT 2021: The Fifth Workshop on Security, Privacy and Trust in the Internet of Things
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Local differential privacy is a promising privacy-preserving model for statistical aggregation of user data that prevents user privacy leakage to the data aggregator. This paper focuses on the problem of estimating the distribution of discrete user values with local differential privacy. We review and present a comparative analysis of the performance of the existing discrete distribution estimation algorithms in terms of their accuracy on benchmark datasets. Our evaluation benchmarks include real-world and synthetic datasets of categorical individual values, with the number of individuals ranging from hundreds to millions and domain sizes of up to a few hundred values. The experimental results show that the Basic RAPPOR algorithm generally performs best for the benchmark datasets in the high privacy regime, while the k-RR algorithm often gives the best estimation in the low privacy regime. In the medium privacy regime, the performance of the k-RR, k-subset, and HR algorithms is fairly competitive and generally better than that of the Basic RAPPOR and CMS algorithms.
[ { "created": "Thu, 25 Feb 2021 01:32:06 GMT", "version": "v1" } ]
2021-02-26
[ [ "Le", "Ba Dung", "" ], [ "Zia", "Tanveer", "" ] ]
Local differential privacy is a promising privacy-preserving model for statistical aggregation of user data that prevents user privacy leakage to the data aggregator. This paper focuses on the problem of estimating the distribution of discrete user values with local differential privacy. We review and present a comparative analysis of the performance of the existing discrete distribution estimation algorithms in terms of their accuracy on benchmark datasets. Our evaluation benchmarks include real-world and synthetic datasets of categorical individual values, with the number of individuals ranging from hundreds to millions and domain sizes of up to a few hundred values. The experimental results show that the Basic RAPPOR algorithm generally performs best for the benchmark datasets in the high privacy regime, while the k-RR algorithm often gives the best estimation in the low privacy regime. In the medium privacy regime, the performance of the k-RR, k-subset, and HR algorithms is fairly competitive and generally better than that of the Basic RAPPOR and CMS algorithms.
2005.06420
James Henderson
James Henderson
The Unstoppable Rise of Computational Linguistics in Deep Learning
13 pages. Accepted for publication at ACL 2020, in the theme track
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding.
[ { "created": "Wed, 13 May 2020 16:51:02 GMT", "version": "v1" }, { "created": "Sun, 17 May 2020 10:07:40 GMT", "version": "v2" }, { "created": "Thu, 11 Jun 2020 07:58:28 GMT", "version": "v3" } ]
2020-06-12
[ [ "Henderson", "James", "" ] ]
In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding.
2303.05847
Xuanhua Yang
Xuanhua Yang, Jianxin Zhao, Shaoguo Liu, Liang Wang and Bo Zheng
Gradient Coordination for Quantifying and Maximizing Knowledge Transference in Multi-Task Learning
5 pages, 3 figures
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-task learning (MTL) has been widely applied in online advertising and recommender systems. To address the negative transfer issue, recent studies have proposed optimization methods that focus on aligning gradient directions or magnitudes. However, since prior work has shown that both general and specific knowledge exist within the limited shared capacity, overemphasizing gradient alignment may crowd out task-specific knowledge, and vice versa. In this paper, we propose a transference-driven approach, CoGrad, that adaptively maximizes knowledge transference via Coordinated Gradient modification. We explicitly quantify the transference as the loss reduction from one task to another, and then derive an auxiliary gradient by optimizing it. We perform the optimization by incorporating this gradient into the original task gradients, making the model automatically maximize inter-task transfer and minimize individual losses. Thus, CoGrad can harmonize general and specific knowledge to boost overall performance. In addition, we introduce an efficient approximation of the Hessian matrix, making CoGrad computationally efficient and simple to implement. Both offline and online experiments verify that CoGrad significantly outperforms previous methods.
[ { "created": "Fri, 10 Mar 2023 10:42:21 GMT", "version": "v1" } ]
2023-03-13
[ [ "Yang", "Xuanhua", "" ], [ "Zhao", "Jianxin", "" ], [ "Liu", "Shaoguo", "" ], [ "Wang", "Liang", "" ], [ "Zheng", "Bo", "" ] ]
Multi-task learning (MTL) has been widely applied in online advertising and recommender systems. To address the negative transfer issue, recent studies have proposed optimization methods that focus on aligning gradient directions or magnitudes. However, since prior work has shown that both general and specific knowledge exist within the limited shared capacity, overemphasizing gradient alignment may crowd out task-specific knowledge, and vice versa. In this paper, we propose a transference-driven approach, CoGrad, that adaptively maximizes knowledge transference via Coordinated Gradient modification. We explicitly quantify the transference as the loss reduction from one task to another, and then derive an auxiliary gradient by optimizing it. We perform the optimization by incorporating this gradient into the original task gradients, making the model automatically maximize inter-task transfer and minimize individual losses. Thus, CoGrad can harmonize general and specific knowledge to boost overall performance. In addition, we introduce an efficient approximation of the Hessian matrix, making CoGrad computationally efficient and simple to implement. Both offline and online experiments verify that CoGrad significantly outperforms previous methods.
1806.09852
EPTCS
Kasper Dokter (CWI), Farhad Arbab (CWI)
Treo: Textual Syntax for Reo Connectors
In Proceedings MeTRiD 2018, arXiv:1806.09330
EPTCS 272, 2018, pp. 121-135
10.4204/EPTCS.272.10
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reo is an interaction-centric model of concurrency for compositional specification of communication and coordination protocols. Formal verification tools exist to ensure correctness and compliance of protocols specified in Reo, which can readily be (re)used in different applications, or composed into more complex protocols. Recent benchmarks show that compiling such high-level Reo specifications produces executable code that can compete with or even beat the performance of hand-crafted programs written in languages such as C or Java using conventional concurrency constructs. The original declarative graphical syntax of Reo does not support intuitive constructs for parameter passing, iteration, recursion, or conditional specification. This shortcoming hinders Reo's uptake in large-scale practical applications. Although a number of Reo-inspired syntax alternatives have appeared in the past, none of them follows the primary design principles of Reo: a) declarative specification; b) all channel types and their sorts are user-defined; and c) channels compose via shared nodes. In this paper, we offer a textual syntax for Reo that respects these principles and supports flexible parameter passing, iteration, recursion, and conditional specification. In on-going work, we use this textual syntax to compile Reo into target languages such as Java, Promela, and Maude.
[ { "created": "Tue, 26 Jun 2018 08:55:13 GMT", "version": "v1" } ]
2018-06-27
[ [ "Dokter", "Kasper", "", "CWI" ], [ "Arbab", "Farhad", "", "CWI" ] ]
Reo is an interaction-centric model of concurrency for compositional specification of communication and coordination protocols. Formal verification tools exist to ensure correctness and compliance of protocols specified in Reo, which can readily be (re)used in different applications, or composed into more complex protocols. Recent benchmarks show that compiling such high-level Reo specifications produces executable code that can compete with or even beat the performance of hand-crafted programs written in languages such as C or Java using conventional concurrency constructs. The original declarative graphical syntax of Reo does not support intuitive constructs for parameter passing, iteration, recursion, or conditional specification. This shortcoming hinders Reo's uptake in large-scale practical applications. Although a number of Reo-inspired syntax alternatives have appeared in the past, none of them follows the primary design principles of Reo: a) declarative specification; b) all channel types and their sorts are user-defined; and c) channels compose via shared nodes. In this paper, we offer a textual syntax for Reo that respects these principles and supports flexible parameter passing, iteration, recursion, and conditional specification. In on-going work, we use this textual syntax to compile Reo into target languages such as Java, Promela, and Maude.
1703.05539
Sven Hug
Sven E. Hug, Martin P. Braendle
The coverage of Microsoft Academic: Analyzing the publication output of a university
null
null
10.1007/s11192-017-2535-3
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the first detailed study on the coverage of Microsoft Academic (MA). Based on the complete and verified publication list of a university, the coverage of MA was assessed and compared with two benchmark databases, Scopus and Web of Science (WoS), on the level of individual publications. Citation counts were analyzed, and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA based on publication titles. The script is freely available on GitHub. We find that MA covers journal articles, working papers, and conference items to a substantial extent and indexes more document types than the benchmark databases (e.g., working papers, dissertations). MA clearly surpasses Scopus and WoS in covering book-related document types and conference items but falls slightly behind Scopus in journal articles. The coverage of MA is favorable for evaluative bibliometrics in most research fields, including economics/business, computer/information sciences, and mathematics. However, MA shows biases similar to Scopus and WoS with regard to the coverage of the humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases. We find that the publication year is correct for 89.5% of all publications and the number of authors is correct for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA metadata are still lacking.
[ { "created": "Thu, 16 Mar 2017 09:56:33 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2017 23:29:36 GMT", "version": "v2" }, { "created": "Thu, 11 May 2017 13:06:37 GMT", "version": "v3" }, { "created": "Mon, 25 Sep 2017 17:42:22 GMT", "version": "v4" } ]
2019-08-06
[ [ "Hug", "Sven E.", "" ], [ "Braendle", "Martin P.", "" ] ]
This is the first detailed study on the coverage of Microsoft Academic (MA). Based on the complete and verified publication list of a university, the coverage of MA was assessed and compared with two benchmark databases, Scopus and Web of Science (WoS), on the level of individual publications. Citation counts were analyzed, and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA based on publication titles. The script is freely available on GitHub. We find that MA covers journal articles, working papers, and conference items to a substantial extent and indexes more document types than the benchmark databases (e.g., working papers, dissertations). MA clearly surpasses Scopus and WoS in covering book-related document types and conference items but falls slightly behind Scopus in journal articles. The coverage of MA is favorable for evaluative bibliometrics in most research fields, including economics/business, computer/information sciences, and mathematics. However, MA shows biases similar to Scopus and WoS with regard to the coverage of the humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases. We find that the publication year is correct for 89.5% of all publications and the number of authors is correct for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA metadata are still lacking.
1911.03897
Yaoyiran Li
Yaoyiran Li, Jing Jiang
Two-Headed Monster And Crossed Co-Attention Networks
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents some preliminary investigations of a new co-attention mechanism in neural transduction models. We propose a paradigm, termed Two-Headed Monster (THM), which consists of two symmetric encoder modules and one decoder module connected with co-attention. As a specific and concrete implementation of THM, Crossed Co-Attention Networks (CCNs) are designed based on the Transformer model. We demonstrate CCNs on WMT 2014 EN-DE and WMT 2016 EN-FI translation tasks and our model outperforms the strong Transformer baseline by 0.51 (big) and 0.74 (base) BLEU points on EN-DE and by 0.17 (big) and 0.47 (base) BLEU points on EN-FI.
[ { "created": "Sun, 10 Nov 2019 10:55:12 GMT", "version": "v1" } ]
2019-11-12
[ [ "Li", "Yaoyiran", "" ], [ "Jiang", "Jing", "" ] ]
This paper presents some preliminary investigations of a new co-attention mechanism in neural transduction models. We propose a paradigm, termed Two-Headed Monster (THM), which consists of two symmetric encoder modules and one decoder module connected with co-attention. As a specific and concrete implementation of THM, Crossed Co-Attention Networks (CCNs) are designed based on the Transformer model. We demonstrate CCNs on WMT 2014 EN-DE and WMT 2016 EN-FI translation tasks and our model outperforms the strong Transformer baseline by 0.51 (big) and 0.74 (base) BLEU points on EN-DE and by 0.17 (big) and 0.47 (base) BLEU points on EN-FI.
2309.12650
Yixin Chen
Yixin Chen, Ourui Fu, Wenrui Shao, Zhaoheng Xie
FP-PET: Large Model, Multiple Loss And Focused Practice
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study presents FP-PET, a comprehensive approach to medical image segmentation with a focus on CT and PET images. Utilizing a dataset from the AutoPet2023 Challenge, the research employs a variety of machine learning models, including STUNet-large, SwinUNETR, and VNet, to achieve state-of-the-art segmentation performance. The paper introduces an aggregated score that combines multiple evaluation metrics such as Dice score, false positive volume (FPV), and false negative volume (FNV) to provide a holistic measure of model effectiveness. The study also discusses the computational challenges and solutions related to model training, which was conducted on high-performance GPUs. Preprocessing and postprocessing techniques, including Gaussian weighting schemes and morphological operations, are explored to further refine the segmentation output. The research offers valuable insights into the challenges and solutions for advanced medical image segmentation.
[ { "created": "Fri, 22 Sep 2023 06:44:28 GMT", "version": "v1" } ]
2023-09-25
[ [ "Chen", "Yixin", "" ], [ "Fu", "Ourui", "" ], [ "Shao", "Wenrui", "" ], [ "Xie", "Zhaoheng", "" ] ]
This study presents FP-PET, a comprehensive approach to medical image segmentation with a focus on CT and PET images. Utilizing a dataset from the AutoPet2023 Challenge, the research employs a variety of machine learning models, including STUNet-large, SwinUNETR, and VNet, to achieve state-of-the-art segmentation performance. The paper introduces an aggregated score that combines multiple evaluation metrics such as Dice score, false positive volume (FPV), and false negative volume (FNV) to provide a holistic measure of model effectiveness. The study also discusses the computational challenges and solutions related to model training, which was conducted on high-performance GPUs. Preprocessing and postprocessing techniques, including Gaussian weighting schemes and morphological operations, are explored to further refine the segmentation output. The research offers valuable insights into the challenges and solutions for advanced medical image segmentation.
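The abstract above mentions an aggregated score built from Dice score, false positive volume (FPV), and false negative volume (FNV). As a hedged illustration only (the toy masks, voxel coordinates, and voxel-count notion of "volume" are assumptions; the paper's actual aggregation formula is not reproduced here), the three underlying quantities can be computed from binary segmentation masks like this:

```python
def seg_metrics(pred, gt):
    """Dice score, false-positive volume (FPV) and false-negative volume (FNV)
    for binary masks given as sets of voxel coordinates (a toy stand-in for
    full 3D arrays; volumes here are simple voxel counts)."""
    tp = len(pred & gt)   # voxels predicted and present in ground truth
    fp = len(pred - gt)   # predicted but absent: contributes to FPV
    fn = len(gt - pred)   # present but missed: contributes to FNV
    dice = 2 * tp / (2 * tp + fp + fn) if (pred or gt) else 1.0
    return dice, fp, fn

# Tiny made-up 2D example masks
dice, fpv, fnv = seg_metrics({(0, 0), (0, 1), (1, 0)}, {(0, 0), (0, 1), (1, 1)})
# tp = 2, fpv = 1, fnv = 1, so dice = 4/6 ≈ 0.667
```

Any aggregation of the three into a single score (e.g., a weighted combination) would then be a design choice of the evaluator.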
1810.12646
Uwe Reichel
Uwe D. Reichel and Katalin M\'ady and Jennifer Cole
Prosodic entrainment in dialog acts
This manuscript is under revision. Please contact the authors for information about updates
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examined prosodic entrainment in spoken dialogs separately for several dialog acts in cooperative and competitive games. Entrainment was measured for intonation features derived from a superpositional intonation stylization as well as for rhythm features. The observed differences can be related to the cooperative or competitive nature of the game, as well as to dialog act properties such as intrinsic authority, supportiveness, and distributional characteristics. In cooperative games, dialog acts with high authority conferred by knowledge and with high frequency showed the most entrainment. The results are discussed, among other aspects, with respect to the degree of active entrainment control in cooperative behavior.
[ { "created": "Tue, 30 Oct 2018 10:53:42 GMT", "version": "v1" } ]
2018-10-31
[ [ "Reichel", "Uwe D.", "" ], [ "Mády", "Katalin", "" ], [ "Cole", "Jennifer", "" ] ]
We examined prosodic entrainment in spoken dialogs separately for several dialog acts in cooperative and competitive games. Entrainment was measured for intonation features derived from a superpositional intonation stylization as well as for rhythm features. The observed differences can be related to the cooperative or competitive nature of the game, as well as to dialog act properties such as intrinsic authority, supportiveness, and distributional characteristics. In cooperative games, dialog acts with high authority conferred by knowledge and with high frequency showed the most entrainment. The results are discussed, among other aspects, with respect to the degree of active entrainment control in cooperative behavior.
1210.5908
Carlos Gershenson
Keith D. Farnsworth, John Nelson and Carlos Gershenson
Living is information processing: from molecules to global systems
28 pages, 2 figures
Acta Biotheoretica, 61(2):203-222. 2013
10.1007/s10441-013-9179-3
null
cs.IT math.IT physics.bio-ph q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend the concept that life is an informational phenomenon, at every level of organisation, from molecules to the global ecological system. According to this thesis: (a) living is information processing, in which memory is maintained by both molecular states and ecological states as well as the more obvious nucleic acid coding; (b) this information processing has one overall function - to perpetuate itself; and (c) the processing method is filtration (cognition) of, and synthesis of, information at lower levels to appear at higher levels in complex systems (emergence). We show how information patterns are united by the creation of mutual context, generating persistent consequences, to result in `functional information'. This constructive process forms arbitrarily large complexes of information, the combined effects of which include the functions of life. Molecules and simple organisms have already been measured in terms of functional information content; we show how quantification may be extended to each level of organisation up to the ecological. In terms of a computer analogy, life is both the data and the program and its biochemical structure is the way the information is embodied. This idea supports the seamless integration of life at all scales with the physical universe. The innovation reported here is essentially to integrate these ideas, basing information on the `general definition' of information, rather than simply the statistics of information, thereby explaining how functional information operates throughout life.
[ { "created": "Mon, 22 Oct 2012 14:42:31 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2013 16:10:43 GMT", "version": "v2" } ]
2014-02-17
[ [ "Farnsworth", "Keith D.", "" ], [ "Nelson", "John", "" ], [ "Gershenson", "Carlos", "" ] ]
We extend the concept that life is an informational phenomenon, at every level of organisation, from molecules to the global ecological system. According to this thesis: (a) living is information processing, in which memory is maintained by both molecular states and ecological states as well as the more obvious nucleic acid coding; (b) this information processing has one overall function - to perpetuate itself; and (c) the processing method is filtration (cognition) of, and synthesis of, information at lower levels to appear at higher levels in complex systems (emergence). We show how information patterns are united by the creation of mutual context, generating persistent consequences, to result in `functional information'. This constructive process forms arbitrarily large complexes of information, the combined effects of which include the functions of life. Molecules and simple organisms have already been measured in terms of functional information content; we show how quantification may be extended to each level of organisation up to the ecological. In terms of a computer analogy, life is both the data and the program and its biochemical structure is the way the information is embodied. This idea supports the seamless integration of life at all scales with the physical universe. The innovation reported here is essentially to integrate these ideas, basing information on the `general definition' of information, rather than simply the statistics of information, thereby explaining how functional information operates throughout life.
1906.01127
Zequn Wang
Narendra Patwardhan and Zequn Wang
Proximal Reliability Optimization for Reinforcement Learning
12 pages, 6 figures
null
null
null
cs.LG cs.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite numerous advances, reinforcement learning remains far from widespread acceptance for autonomous controller design compared to classical methods, due to its limited ability to effectively tackle the reality gap. The reliance on an absolute or deterministic reward as the metric for the optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantifies the uncertainty of the design space and induces robustness in controllers by switching to a reliability-based optimization routine. The data efficiency of the method is kept on par with reward-based optimization methods by employing a model-based approach. We prove the stability of learned neuro-controllers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and Inverted Pendulum.
[ { "created": "Mon, 3 Jun 2019 23:43:16 GMT", "version": "v1" } ]
2019-06-05
[ [ "Patwardhan", "Narendra", "" ], [ "Wang", "Zequn", "" ] ]
Despite numerous advances, reinforcement learning remains far from widespread acceptance for autonomous controller design compared to classical methods, due to its limited ability to effectively tackle the reality gap. The reliance on an absolute or deterministic reward as the metric for the optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantifies the uncertainty of the design space and induces robustness in controllers by switching to a reliability-based optimization routine. The data efficiency of the method is kept on par with reward-based optimization methods by employing a model-based approach. We prove the stability of learned neuro-controllers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and Inverted Pendulum.
2103.08783
Devlin Gualtieri Ph.D.
Devlin Gualtieri
One-Time Pads from the Digits of Pi
12 page PDF file (5 page article with 7 pages of computer source code). Comments are welcome
null
null
null
cs.OH
http://creativecommons.org/licenses/by-sa/4.0/
I present a method for generating one-time pads from the digits of pi. Computer code is given to generate such pads from passphrases in a method having an extremely low probability (<10^-53) of a successful discovery of the one-time pads by a brute-force attack. The advantages and disadvantages of this method are discussed.
[ { "created": "Tue, 16 Mar 2021 00:34:44 GMT", "version": "v1" } ]
2021-03-17
[ [ "Gualtieri", "Devlin", "" ] ]
I present a method for generating one-time pads from the digits of pi. Computer code is given to generate such pads from passphrases in a method having an extremely low probability (<10^-53) of a successful discovery of the one-time pads by a brute-force attack. The advantages and disadvantages of this method are discussed.
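As a rough sketch of the idea in the abstract above (not the paper's published code; the hardcoded digit string, three-digit chunking scheme, and example message are illustrative assumptions), a pad can be derived from leading digits of pi and XORed with the message:

```python
# Toy XOR cipher keyed by hardcoded leading digits of pi. A true one-time pad
# requires a truly random, never-reused key; this snippet only illustrates the
# pad-derivation and XOR mechanics, not the paper's passphrase-based scheme.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def pad_bytes(n, offset=0):
    """Derive n pad bytes from pi digits, three digits per byte (mod 256)."""
    digits = PI_DIGITS[offset:offset + 3 * n]
    return bytes(int(digits[3 * i:3 * i + 3]) % 256 for i in range(n))

def xor_cipher(data: bytes, offset: int = 0) -> bytes:
    """Encrypt or decrypt: XOR with the same pad is its own inverse."""
    pad = pad_bytes(len(data), offset)
    return bytes(b ^ p for b, p in zip(data, pad))

msg = b"ATTACK AT DAWN"
ct = xor_cipher(msg)
assert xor_cipher(ct) == msg  # decryption recovers the plaintext
```

In the paper's setting, a passphrase would select the starting position within the digit stream; here `offset` stands in for that mechanism.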
2107.14590
Yiyang Li
GuoLiang Li and Yiyang Li
Residual Tree Aggregation of Layers for Neural Machine Translation
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Although attention-based Neural Machine Translation has achieved remarkable progress in recent years, it still makes insufficient use of the output of each layer. The Transformer uses only the top layer of the encoder and decoder in the subsequent process, which makes it impossible to take advantage of the useful information in other layers. To address this issue, we propose a residual tree aggregation of layers for the Transformer (RTAL), which helps to fuse information across layers. Specifically, we fuse the information across layers by constructing a post-order binary tree. In addition to the last node, we add a residual connection to the process of generating child nodes. Our model is based on the Neural Machine Translation model Transformer, and we conduct our experiments on the WMT14 English-to-German and WMT17 English-to-French translation tasks. Experimental results across language pairs show that the proposed approach significantly outperforms the strong baseline model.
[ { "created": "Mon, 19 Jul 2021 09:32:10 GMT", "version": "v1" } ]
2021-08-02
[ [ "Li", "GuoLiang", "" ], [ "Li", "Yiyang", "" ] ]
Although attention-based Neural Machine Translation has achieved remarkable progress in recent years, it still makes insufficient use of the output of each layer. The Transformer uses only the top layer of the encoder and decoder in the subsequent process, which makes it impossible to take advantage of the useful information in other layers. To address this issue, we propose a residual tree aggregation of layers for the Transformer (RTAL), which helps to fuse information across layers. Specifically, we fuse the information across layers by constructing a post-order binary tree. In addition to the last node, we add a residual connection to the process of generating child nodes. Our model is based on the Neural Machine Translation model Transformer, and we conduct our experiments on the WMT14 English-to-German and WMT17 English-to-French translation tasks. Experimental results across language pairs show that the proposed approach significantly outperforms the strong baseline model.
2408.01129
Haohao Qu
Haohao Qu, Liangbo Ning, Rui An, Wenqi Fan, Tyler Derr, Xin Xu, Qing Li
A Survey of Mamba
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning, as a vital technique, has sparked a notable revolution in artificial intelligence. As the most representative architecture, Transformers have empowered numerous advanced models, especially the large language models that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models, has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first recall the foundational knowledge of various representative deep learning models and the details of Mamba as preliminaries. Then, to showcase the significance of Mamba, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations.
[ { "created": "Fri, 2 Aug 2024 09:18:41 GMT", "version": "v1" } ]
2024-08-05
[ [ "Qu", "Haohao", "" ], [ "Ning", "Liangbo", "" ], [ "An", "Rui", "" ], [ "Fan", "Wenqi", "" ], [ "Derr", "Tyler", "" ], [ "Xu", "Xin", "" ], [ "Li", "Qing", "" ] ]
Deep learning, as a vital technique, has sparked a notable revolution in artificial intelligence. As the most representative architecture, Transformers have empowered numerous advanced models, especially the large language models that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models, has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first recall the foundational knowledge of various representative deep learning models and the details of Mamba as preliminaries. Then, to showcase the significance of Mamba, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations.
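The abstract above credits Mamba's near-linear scaling to its roots in classical state space models. A minimal sketch of the underlying discretized linear recurrence (scalar state with fixed coefficients chosen here purely for illustration; Mamba itself uses large hidden states and makes the parameters input-dependent) shows why the cost grows linearly with sequence length:

```python
def ssm_scan(x, A=0.5, B=0.5, C=1.0):
    """Run the discretized linear state-space recurrence over a sequence:
    h[t] = A*h[t-1] + B*x[t],  y[t] = C*h[t].
    One O(1) state update per step, hence O(T) total cost for length T,
    in contrast to the O(T^2) pairwise interactions of full attention."""
    h, ys = 0.0, []
    for xt in x:
        h = A * h + B * xt   # state update carries compressed history
        ys.append(C * h)     # readout from the state
    return ys

ys = ssm_scan([1.0, 0.0, 0.0])  # impulse response decays geometrically
# ys == [0.5, 0.25, 0.125]
```

The recurrence can also be unrolled as a convolution for parallel training, which is part of what makes this family attractive for foundation models.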
1910.09134
Jiaying Lu
Jiaying Lu, Xin Ye, Yi Ren, Yezhou Yang
Good, Better, Best: Textual Distractors Generation for Multiple-Choice Visual Question Answering via Reinforcement Learning
null
CVPR'2022 Workshop on Open-Domain Retrieval Under a Multi-Modal Setting
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-choice VQA has drawn increasing attention from researchers and end-users recently. As the demand for automatically constructing large-scale multiple-choice VQA data grows, we introduce a novel task called textual Distractors Generation for VQA (DG-VQA), which focuses on generating challenging yet meaningful distractors given the context image, question, and correct answer. The DG-VQA task aims at generating distractors without ground-truth training samples, since such resources are rarely available. To tackle DG-VQA in an unsupervised manner, we propose Gobbet, a reinforcement learning (RL) based framework that utilizes pre-trained VQA models as an alternative knowledge base to guide the distractor generation process. In Gobbet, a pre-trained VQA model serves as the environment in the RL setting, providing feedback for the input multi-modal query, while a neural distractor generator serves as the agent that takes actions accordingly. We propose to use the performance degradation of existing VQA models as an indicator of the quality of generated distractors. On the other hand, we show the utility of generated distractors through data augmentation experiments, since robustness is increasingly important when AI models are applied in unpredictable open-domain scenarios or security-sensitive applications. We further conduct a manual case study on the factors that allow distractors generated by Gobbet to fool existing models.
[ { "created": "Mon, 21 Oct 2019 03:32:17 GMT", "version": "v1" }, { "created": "Sat, 27 Mar 2021 21:01:47 GMT", "version": "v2" }, { "created": "Mon, 18 Apr 2022 19:44:03 GMT", "version": "v3" } ]
2022-04-20
[ [ "Lu", "Jiaying", "" ], [ "Ye", "Xin", "" ], [ "Ren", "Yi", "" ], [ "Yang", "Yezhou", "" ] ]
Multiple-choice VQA has drawn increasing attention from researchers and end-users recently. As the demand for automatically constructing large-scale multiple-choice VQA data grows, we introduce a novel task called textual Distractors Generation for VQA (DG-VQA), which focuses on generating challenging yet meaningful distractors given the context image, question, and correct answer. The DG-VQA task aims at generating distractors without ground-truth training samples, since such resources are rarely available. To tackle DG-VQA in an unsupervised manner, we propose Gobbet, a reinforcement learning (RL) based framework that utilizes pre-trained VQA models as an alternative knowledge base to guide the distractor generation process. In Gobbet, a pre-trained VQA model serves as the environment in the RL setting, providing feedback for the input multi-modal query, while a neural distractor generator serves as the agent that takes actions accordingly. We propose to use the performance degradation of existing VQA models as an indicator of the quality of generated distractors. On the other hand, we show the utility of generated distractors through data augmentation experiments, since robustness is increasingly important when AI models are applied in unpredictable open-domain scenarios or security-sensitive applications. We further conduct a manual case study on the factors that allow distractors generated by Gobbet to fool existing models.
1911.11881
Yifei Fan
Chao Tang, Yifei Fan, Anthony Yezzi
An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense
NeurIPS-2019 Workshop on Safety and Robustness in Decision Making
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The safety and robustness of learning-based decision-making systems are under threat from adversarial examples, as imperceptible perturbations can mislead neural networks to completely different outputs. In this paper, we present an adaptive view of the issue by evaluating various test-time smoothing defenses against white-box untargeted adversarial examples. Through controlled experiments with a pretrained ResNet-152 on ImageNet, we first illustrate the non-monotonic relation between adversarial attacks and smoothing defenses. Then, at the dataset level, we observe large variance among samples and show that it is easy to inflate accuracy (even to 100%) or build large-scale (i.e., with size ~10^4) subsets on which a designated method outperforms others by a large margin. Finally, at the sample level, as different adversarial examples require different degrees of defense, the potential advantages of iterative methods are also discussed. We hope this paper reveals useful behaviors of test-time defenses, which could help improve the evaluation process for adversarial robustness in the future.
[ { "created": "Tue, 26 Nov 2019 23:45:25 GMT", "version": "v1" } ]
2019-11-28
[ [ "Tang", "Chao", "" ], [ "Fan", "Yifei", "" ], [ "Yezzi", "Anthony", "" ] ]
The safety and robustness of learning-based decision-making systems are under threat from adversarial examples, as imperceptible perturbations can mislead neural networks to completely different outputs. In this paper, we present an adaptive view of the issue by evaluating various test-time smoothing defenses against white-box untargeted adversarial examples. Through controlled experiments with a pretrained ResNet-152 on ImageNet, we first illustrate the non-monotonic relation between adversarial attacks and smoothing defenses. Then, at the dataset level, we observe large variance among samples and show that it is easy to inflate accuracy (even to 100%) or build large-scale (i.e., with size ~10^4) subsets on which a designated method outperforms others by a large margin. Finally, at the sample level, as different adversarial examples require different degrees of defense, the potential advantages of iterative methods are also discussed. We hope this paper reveals useful behaviors of test-time defenses, which could help improve the evaluation process for adversarial robustness in the future.
2303.14464
Ana Ozaki
Emilia Przybysz and Bimal Bhattarai and Cosimo Persia and Ana Ozaki and Ole-Christoffer Granmo and Jivitesh Sharma
Verifying Properties of Tsetlin Machines
12 pages, accepted at ISTM (https://istm.no/)
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Tsetlin Machines (TsMs) are a promising and interpretable machine learning method which can be applied to various classification tasks. We present an exact encoding of TsMs into propositional logic and formally verify properties of TsMs using a SAT solver. In particular, we introduce in this work a notion of similarity of machine learning models and apply our notion to check for similarity of TsMs. We also consider notions of robustness and equivalence from the literature and adapt them for TsMs. Then, we show the correctness of our encoding and provide results for the properties: adversarial robustness, equivalence, and similarity of TsMs. In our experiments, we employ the MNIST and IMDB datasets for (respectively) image and sentiment classification. We compare the robustness verification results obtained for TsMs with those reported in the literature for Binarized Neural Networks on MNIST.
[ { "created": "Sat, 25 Mar 2023 13:17:21 GMT", "version": "v1" }, { "created": "Sun, 2 Jul 2023 13:47:37 GMT", "version": "v2" } ]
2023-07-04
[ [ "Przybysz", "Emilia", "" ], [ "Bhattarai", "Bimal", "" ], [ "Persia", "Cosimo", "" ], [ "Ozaki", "Ana", "" ], [ "Granmo", "Ole-Christoffer", "" ], [ "Sharma", "Jivitesh", "" ] ]
Tsetlin Machines (TsMs) are a promising and interpretable machine learning method which can be applied to various classification tasks. We present an exact encoding of TsMs into propositional logic and formally verify properties of TsMs using a SAT solver. In particular, we introduce in this work a notion of similarity of machine learning models and apply our notion to check for similarity of TsMs. We also consider notions of robustness and equivalence from the literature and adapt them for TsMs. Then, we show the correctness of our encoding and provide results for the properties: adversarial robustness, equivalence, and similarity of TsMs. In our experiments, we employ the MNIST and IMDB datasets for (respectively) image and sentiment classification. We compare the robustness verification results obtained for TsMs with those reported in the literature for Binarized Neural Networks on MNIST.
0903.4796
Martin Vatshelle
B.-M. Bui-Xuan and J. A. Telle and M. Vatshelle (Department of Informatics, University of Bergen, Norway)
Fast FPT algorithms for vertex subset and vertex partitioning problems using neighborhood unions
The new version has runtimes expressed by number of equivalence classes, but no other changes
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the graph parameter boolean-width, related to the number of different unions of neighborhoods across a cut of a graph. Boolean-width is similar to rank-width, which is related to the number of $GF[2]$-sums (1+1=0) of neighborhoods instead of the boolean-sums (1+1=1) used for boolean-width. We give algorithms for a large class of NP-hard vertex subset and vertex partitioning problems that are FPT when parameterized by either boolean-width, rank-width or clique-width, with runtime single exponential in either parameter if given the pertinent optimal decomposition. To compare boolean-width versus rank-width or clique-width, we first show that for any graph, the square root of its boolean-width is never more than its rank-width. Next, we exhibit a class of graphs, the Hsu-grids, for which we can solve NP-hard problems in polynomial time, if we use the right parameter. An $n \times \frac{n}{10}$ Hsu-grid on ${1/10}n^2$ vertices has boolean-width $\Theta(\log n)$ and rank-width $\Theta(n)$. Moreover, any optimal rank-decomposition of such a graph will have boolean-width $\Theta(n)$, i.e. exponential in the optimal boolean-width. A main open problem is to approximate the boolean-width better than what is given by the algorithm for rank-width [Hlin\v{e}n\'y and Oum, 2008].
[ { "created": "Fri, 27 Mar 2009 13:34:08 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2009 07:24:30 GMT", "version": "v2" }, { "created": "Wed, 9 Mar 2011 15:17:52 GMT", "version": "v3" } ]
2011-03-10
[ [ "Bui-Xuan", "B. -M.", "", "Department of\n Informatics, University of Bergen, Norway" ], [ "Telle", "J. A.", "", "Department of\n Informatics, University of Bergen, Norway" ], [ "Vatshelle", "M.", "", "Department of\n Informatics, University of Bergen, Norway" ] ]
We introduce the graph parameter boolean-width, related to the number of different unions of neighborhoods across a cut of a graph. Boolean-width is similar to rank-width, which is related to the number of $GF[2]$-sums (1+1=0) of neighborhoods instead of the boolean-sums (1+1=1) used for boolean-width. We give algorithms for a large class of NP-hard vertex subset and vertex partitioning problems that are FPT when parameterized by either boolean-width, rank-width or clique-width, with runtime single exponential in either parameter if given the pertinent optimal decomposition. To compare boolean-width versus rank-width or clique-width, we first show that for any graph, the square root of its boolean-width is never more than its rank-width. Next, we exhibit a class of graphs, the Hsu-grids, for which we can solve NP-hard problems in polynomial time, if we use the right parameter. An $n \times \frac{n}{10}$ Hsu-grid on ${1/10}n^2$ vertices has boolean-width $\Theta(\log n)$ and rank-width $\Theta(n)$. Moreover, any optimal rank-decomposition of such a graph will have boolean-width $\Theta(n)$, i.e. exponential in the optimal boolean-width. A main open problem is to approximate the boolean-width better than what is given by the algorithm for rank-width [Hlin\v{e}n\'y and Oum, 2008].
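The defining quantity in the abstract above, the number of distinct unions of neighborhoods across a cut, can be illustrated with a toy enumeration (the example graph and cut are made up; this brute force is exponential in the side of the cut and is not the paper's algorithm):

```python
from itertools import combinations
from math import log2

def distinct_neighborhood_unions(adj, A, B):
    """Count the distinct sets U(S) = union over v in S of (N(v) ∩ B),
    taken over all subsets S of A, including the empty set."""
    Bset = set(B)
    unions = set()
    for r in range(len(A) + 1):
        for S in combinations(A, r):
            u = set()
            for v in S:
                u |= adj[v] & Bset   # neighbors of v on the far side of the cut
            unions.add(frozenset(u))
    return len(unions)

# Path graph a-b-c-d with the cut {a, b} | {c, d}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
k = distinct_neighborhood_unions(adj, ["a", "b"], ["c", "d"])
# k == 2 distinct unions ({} and {c}), so this cut contributes log2(2) = 1
```

The boolean-width of a decomposition is then the maximum, over its cuts, of the base-2 logarithm of this count.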
2309.13636
Ikechukwu Onyenwe
Nwafor Emmanuel O, Ngozi Maryrose Umeh, Ikechukwu Ekene Onyenwe
Development of an intelligent system for the detection of corona virus using artificial neural network
13 pages, 8 Figures
International Journal of Real-Time Application and Computing Systems (IJORTACS) Volume 1, Issue XI, November 2022, pp. 294-306
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents the development of an intelligent system for the detection of coronavirus using an artificial neural network. This was done after a literature review, which indicated that high fever accounts for 87.9% of COVID-19 symptoms. 683 temperature readings of COVID-19 patients at >= 38°C were collected from Colliery Hospital, Enugu, Nigeria, and used to train an artificial neural network detection model for COVID-19. The generated reference model was converted into Verilog code using a Hardware Description Language (HDL) and then burned into a Field Programmable Gate Array (FPGA) controller using the FPGA tool in Matlab. The performance of the model, evaluated using a confusion matrix, regression, and mean square error (MSE), showed that the regression value is 0.967, the accuracy is 97%, and the MSE is 0.00100Mu. These results imply that the new detection system is reliable and very effective for the detection of COVID-19.
[ { "created": "Sun, 24 Sep 2023 13:30:50 GMT", "version": "v1" } ]
2023-09-26
[ [ "O", "Nwafor Emmanuel", "" ], [ "Umeh", "Ngozi Maryrose", "" ], [ "Onyenwe", "Ikechukwu Ekene", "" ] ]
This paper presents the development of an intelligent system for the detection of coronavirus using an artificial neural network. This was done after a series of literature reviews, which indicated that high fever accounts for 87.9% of COVID-19 symptoms. 683 temperature readings of COVID-19 patients at >= 38°C were collected from Colliery hospital Enugu, Nigeria and used to train an artificial neural network detection model for the detection of COVID-19. The reference model generated was converted into Verilog code using a Hardware Description Language (HDL) and then burned onto a Field Programmable Gate Array (FPGA) controller using the FPGA tool in Matlab. The performance of the model, evaluated using a confusion matrix, regression and mean square error (MSE), showed that the regression value is 0.967, the accuracy is 97% and the MSE is 0.00100Mu. These results all imply that the new detection system is reliable and very effective for the detection of COVID-19.
1905.04779
Paolo Frasca
Francesco Acciani, Paolo Frasca, Geert Heijenk, Anton Stoorvogel
Stochastic String Stability of Vehicle Platoons via Cooperative Adaptive Cruise Control with Lossy Communication
10 pages, 7 figures; submitted to journal
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is about obtaining stable vehicle platooning by using Cooperative Adaptive Cruise Control when the communication is unreliable and suffers from message losses. We model communication losses as independent random events and we propose an original design for the cooperative controller, which mitigates the effect of losses. This objective is obtained by a switching controller that has a twofold objective: on the one hand, it promotes both plant stability and string stability of the average error dynamics by an $H_\infty$ approach, and on the other hand it minimizes the variance around the average. We show by simulations that the proposed controller is able to compensate even for high probability of losses.
[ { "created": "Sun, 12 May 2019 19:54:15 GMT", "version": "v1" } ]
2019-05-14
[ [ "Acciani", "Francesco", "" ], [ "Frasca", "Paolo", "" ], [ "Heijenk", "Geert", "" ], [ "Stoorvogel", "Anton", "" ] ]
This paper is about obtaining stable vehicle platooning by using Cooperative Adaptive Cruise Control when the communication is unreliable and suffers from message losses. We model communication losses as independent random events and we propose an original design for the cooperative controller, which mitigates the effect of losses. This objective is obtained by a switching controller that has a twofold objective: on the one hand, it promotes both plant stability and string stability of the average error dynamics by an $H_\infty$ approach, and on the other hand it minimizes the variance around the average. We show by simulations that the proposed controller is able to compensate even for high probability of losses.
1702.08349
P{\aa}l Sunds{\o}y
P\r{a}l Sunds{\o}y
Big Data for Social Sciences: Measuring patterns of human behavior through large-scale mobile phone data
166 pages, PHD thesis
null
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Through seven publications this dissertation shows how anonymized mobile phone data can contribute to the social good and provide insights into human behaviour on a large scale. The size of the datasets analysed ranges from 500 million to 300 billion phone records, covering millions of people. The key contributions are two-fold: 1. Big Data for Social Good: Through prediction algorithms the results show how mobile phone data can be useful to predict important socio-economic indicators, such as income, illiteracy and poverty in developing countries. Such knowledge can be used to identify where vulnerable groups in society are, reduce economic shocks and is a critical component for monitoring poverty rates over time. Further, the dissertation demonstrates how mobile phone data can be used to better understand human behaviour during large shocks in society, exemplified by an analysis of data from the terror attack in Norway and a natural disaster on the south coast of Bangladesh. This work leads to an increased understanding of how information spreads, and how millions of people move around. The intention is to identify displaced people faster, cheaper and more accurately than existing survey-based methods. 2. Big Data for efficient marketing: Finally, the dissertation offers an insight into how anonymised mobile phone data can be used to map out large social networks, covering millions of people, to understand how products spread inside these networks. Results show that by including social patterns and machine learning techniques in a large-scale marketing experiment in Asia, the adoption rate is increased by 13 times compared to the approach used by experienced marketers. A data-driven and scientific approach to marketing, through more tailored campaigns, contributes to less irrelevant offers for the customers, and better cost efficiency for the companies.
[ { "created": "Mon, 27 Feb 2017 16:09:48 GMT", "version": "v1" } ]
2017-02-28
[ [ "Sundsøy", "Pål", "" ] ]
Through seven publications this dissertation shows how anonymized mobile phone data can contribute to the social good and provide insights into human behaviour on a large scale. The size of the datasets analysed ranges from 500 million to 300 billion phone records, covering millions of people. The key contributions are two-fold: 1. Big Data for Social Good: Through prediction algorithms the results show how mobile phone data can be useful to predict important socio-economic indicators, such as income, illiteracy and poverty in developing countries. Such knowledge can be used to identify where vulnerable groups in society are, reduce economic shocks and is a critical component for monitoring poverty rates over time. Further, the dissertation demonstrates how mobile phone data can be used to better understand human behaviour during large shocks in society, exemplified by an analysis of data from the terror attack in Norway and a natural disaster on the south coast of Bangladesh. This work leads to an increased understanding of how information spreads, and how millions of people move around. The intention is to identify displaced people faster, cheaper and more accurately than existing survey-based methods. 2. Big Data for efficient marketing: Finally, the dissertation offers an insight into how anonymised mobile phone data can be used to map out large social networks, covering millions of people, to understand how products spread inside these networks. Results show that by including social patterns and machine learning techniques in a large-scale marketing experiment in Asia, the adoption rate is increased by 13 times compared to the approach used by experienced marketers. A data-driven and scientific approach to marketing, through more tailored campaigns, contributes to less irrelevant offers for the customers, and better cost efficiency for the companies.
2306.04356
Lingfeng Yang
Lingfeng Yang, Yueze Wang, Xiang Li, Xinlong Wang, Jian Yang
Fine-Grained Visual Prompting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot transfer capabilities in image-level visual perception. However, these models have shown limited performance in instance-level tasks that demand precise localization and recognition. Previous works have suggested that incorporating visual prompts, such as colorful boxes or circles, can improve the ability of models to recognize objects of interest. Nonetheless, compared to language prompting, visual prompting designs are rarely explored. Existing approaches, which employ coarse visual cues such as colorful boxes or circles, often result in sub-optimal performance due to the inclusion of irrelevant and noisy pixels. In this paper, we carefully study the visual prompting designs by exploring more fine-grained markings, such as segmentation masks and their variations. In addition, we introduce a new zero-shot framework that leverages pixel-level annotations acquired from a generalist segmentation model for fine-grained visual prompting. Consequently, our investigation reveals that a straightforward application of blur outside the target mask, referred to as the Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting strategy leverages the precise mask annotations to reduce focus on weakly related regions while retaining spatial coherence between the target and the surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates superior performance in zero-shot comprehension of referring expressions on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the RefCOCO+ testA subset. Code is available at https://github.com/ylingfeng/FGVP.
[ { "created": "Wed, 7 Jun 2023 11:39:56 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 06:36:53 GMT", "version": "v2" } ]
2023-12-13
[ [ "Yang", "Lingfeng", "" ], [ "Wang", "Yueze", "" ], [ "Li", "Xiang", "" ], [ "Wang", "Xinlong", "" ], [ "Yang", "Jian", "" ] ]
Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot transfer capabilities in image-level visual perception. However, these models have shown limited performance in instance-level tasks that demand precise localization and recognition. Previous works have suggested that incorporating visual prompts, such as colorful boxes or circles, can improve the ability of models to recognize objects of interest. Nonetheless, compared to language prompting, visual prompting designs are rarely explored. Existing approaches, which employ coarse visual cues such as colorful boxes or circles, often result in sub-optimal performance due to the inclusion of irrelevant and noisy pixels. In this paper, we carefully study the visual prompting designs by exploring more fine-grained markings, such as segmentation masks and their variations. In addition, we introduce a new zero-shot framework that leverages pixel-level annotations acquired from a generalist segmentation model for fine-grained visual prompting. Consequently, our investigation reveals that a straightforward application of blur outside the target mask, referred to as the Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting strategy leverages the precise mask annotations to reduce focus on weakly related regions while retaining spatial coherence between the target and the surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates superior performance in zero-shot comprehension of referring expressions on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the RefCOCO+ testA subset. Code is available at https://github.com/ylingfeng/FGVP.
2202.08446
Kaiyuan Yang
Dai Li, Akhil Pakala, Kaiyuan Yang
MeNTT: A Compact and Efficient Processing-in-Memory Number Theoretic Transform (NTT) Accelerator
This paper has been accepted to IEEE Transactions on Very Large Scale Integration (TVLSI)
null
null
null
cs.CR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lattice-based cryptography (LBC) exploiting Learning with Errors (LWE) problems is a promising candidate for post-quantum cryptography. Number theoretic transform (NTT) is the latency- and energy-dominant process in the computation of LWE problems. This paper presents a compact and efficient in-MEmory NTT accelerator, named MeNTT, which explores optimized computation in and near a 6T SRAM array. Specifically designed peripherals enable fast and efficient modular operations. Moreover, a novel mapping strategy reduces the data flow between NTT stages into a unique pattern, which greatly simplifies the routing among processing units (i.e., SRAM columns in this work), reducing energy and area overheads. The accelerator achieves significant latency and energy reductions over prior arts.
[ { "created": "Thu, 17 Feb 2022 04:36:10 GMT", "version": "v1" } ]
2022-02-18
[ [ "Li", "Dai", "" ], [ "Pakala", "Akhil", "" ], [ "Yang", "Kaiyuan", "" ] ]
Lattice-based cryptography (LBC) exploiting Learning with Errors (LWE) problems is a promising candidate for post-quantum cryptography. Number theoretic transform (NTT) is the latency- and energy-dominant process in the computation of LWE problems. This paper presents a compact and efficient in-MEmory NTT accelerator, named MeNTT, which explores optimized computation in and near a 6T SRAM array. Specifically designed peripherals enable fast and efficient modular operations. Moreover, a novel mapping strategy reduces the data flow between NTT stages into a unique pattern, which greatly simplifies the routing among processing units (i.e., SRAM columns in this work), reducing energy and area overheads. The accelerator achieves significant latency and energy reductions over prior arts.
1303.0462
M.M.A. Hashem
Moslema Jahan, M. M. A. Hashem and Gazi Abdullah Shahriar
Distributed Evolutionary Computation: A New Technique for Solving Large Number of Equations
null
International Journal of Parallel and Distributed Systems (IJPDS), Vol. 2, No.6, pp.31-49,(2011)
10.5121/ijdps.2011.2604
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary computation techniques have mostly been used to solve various optimization and learning problems successfully. Evolutionary algorithms are more effective than traditional methods at obtaining optimal solutions to complex problems. For problems with a large set of parameters, evolutionary computation techniques incur a huge computational burden on a single processing unit. Taking this limitation into account, this paper presents a new distributed evolutionary computation technique, which decomposes decision vectors into smaller components and achieves optimal solutions in a short time. In this technique, a Jacobi-based Time Variant Adaptive (JBTVA) Hybrid Evolutionary Algorithm is distributed incorporating cluster computation. Moreover, two new selection methods named Best All Selection (BAS) and Twin Selection (TS) are introduced for selecting the best-fit solution vector. Experimental results show that optimal solutions are achieved for different kinds of problems with huge numbers of parameters and that a considerable speedup is obtained in the proposed distributed system.
[ { "created": "Sun, 3 Mar 2013 05:38:41 GMT", "version": "v1" } ]
2013-03-05
[ [ "Jahan", "Moslema", "" ], [ "Hashem", "M. M. A.", "" ], [ "Shahriar", "Gazi Abdullah", "" ] ]
Evolutionary computation techniques have mostly been used to solve various optimization and learning problems successfully. Evolutionary algorithms are more effective than traditional methods at obtaining optimal solutions to complex problems. For problems with a large set of parameters, evolutionary computation techniques incur a huge computational burden on a single processing unit. Taking this limitation into account, this paper presents a new distributed evolutionary computation technique, which decomposes decision vectors into smaller components and achieves optimal solutions in a short time. In this technique, a Jacobi-based Time Variant Adaptive (JBTVA) Hybrid Evolutionary Algorithm is distributed incorporating cluster computation. Moreover, two new selection methods named Best All Selection (BAS) and Twin Selection (TS) are introduced for selecting the best-fit solution vector. Experimental results show that optimal solutions are achieved for different kinds of problems with huge numbers of parameters and that a considerable speedup is obtained in the proposed distributed system.
1910.02065
Oana-Maria Camburu
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
null
NeurIPS 2019 Workshop on Safety and Robustness in Decision Making, Vancouver, Canada
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations --- feature-additivity and feature-selection --- lead to fundamentally different instance-wise explanations. In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals. The second issue is that current post-hoc explainers are either validated under simplistic scenarios (on simple models such as linear regression, or on models trained on syntactic datasets), or, when applied to real-world neural networks, explainers are commonly validated under the assumption that the learned models behave reasonably. However, neural networks often rely on unreasonable correlations, even when producing correct decisions. We introduce a verification framework for explanatory methods under the feature-selection perspective. Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings. We validate the efficacy of our evaluation by showing the failure modes of current explainers. We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
[ { "created": "Fri, 4 Oct 2019 17:44:36 GMT", "version": "v1" }, { "created": "Wed, 9 Oct 2019 14:58:47 GMT", "version": "v2" }, { "created": "Thu, 5 Dec 2019 13:41:46 GMT", "version": "v3" } ]
2019-12-06
[ [ "Camburu", "Oana-Maria", "" ], [ "Giunchiglia", "Eleonora", "" ], [ "Foerster", "Jakob", "" ], [ "Lukasiewicz", "Thomas", "" ], [ "Blunsom", "Phil", "" ] ]
For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations --- feature-additivity and feature-selection --- lead to fundamentally different instance-wise explanations. In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals. The second issue is that current post-hoc explainers are either validated under simplistic scenarios (on simple models such as linear regression, or on models trained on syntactic datasets), or, when applied to real-world neural networks, explainers are commonly validated under the assumption that the learned models behave reasonably. However, neural networks often rely on unreasonable correlations, even when producing correct decisions. We introduce a verification framework for explanatory methods under the feature-selection perspective. Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings. We validate the efficacy of our evaluation by showing the failure modes of current explainers. We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
2301.10901
Tao Jiang
Tao Jiang, Samuel Tan, Stephen Vavasis
Re-embedding data to strengthen recovery guarantees of clustering
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a clustering method that involves chaining four known techniques into a pipeline yielding an algorithm with stronger recovery guarantees than any of the four components separately. Given $n$ points in $\mathbb R^d$, the first component of our pipeline, which we call leapfrog distances, is reminiscent of density-based clustering, yielding an $n\times n$ distance matrix. The leapfrog distances are then translated to new embeddings using multidimensional scaling and spectral methods, two other known techniques, yielding new embeddings of the $n$ points in $\mathbb R^{d'}$, where $d'$ satisfies $d'\ll d$ in general. Finally, sum-of-norms (SON) clustering is applied to the re-embedded points. Although the fourth step (SON clustering) can in principle be replaced by any other clustering method, our focus is on provable guarantees of recovery of underlying structure. Therefore, we establish that the re-embedding improves the recovery guarantees of SON clustering, since SON clustering is a well-studied method that already has provable guarantees.
[ { "created": "Thu, 26 Jan 2023 02:15:11 GMT", "version": "v1" } ]
2023-01-27
[ [ "Jiang", "Tao", "" ], [ "Tan", "Samuel", "" ], [ "Vavasis", "Stephen", "" ] ]
We propose a clustering method that involves chaining four known techniques into a pipeline yielding an algorithm with stronger recovery guarantees than any of the four components separately. Given $n$ points in $\mathbb R^d$, the first component of our pipeline, which we call leapfrog distances, is reminiscent of density-based clustering, yielding an $n\times n$ distance matrix. The leapfrog distances are then translated to new embeddings using multidimensional scaling and spectral methods, two other known techniques, yielding new embeddings of the $n$ points in $\mathbb R^{d'}$, where $d'$ satisfies $d'\ll d$ in general. Finally, sum-of-norms (SON) clustering is applied to the re-embedded points. Although the fourth step (SON clustering) can in principle be replaced by any other clustering method, our focus is on provable guarantees of recovery of underlying structure. Therefore, we establish that the re-embedding improves the recovery guarantees of SON clustering, since SON clustering is a well-studied method that already has provable guarantees.
0808.0234
S Birenjith
K. Sreeram, S. Birenjith, P. Vijay Kumar
DMT of Multi-hop Cooperative Networks - Part I: Basic Results
This submission is Part-I of a two-part paper, which is a detailed version of the previous submission arXiv:0802.1888
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this two-part paper, the DMT of cooperative multi-hop networks is examined. The focus is on single-source single-sink (ss-ss) multi-hop relay networks having slow-fading links and relays that potentially possess multiple antennas. The present paper examines the two end-points of the DMT of full-duplex networks. In particular, the maximum achievable diversity of arbitrary multi-terminal wireless networks is shown to be equal to the min-cut. The maximum multiplexing gain of arbitrary full-duplex ss-ss networks is shown to be equal to the min-cut rank, using a new connection to a deterministic network. We also prove some basic results including a proof that the colored noise encountered in AF protocols for cooperative networks can be treated as white noise for DMT computations. The DMT of a parallel channel with independent MIMO links is also computed here. As an application of these basic results, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable for full-duplex networks with single antenna nodes. All protocols in this paper are explicit and rely only upon amplify-and-forward (AF) relaying. Half-duplex networks are studied, and explicit codes for all protocols proposed in both parts are provided in the companion paper.
[ { "created": "Sat, 2 Aug 2008 07:29:50 GMT", "version": "v1" } ]
2008-08-05
[ [ "Sreeram", "K.", "" ], [ "Birenjith", "S.", "" ], [ "Kumar", "P. Vijay", "" ] ]
In this two-part paper, the DMT of cooperative multi-hop networks is examined. The focus is on single-source single-sink (ss-ss) multi-hop relay networks having slow-fading links and relays that potentially possess multiple antennas. The present paper examines the two end-points of the DMT of full-duplex networks. In particular, the maximum achievable diversity of arbitrary multi-terminal wireless networks is shown to be equal to the min-cut. The maximum multiplexing gain of arbitrary full-duplex ss-ss networks is shown to be equal to the min-cut rank, using a new connection to a deterministic network. We also prove some basic results including a proof that the colored noise encountered in AF protocols for cooperative networks can be treated as white noise for DMT computations. The DMT of a parallel channel with independent MIMO links is also computed here. As an application of these basic results, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable for full-duplex networks with single antenna nodes. All protocols in this paper are explicit and rely only upon amplify-and-forward (AF) relaying. Half-duplex networks are studied, and explicit codes for all protocols proposed in both parts are provided in the companion paper.
2210.06779
Weichen Yu
Weichen Yu, Hongyuan Yu, Yan Huang, Liang Wang
Generalized Inter-class Loss for Gait Recognition
to be published in ACMMM 2022
null
10.1145/3503161.3548311
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gait recognition is a unique biometric technique that can be performed at a long distance non-cooperatively and has broad applications in public safety and intelligent traffic systems. Previous gait works focus more on minimizing the intra-class variance while ignoring the significance of constraining inter-class variance. To this end, we propose a generalized inter-class loss which resolves the inter-class variance from both sample-level feature distribution and class-level feature distribution. Instead of equal penalty strength on pair scores, the proposed loss optimizes sample-level inter-class feature distribution by dynamically adjusting the pairwise weight. Further, in class-level distribution, generalized inter-class loss adds a constraint on the uniformity of inter-class feature distribution, which forces the feature representations to approximate a hypersphere and keep maximal inter-class variance. In addition, the proposed method automatically adjusts the margin between classes which enables the inter-class feature distribution to be more flexible. The proposed method can be generalized to different gait recognition networks and achieves significant improvements. We conduct a series of experiments on CASIA-B and OUMVLP, and the experimental results show that the proposed loss can significantly improve performance and achieve state-of-the-art results.
[ { "created": "Thu, 13 Oct 2022 06:44:53 GMT", "version": "v1" } ]
2022-10-14
[ [ "Yu", "Weichen", "" ], [ "Yu", "Hongyuan", "" ], [ "Huang", "Yan", "" ], [ "Wang", "Liang", "" ] ]
Gait recognition is a unique biometric technique that can be performed at a long distance non-cooperatively and has broad applications in public safety and intelligent traffic systems. Previous gait works focus more on minimizing the intra-class variance while ignoring the significance of constraining inter-class variance. To this end, we propose a generalized inter-class loss which resolves the inter-class variance from both sample-level feature distribution and class-level feature distribution. Instead of equal penalty strength on pair scores, the proposed loss optimizes sample-level inter-class feature distribution by dynamically adjusting the pairwise weight. Further, in class-level distribution, generalized inter-class loss adds a constraint on the uniformity of inter-class feature distribution, which forces the feature representations to approximate a hypersphere and keep maximal inter-class variance. In addition, the proposed method automatically adjusts the margin between classes which enables the inter-class feature distribution to be more flexible. The proposed method can be generalized to different gait recognition networks and achieves significant improvements. We conduct a series of experiments on CASIA-B and OUMVLP, and the experimental results show that the proposed loss can significantly improve performance and achieve state-of-the-art results.
2110.09991
Kaiyu Zheng
Kaiyu Zheng, Rohan Chitnis, Yoonchang Sung, George Konidaris, Stefanie Tellex
Towards Optimal Correlational Object Search
10 pages, 5 figures, 4 tables. IEEE Conference on Robotics and Automation (ICRA) 2022; minor fix in appendix & references
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
In realistic applications of object search, robots will need to locate target objects in complex environments while coping with unreliable sensors, especially for small or hard-to-detect objects. In such settings, correlational information can be valuable for planning efficiently. Previous approaches that consider correlational information typically resort to ad-hoc, greedy search strategies. We introduce the Correlational Object Search POMDP (COS-POMDP), which models correlations while preserving optimal solutions with a reduced state space. We propose a hierarchical planning algorithm to scale up COS-POMDPs for practical domains. Our evaluation, conducted with the AI2-THOR household simulator and the YOLOv5 object detector, shows that our method finds objects more successfully and efficiently compared to baselines, particularly for hard-to-detect objects such as scrub brush and remote control.
[ { "created": "Tue, 19 Oct 2021 14:03:43 GMT", "version": "v1" }, { "created": "Wed, 2 Mar 2022 05:42:32 GMT", "version": "v2" }, { "created": "Fri, 1 Apr 2022 14:23:38 GMT", "version": "v3" } ]
2022-04-04
[ [ "Zheng", "Kaiyu", "" ], [ "Chitnis", "Rohan", "" ], [ "Sung", "Yoonchang", "" ], [ "Konidaris", "George", "" ], [ "Tellex", "Stefanie", "" ] ]
In realistic applications of object search, robots will need to locate target objects in complex environments while coping with unreliable sensors, especially for small or hard-to-detect objects. In such settings, correlational information can be valuable for planning efficiently. Previous approaches that consider correlational information typically resort to ad-hoc, greedy search strategies. We introduce the Correlational Object Search POMDP (COS-POMDP), which models correlations while preserving optimal solutions with a reduced state space. We propose a hierarchical planning algorithm to scale up COS-POMDPs for practical domains. Our evaluation, conducted with the AI2-THOR household simulator and the YOLOv5 object detector, shows that our method finds objects more successfully and efficiently compared to baselines, particularly for hard-to-detect objects such as scrub brush and remote control.
1203.3485
Matthew J. Johnson
Matthew J. Johnson, Alan Willsky
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
null
null
UAI-P-2010-PG-252-259
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
[ { "created": "Thu, 15 Mar 2012 11:17:56 GMT", "version": "v1" } ]
2012-03-19
[ [ "Johnson", "Matthew J.", "" ], [ "Willsky", "Alan", "" ] ]
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
2002.02707
Gabriel John Dusing
Audris Mockus, Diomidis Spinellis, Zoe Kotti, Gabriel John Dusing
A Complete Set of Related Git Repositories Identified via Community Detection Approaches Based on Shared Commits
5 pages
null
10.1145/3379597.3387499
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to understand the state and evolution of the entirety of open source software we need to get a handle on the set of distinct software projects. Most open source projects presently use Git, a distributed version control system that allows easy creation of clones, resulting in numerous repositories that are almost entirely based on some parent repository from which they were cloned. Git commits are based on a Merkle tree, and two commits are highly unlikely to be produced independently. Shared commits, therefore, appear to be an excellent way to group cloned repositories and obtain an accurate map for such repositories. We use the World of Code infrastructure, containing approximately 2B commits and 100M repositories, to create and share such a map. We discover that the largest group contains almost 14M repositories, most of which are unrelated to each other. As it turns out, developers can push git objects to an arbitrary repository or pull objects from unrelated repositories, thus linking unrelated repositories. To address this, we apply the Louvain community detection algorithm to this very large graph consisting of links between commits and projects. The approach successfully reduces the size of the megacluster, with the largest group of highly interconnected projects containing under 100K repositories. We expect that the resulting map of related projects, as well as the tools and methods to handle the very large graph, will serve as a reference set for mining software projects and other applications. Further work is needed to determine different types of relationships among projects induced by shared commits and other relationships, for example, by shared source code or similar filenames.
[ { "created": "Fri, 7 Feb 2020 10:47:30 GMT", "version": "v1" }, { "created": "Sun, 15 Mar 2020 23:50:46 GMT", "version": "v2" }, { "created": "Thu, 26 Mar 2020 16:03:34 GMT", "version": "v3" }, { "created": "Mon, 6 Apr 2020 17:11:49 GMT", "version": "v4" } ]
2020-04-07
[ [ "Mockus", "Audris", "" ], [ "Spinellis", "Diomidis", "" ], [ "Kotti", "Zoe", "" ], [ "Dusing", "Gabriel John", "" ] ]
In order to understand the state and evolution of the entirety of open source software we need to get a handle on the set of distinct software projects. Most open source projects presently use Git, a distributed version control system that allows easy creation of clones, resulting in numerous repositories that are almost entirely based on some parent repository from which they were cloned. Git commits are based on a Merkle tree, and two commits are highly unlikely to be produced independently. Shared commits, therefore, appear to be an excellent way to group cloned repositories and obtain an accurate map for such repositories. We use the World of Code infrastructure, containing approximately 2B commits and 100M repositories, to create and share such a map. We discover that the largest group contains almost 14M repositories, most of which are unrelated to each other. As it turns out, developers can push git objects to an arbitrary repository or pull objects from unrelated repositories, thus linking unrelated repositories. To address this, we apply the Louvain community detection algorithm to this very large graph consisting of links between commits and projects. The approach successfully reduces the size of the megacluster, with the largest group of highly interconnected projects containing under 100K repositories. We expect that the resulting map of related projects, as well as the tools and methods to handle the very large graph, will serve as a reference set for mining software projects and other applications. Further work is needed to determine different types of relationships among projects induced by shared commits and other relationships, for example, by shared source code or similar filenames.
1503.07940
Ananda Theertha Suresh
Alon Orlitsky and Ananda Theertha Suresh
Competitive Distribution Estimation
15 pages
null
null
null
cs.IT cs.DS cs.LG math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating an unknown distribution from its samples is a fundamental problem in statistics. The common, min-max, formulation of this goal considers the performance of the best estimator over all distributions in a class. It shows that with $n$ samples, distributions over $k$ symbols can be learned to a KL divergence that decreases to zero with the sample size $n$, but grows unboundedly with the alphabet size $k$. Min-max performance can be viewed as regret relative to an oracle that knows the underlying distribution. We consider two natural and modest limits on the oracle's power. One where it knows the underlying distribution only up to symbol permutations, and the other where it knows the exact distribution but is restricted to use natural estimators that assign the same probability to symbols that appeared equally many times in the sample. We show that in both cases the competitive regret reduces to $\min(k/n,\tilde{\mathcal{O}}(1/\sqrt n))$, a quantity upper bounded uniformly for every alphabet size. This shows that distributions can be estimated nearly as well as when they are essentially known in advance, and nearly as well as when they are completely known in advance but need to be estimated via a natural estimator. We also provide an estimator that runs in linear time and incurs competitive regret of $\tilde{\mathcal{O}}(\min(k/n,1/\sqrt n))$, and show that for natural estimators this competitive regret is inevitable. We also demonstrate the effectiveness of competitive estimators using simulations.
[ { "created": "Fri, 27 Mar 2015 01:41:48 GMT", "version": "v1" } ]
2015-03-30
[ [ "Orlitsky", "Alon", "" ], [ "Suresh", "Ananda Theertha", "" ] ]
Estimating an unknown distribution from its samples is a fundamental problem in statistics. The common, min-max, formulation of this goal considers the performance of the best estimator over all distributions in a class. It shows that with $n$ samples, distributions over $k$ symbols can be learned to a KL divergence that decreases to zero with the sample size $n$, but grows unboundedly with the alphabet size $k$. Min-max performance can be viewed as regret relative to an oracle that knows the underlying distribution. We consider two natural and modest limits on the oracle's power. One where it knows the underlying distribution only up to symbol permutations, and the other where it knows the exact distribution but is restricted to use natural estimators that assign the same probability to symbols that appeared equally many times in the sample. We show that in both cases the competitive regret reduces to $\min(k/n,\tilde{\mathcal{O}}(1/\sqrt n))$, a quantity upper bounded uniformly for every alphabet size. This shows that distributions can be estimated nearly as well as when they are essentially known in advance, and nearly as well as when they are completely known in advance but need to be estimated via a natural estimator. We also provide an estimator that runs in linear time and incurs competitive regret of $\tilde{\mathcal{O}}(\min(k/n,1/\sqrt n))$, and show that for natural estimators this competitive regret is inevitable. We also demonstrate the effectiveness of competitive estimators using simulations.
1612.05778
Ning Xie
Changbo Chen, Svyatoslav Covanov, Farnam Mansouri, Marc Moreno Maza, Ning Xie and Yuzhen Xie
Parallel Integer Polynomial Multiplication
null
null
null
null
cs.SC cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new algorithm for multiplying dense polynomials with integer coefficients in a parallel fashion, targeting multi-core processor architectures. Complexity estimates and experimental comparisons demonstrate the advantages of this new approach.
[ { "created": "Sat, 17 Dec 2016 14:54:52 GMT", "version": "v1" } ]
2016-12-20
[ [ "Chen", "Changbo", "" ], [ "Covanov", "Svyatoslav", "" ], [ "Mansouri", "Farnam", "" ], [ "Maza", "Marc Moreno", "" ], [ "Xie", "Ning", "" ], [ "Xie", "Yuzhen", "" ] ]
We propose a new algorithm for multiplying dense polynomials with integer coefficients in a parallel fashion, targeting multi-core processor architectures. Complexity estimates and experimental comparisons demonstrate the advantages of this new approach.
1709.01424
Maedeh Aghaei
Maedeh Aghaei, Mariella Dimiccoli, Cristian Canton Ferrer, Petia Radeva
Towards social pattern characterization in egocentric photo-streams
42 pages, 14 figures. Submitted to Elsevier, Computer Vision and Image Understanding (Under Review)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the increasingly popular trend of social interaction analysis in egocentric vision, this manuscript presents a comprehensive study for automatic social pattern characterization of a wearable photo-camera user, by relying on the visual analysis of egocentric photo-streams. The proposed framework consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at the event level, where each potential social event is modeled as a multi-dimensional time-series, whose dimensions correspond to a set of relevant features for each task, and an LSTM is employed to classify the time-series. The last step of the framework is to characterize social patterns, which is essentially to infer the diversity and frequency of the social relations of the user through discovery of recurrences of the same people across the whole set of social events of the user. Experimental evaluation over a dataset acquired by 9 users demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
[ { "created": "Tue, 5 Sep 2017 14:50:00 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2017 16:02:18 GMT", "version": "v2" }, { "created": "Tue, 9 Jan 2018 11:14:53 GMT", "version": "v3" } ]
2018-01-10
[ [ "Aghaei", "Maedeh", "" ], [ "Dimiccoli", "Mariella", "" ], [ "Ferrer", "Cristian Canton", "" ], [ "Radeva", "Petia", "" ] ]
Following the increasingly popular trend of social interaction analysis in egocentric vision, this manuscript presents a comprehensive study for automatic social pattern characterization of a wearable photo-camera user, by relying on the visual analysis of egocentric photo-streams. The proposed framework consists of three major steps. The first step is to detect social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at the event level, where each potential social event is modeled as a multi-dimensional time-series, whose dimensions correspond to a set of relevant features for each task, and an LSTM is employed to classify the time-series. The last step of the framework is to characterize social patterns, which is essentially to infer the diversity and frequency of the social relations of the user through discovery of recurrences of the same people across the whole set of social events of the user. Experimental evaluation over a dataset acquired by 9 users demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
1802.10217
Rachit Dubey
Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, and Alexei A. Efros
Investigating Human Priors for Playing Video Games
ICML 2018
ICML 2018
null
null
cs.AI cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors on human performance. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. Videos and the game manipulations are available at https://rach0012.github.io/humanRL_website/
[ { "created": "Wed, 28 Feb 2018 00:26:44 GMT", "version": "v1" }, { "created": "Mon, 4 Jun 2018 20:32:21 GMT", "version": "v2" }, { "created": "Wed, 25 Jul 2018 02:33:41 GMT", "version": "v3" } ]
2018-07-26
[ [ "Dubey", "Rachit", "" ], [ "Agrawal", "Pulkit", "" ], [ "Pathak", "Deepak", "" ], [ "Griffiths", "Thomas L.", "" ], [ "Efros", "Alexei A.", "" ] ]
What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors on human performance. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. Videos and the game manipulations are available at https://rach0012.github.io/humanRL_website/
1909.13332
Natalia Tomashenko
Natalia Tomashenko, Antoine Caubriere, Yannick Esteve, Antoine Laurent, Emmanuel Morin
Recent Advances in End-to-End Spoken Language Understanding
null
Statistical Language and Speech Processing. SLSP 2019
10.1007/978-3-030-31372-2_4
null
cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work investigates spoken language understanding (SLU) systems in the scenario when the semantic information is extracted directly from the speech signal by means of a single end-to-end neural network model. Two SLU tasks are considered: named entity recognition (NER) and semantic slot filling (SF). For these tasks, in order to improve the model performance, we explore various techniques including speaker adaptation, a modification of the connectionist temporal classification (CTC) training criterion, and sequential pretraining.
[ { "created": "Sun, 29 Sep 2019 18:01:00 GMT", "version": "v1" } ]
2019-10-29
[ [ "Tomashenko", "Natalia", "" ], [ "Caubriere", "Antoine", "" ], [ "Esteve", "Yannick", "" ], [ "Laurent", "Antoine", "" ], [ "Morin", "Emmanuel", "" ] ]
This work investigates spoken language understanding (SLU) systems in the scenario when the semantic information is extracted directly from the speech signal by means of a single end-to-end neural network model. Two SLU tasks are considered: named entity recognition (NER) and semantic slot filling (SF). For these tasks, in order to improve the model performance, we explore various techniques including speaker adaptation, a modification of the connectionist temporal classification (CTC) training criterion, and sequential pretraining.
1905.01102
Spyros Gidaris
Spyros Gidaris, Nikos Komodakis
Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning
Oral presentation at CVPR 2019. The code and models of our paper will be published on: https://github.com/gidariss/wDAE_GNN_FewShot
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an initial recognition model already trained on a set of base classes, the goal of this work is to develop a meta-model for few-shot learning. The meta-model, given as input some novel classes with few training examples per class, must properly adapt the existing recognition model into a new model that can correctly classify in a unified way both the novel and the base classes. To accomplish this goal it must learn to output the appropriate classification weight vectors for those two types of classes. To build our meta-model we make use of two main innovations: we propose the use of a Denoising Autoencoder network (DAE) that (during training) takes as input a set of classification weights corrupted with Gaussian noise and learns to reconstruct the target-discriminative classification weights. In this case, the injected noise on the classification weights serves the role of regularizing the weight generating meta-model. Furthermore, in order to capture the co-dependencies between different classes in a given task instance of our meta-model, we propose to implement the DAE model as a Graph Neural Network (GNN). In order to verify the efficacy of our approach, we extensively evaluate it on ImageNet based few-shot benchmarks and we report strong results that surpass prior approaches. The code and models of our paper will be published on: https://github.com/gidariss/wDAE_GNN_FewShot
[ { "created": "Fri, 3 May 2019 10:11:54 GMT", "version": "v1" } ]
2019-05-06
[ [ "Gidaris", "Spyros", "" ], [ "Komodakis", "Nikos", "" ] ]
Given an initial recognition model already trained on a set of base classes, the goal of this work is to develop a meta-model for few-shot learning. The meta-model, given as input some novel classes with few training examples per class, must properly adapt the existing recognition model into a new model that can correctly classify in a unified way both the novel and the base classes. To accomplish this goal it must learn to output the appropriate classification weight vectors for those two types of classes. To build our meta-model we make use of two main innovations: we propose the use of a Denoising Autoencoder network (DAE) that (during training) takes as input a set of classification weights corrupted with Gaussian noise and learns to reconstruct the target-discriminative classification weights. In this case, the injected noise on the classification weights serves the role of regularizing the weight generating meta-model. Furthermore, in order to capture the co-dependencies between different classes in a given task instance of our meta-model, we propose to implement the DAE model as a Graph Neural Network (GNN). In order to verify the efficacy of our approach, we extensively evaluate it on ImageNet based few-shot benchmarks and we report strong results that surpass prior approaches. The code and models of our paper will be published on: https://github.com/gidariss/wDAE_GNN_FewShot
2309.14634
Yong-Hao Hu
Yong-Hao Hu, Kenichiro Ito, Ayumi Igarashi
Synchronizing Full-Body Avatar Transforms with WebRTC DataChannel on Educational Metaverse
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Full-body avatars are suggested to be beneficial for communication in virtual environments, and consistency between users' voices and gestures is considered essential to ensure communication quality. This paper proposes extending the functionality of a web-based VR platform to support the use of full-body avatars and delegating avatar transform synchronization to the WebRTC DataChannel to enhance the consistency between voices and gestures. Finally, we conducted a preliminary validation to confirm the consistency.
[ { "created": "Tue, 26 Sep 2023 03:28:09 GMT", "version": "v1" } ]
2023-09-27
[ [ "Hu", "Yong-Hao", "" ], [ "Ito", "Kenichiro", "" ], [ "Igarashi", "Ayumi", "" ] ]
Full-body avatars are suggested to be beneficial for communication in virtual environments, and consistency between users' voices and gestures is considered essential to ensure communication quality. This paper proposes extending the functionality of a web-based VR platform to support the use of full-body avatars and delegating avatar transform synchronization to the WebRTC DataChannel to enhance the consistency between voices and gestures. Finally, we conducted a preliminary validation to confirm the consistency.
2004.12699
Lucas Carvalho Cordeiro
Mikhail R. Gadelha, Lucas C. Cordeiro and Denis A. Nicole
An Efficient Floating-Point Bit-Blasting API for Verifying C Programs
20 pages
null
null
null
cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new SMT bit-blasting API for floating-points and evaluate it using different off-the-shelf SMT solvers during the verification of several C programs. The new floating-point API is part of the SMT backend in ESBMC, a state-of-the-art bounded model checker for C and C++. For the evaluation, we compared our floating-point API against the native floating-point APIs in Z3 and MathSAT. We show that Boolector, when using our floating-point API, outperforms the solvers with native support for floating-points, correctly verifying more programs in less time. Experimental results also show that our floating-point API implemented in ESBMC is on par with other state-of-the-art software verifiers. Furthermore, when verifying programs with floating-point arithmetic, our new floating-point API produced no wrong answers.
[ { "created": "Mon, 27 Apr 2020 10:40:04 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 09:13:04 GMT", "version": "v2" } ]
2020-04-30
[ [ "Gadelha", "Mikhail R.", "" ], [ "Cordeiro", "Lucas C.", "" ], [ "Nicole", "Denis A.", "" ] ]
We describe a new SMT bit-blasting API for floating-points and evaluate it using different off-the-shelf SMT solvers during the verification of several C programs. The new floating-point API is part of the SMT backend in ESBMC, a state-of-the-art bounded model checker for C and C++. For the evaluation, we compared our floating-point API against the native floating-point APIs in Z3 and MathSAT. We show that Boolector, when using our floating-point API, outperforms the solvers with native support for floating-points, correctly verifying more programs in less time. Experimental results also show that our floating-point API implemented in ESBMC is on par with other state-of-the-art software verifiers. Furthermore, when verifying programs with floating-point arithmetic, our new floating-point API produced no wrong answers.
1803.06563
Spyridon Samothrakis
Spyridon Samothrakis
Viewpoint: Artificial Intelligence and Labour
null
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The welfare of modern societies has been intrinsically linked to wage labour. With some exceptions, the modern human has to sell her labour-power to be able to reproduce biologically and socially. Thus, a lingering fear of technological unemployment features predominantly as a theme among Artificial Intelligence researchers. In this short paper we show that, if past trends are anything to go by, this fear is irrational. On the contrary, we argue that the main problem humanity will be facing is the normalisation of extremely long working hours.
[ { "created": "Sat, 17 Mar 2018 20:08:49 GMT", "version": "v1" } ]
2018-03-20
[ [ "Samothrakis", "Spyridon", "" ] ]
The welfare of modern societies has been intrinsically linked to wage labour. With some exceptions, the modern human has to sell her labour-power to be able to reproduce biologically and socially. Thus, a lingering fear of technological unemployment features predominantly as a theme among Artificial Intelligence researchers. In this short paper we show that, if past trends are anything to go by, this fear is irrational. On the contrary, we argue that the main problem humanity will be facing is the normalisation of extremely long working hours.
2104.01276
Stephen MacDonell
Amjed Tahir, Sherlock A. Licorish and Stephen G. MacDonell
Feature Evolution and Reuse -- An Exploratory Study of Eclipse
Conference paper, 6 pages, 5 figures, 2 tables
Proceedings of the 24th Asia-Pacific Software Engineering Conference (APSEC2017). Nanjing, China, IEEE, pp.582-587
10.1109/APSEC.2017.69
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the purported ways to increase productivity and reduce development time is to reuse existing features and modules. If reuse is adopted, logically then, it will have a direct impact on a system's evolution. However, the evidence in the literature is not clear on the extent to which reuse is practiced in real-world projects, nor how it is practiced. In this paper we report the results of an investigation of reuse and evolution of software features in one of the largest open-source ecosystems - Eclipse. Eclipse provides a leading example of how a system can grow dramatically in size and number of features while maintaining its quality. Our results demonstrate the extent of feature reuse and evolution and also patterns of reuse across ten different Eclipse releases (from Europa to Neon).
[ { "created": "Fri, 2 Apr 2021 23:50:53 GMT", "version": "v1" } ]
2021-04-06
[ [ "Tahir", "Amjed", "" ], [ "Licorish", "Sherlock A.", "" ], [ "MacDonell", "Stephen G.", "" ] ]
One of the purported ways to increase productivity and reduce development time is to reuse existing features and modules. If reuse is adopted, logically then, it will have a direct impact on a system's evolution. However, the evidence in the literature is not clear on the extent to which reuse is practiced in real-world projects, nor how it is practiced. In this paper we report the results of an investigation of reuse and evolution of software features in one of the largest open-source ecosystems - Eclipse. Eclipse provides a leading example of how a system can grow dramatically in size and number of features while maintaining its quality. Our results demonstrate the extent of feature reuse and evolution and also patterns of reuse across ten different Eclipse releases (from Europa to Neon).
1504.00953
Constantinos Psomas
Constantinos Psomas, Ioannis Krikidis
Outage Analysis of Full-Duplex Architectures in Cellular Networks
to appear in Proc. IEEE VTC 2015 Spring, Glasgow
null
10.1109/VTCSpring.2015.7145989
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The implementation of full-duplex (FD) radio in wireless communications is a potential approach for achieving higher spectral efficiency. A possible application is its employment in the next generation of cellular networks. However, the performance of large-scale FD multiuser networks is an area mostly unexplored. Most of the related work focuses on the performance analysis of small-scale networks or on loop interference cancellation schemes. In this paper, we derive the outage probability performance of large-scale FD cellular networks in the context of two architectures: two-node and three-node. We show how the performance is affected with respect to the model's parameters and provide a comparison between the two architectures.
[ { "created": "Fri, 3 Apr 2015 22:32:36 GMT", "version": "v1" } ]
2016-11-17
[ [ "Psomas", "Constantinos", "" ], [ "Krikidis", "Ioannis", "" ] ]
The implementation of full-duplex (FD) radio in wireless communications is a potential approach for achieving higher spectral efficiency. A possible application is its employment in the next generation of cellular networks. However, the performance of large-scale FD multiuser networks is an area mostly unexplored. Most of the related work focuses on the performance analysis of small-scale networks or on loop interference cancellation schemes. In this paper, we derive the outage probability performance of large-scale FD cellular networks in the context of two architectures: two-node and three-node. We show how the performance is affected with respect to the model's parameters and provide a comparison between the two architectures.
2206.03997
Benjamin Lion
Benjamin Lion, Farhad Arbab, Carolyn Talcott
A Rewriting Framework for Interacting Cyber-Physical Agents
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The analysis of cyber-physical systems (CPS) is challenging due to the large state space and the continuous changes occurring in their constituent parts. Design practices favor modularity to help reduce this complexity. In a previous work, we proposed a discrete semantic model for CPS that captures both cyber and physical aspects as streams of discrete observations, which ultimately form the behavior of a component. This semantic model is denotational and compositional, where each composition operator algebraically models an interaction between a pair of components. In this paper, we propose a specification of components as rewrite systems. The specification is operational and executable, and we study conditions for its semantics as components to be compositional. We demonstrate our framework by modeling a coordination of robots moving on a shared field. We show that our system of robots can be coordinated by a protocol in order to exhibit a desired emerging behavior. We use an implementation of our framework in Maude to give practical results.
[ { "created": "Wed, 8 Jun 2022 16:21:03 GMT", "version": "v1" }, { "created": "Tue, 2 Aug 2022 07:23:30 GMT", "version": "v2" } ]
2022-08-03
[ [ "Lion", "Benjamin", "" ], [ "Arbab", "Farhad", "" ], [ "Talcott", "Carolyn", "" ] ]
The analysis of cyber-physical systems (CPS) is challenging due to the large state space and the continuous changes occurring in their constituent parts. Design practices favor modularity to help reduce this complexity. In a previous work, we proposed a discrete semantic model for CPS that captures both cyber and physical aspects as streams of discrete observations, which ultimately form the behavior of a component. This semantic model is denotational and compositional, where each composition operator algebraically models an interaction between a pair of components. In this paper, we propose a specification of components as rewrite systems. The specification is operational and executable, and we study conditions for its semantics as components to be compositional. We demonstrate our framework by modeling a coordination of robots moving on a shared field. We show that our system of robots can be coordinated by a protocol in order to exhibit a desired emerging behavior. We use an implementation of our framework in Maude to give practical results.
2312.08697
Guoqing Chao
Guoqing Chao, Yi Jiang, Dianhui Chu
Incomplete Contrastive Multi-View Clustering with High-Confidence Guiding
11 pages; accepted by AAAI 2024
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incomplete multi-view clustering has become an important research problem, since multi-view data with missing values are ubiquitous in real-world applications. Although great efforts have been made on incomplete multi-view clustering, some challenges remain: 1) most existing methods do not make full use of multi-view information to deal with missing values; 2) most methods employ only the consistent information within multi-view data and ignore the complementary information; 3) in existing incomplete multi-view clustering methods, incomplete multi-view representation learning and clustering are treated as independent processes, which leads to a performance gap. In this work, we propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC). First, we propose multi-view consistency relation transfer plus a graph convolutional network to tackle the missing values problem. Second, instance-level attention fusion and high-confidence guiding are proposed to exploit the complementary information, while instance-level contrastive learning on the latent representation is designed to employ the consistent information. Third, an end-to-end framework is proposed to integrate multi-view missing values handling, multi-view representation learning, and clustering assignment for joint optimization. Experiments compared with state-of-the-art approaches demonstrate the effectiveness and superiority of our method. Our code is publicly available at https://github.com/liunian-Jay/ICMVC.
[ { "created": "Thu, 14 Dec 2023 07:28:41 GMT", "version": "v1" } ]
2023-12-15
[ [ "Chao", "Guoqing", "" ], [ "Jiang", "Yi", "" ], [ "Chu", "Dianhui", "" ] ]
Incomplete multi-view clustering has become an important research problem, since multi-view data with missing values are ubiquitous in real-world applications. Although great efforts have been made on incomplete multi-view clustering, some challenges remain: 1) most existing methods do not make full use of multi-view information to deal with missing values; 2) most methods employ only the consistent information within multi-view data and ignore the complementary information; 3) in existing incomplete multi-view clustering methods, incomplete multi-view representation learning and clustering are treated as independent processes, which leads to a performance gap. In this work, we propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC). First, we propose multi-view consistency relation transfer plus a graph convolutional network to tackle the missing values problem. Second, instance-level attention fusion and high-confidence guiding are proposed to exploit the complementary information, while instance-level contrastive learning on the latent representation is designed to employ the consistent information. Third, an end-to-end framework is proposed to integrate multi-view missing values handling, multi-view representation learning, and clustering assignment for joint optimization. Experiments compared with state-of-the-art approaches demonstrate the effectiveness and superiority of our method. Our code is publicly available at https://github.com/liunian-Jay/ICMVC.
2202.08907
Holden Lee
Frederic Koehler and Holden Lee and Andrej Risteski
Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods
43 pages
null
null
null
cs.DS cs.LG math.PR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider Ising models on the hypercube with a general interaction matrix $J$, and give a polynomial time sampling algorithm when all but $O(1)$ eigenvalues of $J$ lie in an interval of length one, a situation which occurs in many models of interest. This was previously known for the Glauber dynamics when *all* eigenvalues fit in an interval of length one; however, a single outlier can force the Glauber dynamics to mix torpidly. Our general result implies the first polynomial time sampling algorithms for low-rank Ising models such as Hopfield networks with a fixed number of patterns and Bayesian clustering models with low-dimensional contexts, and greatly improves the polynomial time sampling regime for the antiferromagnetic/ferromagnetic Ising model with inconsistent field on expander graphs. It also improves on previous approximation algorithm results based on the naive mean-field approximation in variational methods and statistical physics. Our approach is based on a new fusion of ideas from the MCMC and variational inference worlds. As part of our algorithm, we define a new nonconvex variational problem which allows us to sample from an exponential reweighting of a distribution by a negative definite quadratic form, and show how to make this procedure provably efficient using stochastic gradient descent. On top of this, we construct a new simulated tempering chain (on an extended state space arising from the Hubbard-Stratonovich transform) which overcomes the obstacle posed by large positive eigenvalues, and combine it with the SGD-based sampler to solve the full problem.
[ { "created": "Thu, 17 Feb 2022 21:43:50 GMT", "version": "v1" } ]
2022-02-21
[ [ "Koehler", "Frederic", "" ], [ "Lee", "Holden", "" ], [ "Risteski", "Andrej", "" ] ]
We consider Ising models on the hypercube with a general interaction matrix $J$, and give a polynomial time sampling algorithm when all but $O(1)$ eigenvalues of $J$ lie in an interval of length one, a situation which occurs in many models of interest. This was previously known for the Glauber dynamics when *all* eigenvalues fit in an interval of length one; however, a single outlier can force the Glauber dynamics to mix torpidly. Our general result implies the first polynomial time sampling algorithms for low-rank Ising models such as Hopfield networks with a fixed number of patterns and Bayesian clustering models with low-dimensional contexts, and greatly improves the polynomial time sampling regime for the antiferromagnetic/ferromagnetic Ising model with inconsistent field on expander graphs. It also improves on previous approximation algorithm results based on the naive mean-field approximation in variational methods and statistical physics. Our approach is based on a new fusion of ideas from the MCMC and variational inference worlds. As part of our algorithm, we define a new nonconvex variational problem which allows us to sample from an exponential reweighting of a distribution by a negative definite quadratic form, and show how to make this procedure provably efficient using stochastic gradient descent. On top of this, we construct a new simulated tempering chain (on an extended state space arising from the Hubbard-Stratonovich transform) which overcomes the obstacle posed by large positive eigenvalues, and combine it with the SGD-based sampler to solve the full problem.
2201.12393
Matej Lieskovsk\'y
Matej Lieskovsk\'y (Computer Science Institute of Charles University, Faculty of Mathematics and Physics, Prague, Czechia. Partially supported by GA \v{C}R project 19-27871X.)
Better Algorithms for Online Bin Stretching via Computer Search
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by-sa/4.0/
We present an algorithm for computing upper bounds for the Online Bin Stretching Problem with a small number of bins, together with the resulting upper bounds for 4, 5, and 6 bins. This both demonstrates the possibility of using computer search to obtain upper bounds on a fundamentally real-valued online problem and improves upon the best bounds known so far, some of which have remained unchanged since 2001.
[ { "created": "Fri, 28 Jan 2022 19:50:34 GMT", "version": "v1" } ]
2022-02-01
[ [ "Lieskovský", "Matej", "", "Computer Science Institute of Charles University,\n Faculty of Mathematics and Physics, Prague, Czechia. Partially supported by\n GA ČR project 19-27871X." ] ]
We present an algorithm for computing upper bounds for the Online Bin Stretching Problem with a small number of bins, together with the resulting upper bounds for 4, 5, and 6 bins. This both demonstrates the possibility of using computer search to obtain upper bounds on a fundamentally real-valued online problem and improves upon the best bounds known so far, some of which have remained unchanged since 2001.
1604.05657
Rafael Rodrigues da Silva
Rafael Rodrigues da Silva, Bo Wu and Hai Lin
Formal Design of Robot Integrated Task and Motion Planning
Submitted to the 55th IEEE Conference on Decision and Control (CDC16)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrated Task and Motion Planning (ITMP) for mobile robots in dynamic environments with moving obstacles is a challenging research question that has attracted increasing attention recently. Most existing methods are either restricted to static environments or lack performance guarantees. This motivates us to investigate the ITMP problem using formal methods and to propose a bottom-up compositional design approach called CoSMoP (Composition of Safe Motion Primitives). Our basic idea is to synthesize a global motion plan by composing simple local moves and actions, and to achieve its performance guarantee through modular and incremental verification. The design consists of two steps. First, basic motion primitives are designed and verified locally. Then, a global motion path is built upon these certified motion primitives by concatenating them together. In particular, we model the motion primitives as hybrid automata and verify their safety by formulating them in Differential Dynamic Logic (d$\mathcal{L}$). Furthermore, these proven-safe motion primitives are composed based on an encoding to Satisfiability Modulo Theories (SMT) that takes the geometric constraints into account. Since d$\mathcal{L}$ allows compositional verification, the sequential composition of the safe motion primitives also preserves safety properties. Therefore, CoSMoP generates correct plans for given task specifications that are formally proven safe even with moving obstacles. Illustrative examples are presented to show the effectiveness of the method.
[ { "created": "Tue, 19 Apr 2016 17:25:02 GMT", "version": "v1" }, { "created": "Thu, 15 Dec 2016 17:02:10 GMT", "version": "v2" } ]
2016-12-16
[ [ "da Silva", "Rafael Rodrigues", "" ], [ "Wu", "Bo", "" ], [ "Lin", "Hai", "" ] ]
Integrated Task and Motion Planning (ITMP) for mobile robots in dynamic environments with moving obstacles is a challenging research question that has attracted increasing attention recently. Most existing methods are either restricted to static environments or lack performance guarantees. This motivates us to investigate the ITMP problem using formal methods and to propose a bottom-up compositional design approach called CoSMoP (Composition of Safe Motion Primitives). Our basic idea is to synthesize a global motion plan by composing simple local moves and actions, and to achieve its performance guarantee through modular and incremental verification. The design consists of two steps. First, basic motion primitives are designed and verified locally. Then, a global motion path is built upon these certified motion primitives by concatenating them together. In particular, we model the motion primitives as hybrid automata and verify their safety by formulating them in Differential Dynamic Logic (d$\mathcal{L}$). Furthermore, these proven-safe motion primitives are composed based on an encoding to Satisfiability Modulo Theories (SMT) that takes the geometric constraints into account. Since d$\mathcal{L}$ allows compositional verification, the sequential composition of the safe motion primitives also preserves safety properties. Therefore, CoSMoP generates correct plans for given task specifications that are formally proven safe even with moving obstacles. Illustrative examples are presented to show the effectiveness of the method.
2203.15108
Shashi Narayan
Shashi Narayan, Gon\c{c}alo Sim\~oes, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins and Mirella Lapata
A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation
21 pages, ACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality compared to previous stochastic decoding strategies. It builds on recently proposed plan-based neural generation models (Narayan et al., 2021) that are trained to first create a composition of the output and then generate by conditioning on it and the input. Our approach avoids text degeneration by first sampling a composition in the form of an entity chain and then using beam search to generate the best possible text grounded to this entity chain. Experiments on summarization (CNN/DailyMail and XSum) and question generation (SQuAD), using existing and newly proposed automatic metrics together with human-based evaluation, demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse meaningful outputs.
[ { "created": "Mon, 28 Mar 2022 21:24:03 GMT", "version": "v1" } ]
2022-03-30
[ [ "Narayan", "Shashi", "" ], [ "Simões", "Gonçalo", "" ], [ "Zhao", "Yao", "" ], [ "Maynez", "Joshua", "" ], [ "Das", "Dipanjan", "" ], [ "Collins", "Michael", "" ], [ "Lapata", "Mirella", "" ] ]
We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality compared to previous stochastic decoding strategies. It builds on recently proposed plan-based neural generation models (Narayan et al., 2021) that are trained to first create a composition of the output and then generate by conditioning on it and the input. Our approach avoids text degeneration by first sampling a composition in the form of an entity chain and then using beam search to generate the best possible text grounded to this entity chain. Experiments on summarization (CNN/DailyMail and XSum) and question generation (SQuAD), using existing and newly proposed automatic metrics together with human-based evaluation, demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse meaningful outputs.
1101.4632
Serguei Mokhov
Serguei A. Mokhov and Marc-Andr\'e Laverdi\`ere and Ali Benssam and Djamel Benredjem
A Secure Web-Based File Exchange Server: Software Requirements Specification Document
13 pages, 3 figures; a December 2005 report
null
null
null
cs.CR
http://creativecommons.org/licenses/by/3.0/
This document presents a brief software specification of a secure file exchange system prototype involving mutual authentication of users via their browser and the application server with PKI-based certificates as credentials, the use of LDAP for credential management, and authentication between the application and database servers to maintain a high level of trust among all parties.
[ { "created": "Mon, 24 Jan 2011 19:49:14 GMT", "version": "v1" } ]
2011-01-25
[ [ "Mokhov", "Serguei A.", "" ], [ "Laverdière", "Marc-André", "" ], [ "Benssam", "Ali", "" ], [ "Benredjem", "Djamel", "" ] ]
This document presents a brief software specification of a secure file exchange system prototype involving mutual authentication of users via their browser and the application server with PKI-based certificates as credentials, the use of LDAP for credential management, and authentication between the application and database servers to maintain a high level of trust among all parties.
1401.6126
Andrew Gleibman Ph.D.
Andrew Gleibman
Delegating Custom Object Detection Tasks to a Universal Classification System
3 pages, 2 figures, 6 refs. arXiv admin note: substantial text overlap with arXiv:1310.7170
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the concept of a multipurpose object detection system, recently introduced in our previous work, is clarified. The business aspect of this method is the transformation of a classifier into an object detector/locator via an image grid. This is a universal framework for locating objects of interest through classification. The framework standardizes and simplifies the implementation of custom systems by requiring only a custom analysis of the classification results on the image grid.
[ { "created": "Thu, 19 Dec 2013 08:17:24 GMT", "version": "v1" } ]
2014-01-24
[ [ "Gleibman", "Andrew", "" ] ]
In this paper, the concept of a multipurpose object detection system, recently introduced in our previous work, is clarified. The business aspect of this method is the transformation of a classifier into an object detector/locator via an image grid. This is a universal framework for locating objects of interest through classification. The framework standardizes and simplifies the implementation of custom systems by requiring only a custom analysis of the classification results on the image grid.
2305.13783
Guoming Huang
Shuqiao Huang, Xiru Wu, Guoming Huang
Deep Reinforcement Learning-based Multi-objective Path Planning on the Off-road Terrain Environment for Ground Vehicles
9 pages, 8 figures
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the vastly different energy consumption between up-slope and down-slope travel, the shortest path on a complex off-road terrain environment (2.5D map) is not always the path with the least energy consumption. For any energy-sensitive vehicle, realizing a good trade-off between distance and energy consumption in 2.5D path planning is highly meaningful. In this paper, we propose a deep reinforcement learning-based 2.5D multi-objective path planning method (DMOP). The DMOP can efficiently find the desired path in three steps: (1) transform the high-resolution 2.5D map into a small-size map; (2) use a trained deep Q network (DQN) to find the desired path on the small-size map; (3) build the planned path back onto the original high-resolution map using a path-enhanced method. In addition, a hybrid exploration strategy and reward-shaping theory are applied to train the DQN. The reward function is constructed with information about terrain, distance, and border. Simulation results show that the proposed method can complete the multi-objective 2.5D path planning task with remarkably high efficiency. With similar planned paths, the proposed method is more than 100 times faster than the A* method and 30 times faster than the H3DM method. Simulation also proves that the method has powerful reasoning capability, enabling it to perform arbitrary untrained planning tasks.
[ { "created": "Tue, 23 May 2023 07:53:35 GMT", "version": "v1" }, { "created": "Wed, 12 Jul 2023 11:13:20 GMT", "version": "v2" } ]
2023-07-13
[ [ "Huang", "Shuqiao", "" ], [ "Wu", "Xiru", "" ], [ "Huang", "Guoming", "" ] ]
Due to the vastly different energy consumption between up-slope and down-slope travel, the shortest path on a complex off-road terrain environment (2.5D map) is not always the path with the least energy consumption. For any energy-sensitive vehicle, realizing a good trade-off between distance and energy consumption in 2.5D path planning is highly meaningful. In this paper, we propose a deep reinforcement learning-based 2.5D multi-objective path planning method (DMOP). The DMOP can efficiently find the desired path in three steps: (1) transform the high-resolution 2.5D map into a small-size map; (2) use a trained deep Q network (DQN) to find the desired path on the small-size map; (3) build the planned path back onto the original high-resolution map using a path-enhanced method. In addition, a hybrid exploration strategy and reward-shaping theory are applied to train the DQN. The reward function is constructed with information about terrain, distance, and border. Simulation results show that the proposed method can complete the multi-objective 2.5D path planning task with remarkably high efficiency. With similar planned paths, the proposed method is more than 100 times faster than the A* method and 30 times faster than the H3DM method. Simulation also proves that the method has powerful reasoning capability, enabling it to perform arbitrary untrained planning tasks.
2112.12966
Manmeet Singh
Manmeet Singh, Bipin Kumar, Rajib Chattopadhyay, K Amarjyothi, Anup K Sutar, Sukanta Roy, Suryachandra A Rao, Ravi S. Nanjundiah
Machine learning for Earth System Science (ESS): A survey, status and future directions for South Asia
null
null
null
null
cs.LG physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
This survey focuses on current problems in Earth system science where machine learning algorithms can be applied. It provides an overview of previous work, ongoing work at the Ministry of Earth Sciences, Government of India, and future applications of ML algorithms to some significant Earth science problems. We provide a comparison of previous work with this survey, a mind map of multidimensional areas related to machine learning, and a Gartner hype cycle for machine learning in Earth system science (ESS). We mainly focus on the critical components of the Earth sciences, including the atmosphere, ocean, seismology, and biosphere, and cover AI/ML applications to statistical downscaling and forecasting problems.
[ { "created": "Fri, 24 Dec 2021 06:44:55 GMT", "version": "v1" } ]
2021-12-28
[ [ "Singh", "Manmeet", "" ], [ "Kumar", "Bipin", "" ], [ "Chattopadhyay", "Rajib", "" ], [ "Amarjyothi", "K", "" ], [ "Sutar", "Anup K", "" ], [ "Roy", "Sukanta", "" ], [ "Rao", "Suryachandra A", "" ], [ "Nanjundiah", "Ravi S.", "" ] ]
This survey focuses on current problems in Earth system science where machine learning algorithms can be applied. It provides an overview of previous work, ongoing work at the Ministry of Earth Sciences, Government of India, and future applications of ML algorithms to some significant Earth science problems. We provide a comparison of previous work with this survey, a mind map of multidimensional areas related to machine learning, and a Gartner hype cycle for machine learning in Earth system science (ESS). We mainly focus on the critical components of the Earth sciences, including the atmosphere, ocean, seismology, and biosphere, and cover AI/ML applications to statistical downscaling and forecasting problems.
1010.0958
Partha Sarathi Mandal
Punit Sharma and Partha Sarathi Mandal
Reconstruction of Aggregation Tree in spite of Faulty Nodes in Wireless Sensor Networks
5 pages; submitted to WCSN 2010
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in wireless sensor networks (WSNs) have led to many new promising applications. However, data communication between nodes consumes a large portion of the total energy of a WSN. Consequently, efficient data aggregation techniques can greatly help to reduce power consumption. Data aggregation has emerged as a basic approach in WSNs to reduce the number of transmissions of sensor nodes over an {\it aggregation tree} and hence minimize the overall power consumption in the network. If a sensor node fails during data aggregation, the aggregation tree is disconnected. WSNs thus rely on in-network aggregation for efficiency, but a single faulty node can severely influence the outcome by contributing an arbitrary partial aggregate value. In this paper, we present a distributed algorithm that reconstructs the aggregation tree from the initial aggregation tree, excluding the faulty sensor node. This is a synchronous model that completes in several rounds. Our proposed scheme can handle multiple faulty nodes as well.
[ { "created": "Tue, 5 Oct 2010 17:54:58 GMT", "version": "v1" } ]
2010-10-06
[ [ "Sharma", "Punit", "" ], [ "Mandal", "Partha Sarathi", "" ] ]
Recent advances in wireless sensor networks (WSNs) have led to many new promising applications. However, data communication between nodes consumes a large portion of the total energy of a WSN. Consequently, efficient data aggregation techniques can greatly help to reduce power consumption. Data aggregation has emerged as a basic approach in WSNs to reduce the number of transmissions of sensor nodes over an {\it aggregation tree} and hence minimize the overall power consumption in the network. If a sensor node fails during data aggregation, the aggregation tree is disconnected. WSNs thus rely on in-network aggregation for efficiency, but a single faulty node can severely influence the outcome by contributing an arbitrary partial aggregate value. In this paper, we present a distributed algorithm that reconstructs the aggregation tree from the initial aggregation tree, excluding the faulty sensor node. This is a synchronous model that completes in several rounds. Our proposed scheme can handle multiple faulty nodes as well.
1803.09730
Brent Schlotfeldt
Brent Schlotfeldt, Vasileios Tzoumas, Dinesh Thakur, George J. Pappas
Resilient Active Information Gathering with Mobile Robots
null
null
null
null
cs.RO cs.MA math.OC stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Applications of safety, security, and rescue in robotics, such as multi-robot target tracking, involve the execution of information acquisition tasks by teams of mobile robots. However, in failure-prone or adversarial environments, robots get attacked, their communication channels get jammed, and their sensors may fail, resulting in the withdrawal of robots from the collective task, and consequently the inability of the remaining active robots to coordinate with each other. As a result, traditional design paradigms become insufficient and, in contrast, resilient designs against system-wide failures and attacks become important. In general, resilient design problems are hard, and even though they often involve objective functions that are monotone or submodular, scalable approximation algorithms for their solution have been hitherto unknown. In this paper, we provide the first algorithm, enabling the following capabilities: minimal communication, i.e., the algorithm is executed by the robots based only on minimal communication between them; system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks and failures; and provable approximation performance, i.e., the algorithm ensures for all monotone (and not necessarily submodular) objective functions a solution that is finitely close to the optimal. We quantify our algorithm's approximation performance using a notion of curvature for monotone set functions. We support our theoretical analyses with simulated and real-world experiments, by considering an active information gathering scenario, namely, multi-robot target tracking.
[ { "created": "Mon, 26 Mar 2018 17:41:05 GMT", "version": "v1" }, { "created": "Thu, 2 Aug 2018 03:32:18 GMT", "version": "v2" }, { "created": "Sun, 2 Sep 2018 14:59:22 GMT", "version": "v3" } ]
2018-09-05
[ [ "Schlotfeldt", "Brent", "" ], [ "Tzoumas", "Vasileios", "" ], [ "Thakur", "Dinesh", "" ], [ "Pappas", "George J.", "" ] ]
Applications of safety, security, and rescue in robotics, such as multi-robot target tracking, involve the execution of information acquisition tasks by teams of mobile robots. However, in failure-prone or adversarial environments, robots get attacked, their communication channels get jammed, and their sensors may fail, resulting in the withdrawal of robots from the collective task, and consequently the inability of the remaining active robots to coordinate with each other. As a result, traditional design paradigms become insufficient and, in contrast, resilient designs against system-wide failures and attacks become important. In general, resilient design problems are hard, and even though they often involve objective functions that are monotone or submodular, scalable approximation algorithms for their solution have been hitherto unknown. In this paper, we provide the first algorithm, enabling the following capabilities: minimal communication, i.e., the algorithm is executed by the robots based only on minimal communication between them; system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks and failures; and provable approximation performance, i.e., the algorithm ensures for all monotone (and not necessarily submodular) objective functions a solution that is finitely close to the optimal. We quantify our algorithm's approximation performance using a notion of curvature for monotone set functions. We support our theoretical analyses with simulated and real-world experiments, by considering an active information gathering scenario, namely, multi-robot target tracking.
2005.02934
Pierre-Alexandre Kamienny Mr
Pierre-Alexandre Kamienny, Matteo Pirotta, Alessandro Lazaric, Thibault Lavril, Nicolas Usunier, Ludovic Denoyer
Learning Adaptive Exploration Strategies in Dynamic Environments Through Informed Policy Regularization
18 pages
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of learning exploration-exploitation strategies that effectively adapt to dynamic environments, where the task may change over time. While RNN-based policies could in principle represent such strategies, in practice their training time is prohibitive and the learning process often converges to poor solutions. In this paper, we consider the case where the agent has access to a description of the task (e.g., a task id or task parameters) at training time, but not at test time. We propose a novel algorithm that regularizes the training of an RNN-based policy using informed policies trained to maximize the reward in each task. This dramatically reduces the sample complexity of training RNN-based policies, without losing their representational power. As a result, our method learns exploration strategies that efficiently balance between gathering information about the unknown and changing task and maximizing the reward over time. We test the performance of our algorithm in a variety of environments where tasks may vary within each episode.
[ { "created": "Wed, 6 May 2020 16:14:48 GMT", "version": "v1" } ]
2020-05-07
[ [ "Kamienny", "Pierre-Alexandre", "" ], [ "Pirotta", "Matteo", "" ], [ "Lazaric", "Alessandro", "" ], [ "Lavril", "Thibault", "" ], [ "Usunier", "Nicolas", "" ], [ "Denoyer", "Ludovic", "" ] ]
We study the problem of learning exploration-exploitation strategies that effectively adapt to dynamic environments, where the task may change over time. While RNN-based policies could in principle represent such strategies, in practice their training time is prohibitive and the learning process often converges to poor solutions. In this paper, we consider the case where the agent has access to a description of the task (e.g., a task id or task parameters) at training time, but not at test time. We propose a novel algorithm that regularizes the training of an RNN-based policy using informed policies trained to maximize the reward in each task. This dramatically reduces the sample complexity of training RNN-based policies, without losing their representational power. As a result, our method learns exploration strategies that efficiently balance between gathering information about the unknown and changing task and maximizing the reward over time. We test the performance of our algorithm in a variety of environments where tasks may vary within each episode.
2305.04827
Jos\'e L. Risco-Mart\'in
J. Ignacio Hidalgo, J. Manuel Colmenar, Jos\'e L. Risco-Mart\'in, Alfredo Cuesta-Infante, Esther Maqueda, Marta Botella and Jos\'e Antonio Rubio
Modeling glycemia in humans by means of Grammatical Evolution
null
Applied Soft Computing, 20, pp. 40-53, 2014
10.1016/j.asoc.2013.11.006
null
cs.NE cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Diabetes mellitus is a disease that affects to hundreds of millions of people worldwide. Maintaining a good control of the disease is critical to avoid severe long-term complications. In recent years, several artificial pancreas systems have been proposed and developed, which are increasingly advanced. However there is still a lot of research to do. One of the main problems that arises in the (semi) automatic control of diabetes, is to get a model explaining how glycemia (glucose levels in blood) varies with insulin, food intakes and other factors, fitting the characteristics of each individual or patient. This paper proposes the application of evolutionary computation techniques to obtain customized models of patients, unlike most of previous approaches which obtain averaged models. The proposal is based on a kind of genetic programming based on grammars known as Grammatical Evolution (GE). The proposal has been tested with in-silico patient data and results are clearly positive. We present also a study of four different grammars and five objective functions. In the test phase the models characterized the glucose with a mean percentage average error of 13.69\%, modeling well also both hyper and hypoglycemic situations.
[ { "created": "Thu, 27 Apr 2023 14:33:52 GMT", "version": "v1" } ]
2023-05-09
[ [ "Hidalgo", "J. Ignacio", "" ], [ "Colmenar", "J. Manuel", "" ], [ "Risco-Martín", "José L.", "" ], [ "Cuesta-Infante", "Alfredo", "" ], [ "Maqueda", "Esther", "" ], [ "Botella", "Marta", "" ], [ "Rubio", "José Antonio", "" ] ]
Diabetes mellitus is a disease that affects hundreds of millions of people worldwide. Maintaining good control of the disease is critical to avoid severe long-term complications. In recent years, several increasingly advanced artificial pancreas systems have been proposed and developed. However, much research remains to be done. One of the main problems that arises in the (semi-)automatic control of diabetes is to obtain a model explaining how glycemia (glucose levels in blood) varies with insulin, food intake, and other factors, fitting the characteristics of each individual patient. This paper proposes the application of evolutionary computation techniques to obtain customized models of patients, unlike most previous approaches, which obtain averaged models. The proposal is based on a kind of grammar-based genetic programming known as Grammatical Evolution (GE). The proposal has been tested with in-silico patient data, and the results are clearly positive. We also present a study of four different grammars and five objective functions. In the test phase, the models characterized the glucose with a mean percentage average error of 13.69\%, also modeling both hyper- and hypoglycemic situations well.
2203.05553
Daniel McKee
Daniel McKee, Zitong Zhan, Bing Shuai, Davide Modolo, Joseph Tighe, Svetlana Lazebnik
Transfer of Representations to Video Label Propagation: Implementation Factors Matter
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies feature representations for dense label propagation in video, with a focus on recently proposed methods that learn video correspondence using self-supervised signals such as colorization or temporal cycle consistency. In the literature, these methods have been evaluated with an array of inconsistent settings, making it difficult to discern trends or compare performance fairly. Starting with a unified formulation of the label propagation algorithm that encompasses most existing variations, we systematically study the impact of important implementation factors in feature extraction and label propagation. Along the way, we report the accuracies of properly tuned supervised and unsupervised still image baselines, which are higher than those found in previous works. We also demonstrate that augmenting video-based correspondence cues with still-image-based ones can further improve performance. We then attempt a fair comparison of recent video-based methods on the DAVIS benchmark, showing convergence of best methods to performance levels near our strong ImageNet baseline, despite the usage of a variety of specialized video-based losses and training particulars. Additional comparisons on JHMDB and VIP datasets confirm the similar performance of current methods. We hope that this study will help to improve evaluation practices and better inform future research directions in temporal correspondence.
[ { "created": "Thu, 10 Mar 2022 18:58:22 GMT", "version": "v1" } ]
2022-03-11
[ [ "McKee", "Daniel", "" ], [ "Zhan", "Zitong", "" ], [ "Shuai", "Bing", "" ], [ "Modolo", "Davide", "" ], [ "Tighe", "Joseph", "" ], [ "Lazebnik", "Svetlana", "" ] ]
This work studies feature representations for dense label propagation in video, with a focus on recently proposed methods that learn video correspondence using self-supervised signals such as colorization or temporal cycle consistency. In the literature, these methods have been evaluated with an array of inconsistent settings, making it difficult to discern trends or compare performance fairly. Starting with a unified formulation of the label propagation algorithm that encompasses most existing variations, we systematically study the impact of important implementation factors in feature extraction and label propagation. Along the way, we report the accuracies of properly tuned supervised and unsupervised still image baselines, which are higher than those found in previous works. We also demonstrate that augmenting video-based correspondence cues with still-image-based ones can further improve performance. We then attempt a fair comparison of recent video-based methods on the DAVIS benchmark, showing convergence of best methods to performance levels near our strong ImageNet baseline, despite the usage of a variety of specialized video-based losses and training particulars. Additional comparisons on JHMDB and VIP datasets confirm the similar performance of current methods. We hope that this study will help to improve evaluation practices and better inform future research directions in temporal correspondence.
2310.00334
Avi Kaplan
Andrej Bogdanov, Krishnamoorthy Dinesh, Yuval Filmus, Yuval Ishai, Avi Kaplan, Sruthi Sekar
Bounded Simultaneous Messages
This version has a modified variant of the succinct subset sum candidate from the original version of this paper
null
null
null
cs.CC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the following question of bounded simultaneous messages (BSM) protocols: Can computationally unbounded Alice and Bob evaluate a function $f(x,y)$ of their inputs by sending polynomial-size messages to a computationally bounded Carol? The special case where $f$ is the mod-2 inner-product function and Carol is bounded to AC$^0$ has been studied in previous works. The general question can be broadly motivated by applications in which distributed computation is more costly than local computation, including secure two-party computation. In this work, we initiate a more systematic study of the BSM model, with different functions $f$ and computational bounds on Carol. In particular, we give evidence against the existence of BSM protocols with polynomial-size Carol for naturally distributed variants of NP-complete languages.
[ { "created": "Sat, 30 Sep 2023 10:42:03 GMT", "version": "v1" }, { "created": "Thu, 5 Oct 2023 08:13:45 GMT", "version": "v2" }, { "created": "Sun, 26 Nov 2023 11:09:30 GMT", "version": "v3" }, { "created": "Thu, 21 Dec 2023 15:30:41 GMT", "version": "v4" } ]
2023-12-25
[ [ "Bogdanov", "Andrej", "" ], [ "Dinesh", "Krishnamoorthy", "" ], [ "Filmus", "Yuval", "" ], [ "Ishai", "Yuval", "" ], [ "Kaplan", "Avi", "" ], [ "Sekar", "Sruthi", "" ] ]
We consider the following question of bounded simultaneous messages (BSM) protocols: Can computationally unbounded Alice and Bob evaluate a function $f(x,y)$ of their inputs by sending polynomial-size messages to a computationally bounded Carol? The special case where $f$ is the mod-2 inner-product function and Carol is bounded to AC$^0$ has been studied in previous works. The general question can be broadly motivated by applications in which distributed computation is more costly than local computation, including secure two-party computation. In this work, we initiate a more systematic study of the BSM model, with different functions $f$ and computational bounds on Carol. In particular, we give evidence against the existence of BSM protocols with polynomial-size Carol for naturally distributed variants of NP-complete languages.
1601.03817
Weining Zhu
Weining Zhu
Entity-oriented spatial coding and discrete topological spatial relations
null
null
null
null
cs.CG cs.DM math.CO math.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on a newly proposed spatial data model - spatial chromatic model (SCM), we developed a spatial coding scheme, called full-coded ordinary arranged chromatic diagram (full-OACD). Full-OACD is a type of spatial tessellation, where space is partitioned into a number of subspaces such as cells, edges, and vertexes. These subspaces are called spatial particles and assigned with unique codes - chromatic codes. The generation, structures, computations, and properties of full-OACD are introduced and relations between chromatic codes and particle spatial topology are investigated, indicating that chromatic codes provide a potential useful and meaningful tool not only for spatial analysis in geographical information science, but also for other relevant disciplines such as discrete mathematics, topology, and computer science.
[ { "created": "Fri, 15 Jan 2016 05:17:40 GMT", "version": "v1" } ]
2016-01-18
[ [ "Zhu", "Weining", "" ] ]
Based on a newly proposed spatial data model, the spatial chromatic model (SCM), we develop a spatial coding scheme called the full-coded ordinary arranged chromatic diagram (full-OACD). A full-OACD is a type of spatial tessellation, in which space is partitioned into a number of subspaces such as cells, edges, and vertices. These subspaces are called spatial particles and are assigned unique codes, called chromatic codes. The generation, structure, computation, and properties of full-OACDs are introduced, and the relations between chromatic codes and particle spatial topology are investigated, indicating that chromatic codes provide a potentially useful and meaningful tool not only for spatial analysis in geographical information science, but also for other relevant disciplines such as discrete mathematics, topology, and computer science.
1203.3055
David Garcia Sanchez
David Garcia Sanchez, Bruno Lacarri\`ere, Marjorie Musy, Bernard Bourges
Application of sensitivity analysis in building energy simulations: combining first and second order elementary effects Methods
null
null
null
null
cs.CE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensitivity analysis plays an important role in the understanding of complex models. It helps to identify influence of input parameters in relation to the outputs. It can be also a tool to understand the behavior of the model and then can help in its development stage. This study aims to analyze and illustrate the potential usefulness of combining first and second-order sensitivity analysis, applied to a building energy model (ESP-r). Through the example of a collective building, a sensitivity analysis is performed using the method of elementary effects (also known as Morris method), including an analysis of interactions between the input parameters (second order analysis). Importance of higher-order analysis to better support the results of first order analysis, highlighted especially in such complex model. Several aspects are tackled to implement efficiently the multi-order sensitivity analysis: interval size of the variables, management of non-linearity, usefulness of various outputs.
[ { "created": "Wed, 14 Mar 2012 11:43:21 GMT", "version": "v1" }, { "created": "Fri, 28 Dec 2012 10:54:19 GMT", "version": "v2" } ]
2013-01-01
[ [ "Sanchez", "David Garcia", "" ], [ "Lacarrière", "Bruno", "" ], [ "Musy", "Marjorie", "" ], [ "Bourges", "Bernard", "" ] ]
Sensitivity analysis plays an important role in the understanding of complex models. It helps to identify the influence of input parameters on the outputs, and it can also be a tool for understanding the behavior of a model during its development stage. This study aims to analyze and illustrate the potential usefulness of combining first- and second-order sensitivity analysis, applied to a building energy model (ESP-r). Through the example of a collective building, a sensitivity analysis is performed using the method of elementary effects (also known as the Morris method), including an analysis of interactions between the input parameters (second-order analysis). The importance of higher-order analysis in supporting the results of first-order analysis is highlighted, which is especially relevant for such a complex model. Several aspects of implementing the multi-order sensitivity analysis efficiently are addressed: the interval size of the variables, the management of non-linearity, and the usefulness of the various outputs.
1102.1498
John Tadrous Mr.
John Tadrous and Mohammed Nafie
On Rate-Splitting by a Secondary Link in Multiple Access Primary Network
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An achievable rate region is obtained for a primary multiple access network coexisting with a secondary link of one transmitter and a corresponding receiver. The rate region depicts the sum primary rate versus the secondary rate and is established assuming that the secondary link performs rate-splitting. The achievable rate region is the union of two types of achievable rate regions. The first type is a rate region established assuming that the secondary receiver cannot decode any primary signal, whereas the second is established assuming that the secondary receiver can decode the signal of one primary receiver. The achievable rate region is determined first assuming discrete memoryless channel (DMC) then the results are applied to a Gaussian channel. In the Gaussian channel, the performance of rate-splitting is characterized for the two types of rate regions. Moreover, a necessary and sufficient condition to determine which primary signal that the secondary receiver can decode without degrading the range of primary achievable sum rates is provided. When this condition is satisfied by a certain primary user, the secondary receiver can decode its signal and achieve larger rates without reducing the primary achievable sum rates from the case in which it does not decode any primary signal. It is also shown that, the probability of having at least one primary user satisfying this condition grows with the primary signal to noise ratio.
[ { "created": "Tue, 8 Feb 2011 03:58:35 GMT", "version": "v1" } ]
2015-03-18
[ [ "Tadrous", "John", "" ], [ "Nafie", "Mohammed", "" ] ]
An achievable rate region is obtained for a primary multiple access network coexisting with a secondary link consisting of one transmitter and a corresponding receiver. The rate region depicts the sum primary rate versus the secondary rate and is established assuming that the secondary link performs rate-splitting. The achievable rate region is the union of two types of achievable rate regions. The first type is established assuming that the secondary receiver cannot decode any primary signal, whereas the second is established assuming that the secondary receiver can decode the signal of one primary user. The achievable rate region is determined first assuming a discrete memoryless channel (DMC); the results are then applied to a Gaussian channel. In the Gaussian channel, the performance of rate-splitting is characterized for the two types of rate regions. Moreover, a necessary and sufficient condition is provided to determine which primary signal the secondary receiver can decode without degrading the range of achievable primary sum rates. When this condition is satisfied by a certain primary user, the secondary receiver can decode its signal and achieve larger rates without reducing the achievable primary sum rates relative to the case in which it does not decode any primary signal. It is also shown that the probability of having at least one primary user satisfying this condition grows with the primary signal-to-noise ratio.
2407.00676
Yuchuan Tian
Yuchuan Tian, Jianhong Han, Hanting Chen, Yuanyuan Xi, Guoyang Zhang, Jie Hu, Chao Xu, Yunhe Wang
Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation
15 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the unaffordable size and intensive computation costs of low-level vision models, All-in-One models that are designed to address a handful of low-level vision tasks simultaneously have been popular. However, existing All-in-One models are limited in terms of the range of tasks and performance. To overcome these limitations, we propose Instruct-IPT -- an All-in-One Image Processing Transformer that could effectively address manifold image restoration tasks with large inter-task gaps, such as denoising, deblurring, deraining, dehazing, and desnowing. Rather than popular feature adaptation methods, we propose weight modulation that adapts weights to specific tasks. Firstly, we figure out task-sensitive weights via a toy experiment and introduce task-specific biases on top of them. Secondly, we conduct rank analysis for a good compression strategy and perform low-rank decomposition on the biases. Thirdly, we propose synchronous training that updates the task-general backbone model and the task-specific biases simultaneously. In this way, the model is instructed to learn general and task-specific knowledge. Via our simple yet effective method that instructs the IPT to be task experts, Instruct-IPT could better cooperate between tasks with distinct characteristics at humble costs. Further, we propose to maneuver Instruct-IPT with text instructions for better user interfaces. We have conducted experiments on Instruct-IPT to demonstrate the effectiveness of our method on manifold tasks, and we have effectively extended our method to diffusion denoisers as well. The code is available at https://github.com/huawei-noah/Pretrained-IPT.
[ { "created": "Sun, 30 Jun 2024 12:13:34 GMT", "version": "v1" } ]
2024-07-02
[ [ "Tian", "Yuchuan", "" ], [ "Han", "Jianhong", "" ], [ "Chen", "Hanting", "" ], [ "Xi", "Yuanyuan", "" ], [ "Zhang", "Guoyang", "" ], [ "Hu", "Jie", "" ], [ "Xu", "Chao", "" ], [ "Wang", "Yunhe", "" ] ]
Due to the unaffordable size and intensive computation costs of low-level vision models, All-in-One models designed to address several low-level vision tasks simultaneously have become popular. However, existing All-in-One models are limited in the range of tasks they cover and in performance. To overcome these limitations, we propose Instruct-IPT, an All-in-One Image Processing Transformer that can effectively address a broad range of image restoration tasks with large inter-task gaps, such as denoising, deblurring, deraining, dehazing, and desnowing. Rather than the popular feature adaptation methods, we propose weight modulation, which adapts weights to specific tasks. Firstly, we identify task-sensitive weights via a toy experiment and introduce task-specific biases on top of them. Secondly, we conduct a rank analysis to find a good compression strategy and perform low-rank decomposition on the biases. Thirdly, we propose synchronous training, which updates the task-general backbone model and the task-specific biases simultaneously. In this way, the model is instructed to learn both general and task-specific knowledge. Via our simple yet effective method that instructs the IPT to become a set of task experts, Instruct-IPT cooperates better across tasks with distinct characteristics at modest cost. Further, we propose to control Instruct-IPT with text instructions for a better user interface. We have conducted experiments on Instruct-IPT to demonstrate the effectiveness of our method on a variety of tasks, and we have effectively extended our method to diffusion denoisers as well. The code is available at https://github.com/huawei-noah/Pretrained-IPT.
1908.09530
Aakash K.T.
Aakash KT, Parikshit Sakurikar, Saurabh Saini, P. J. Narayanan
A Flexible Neural Renderer for Material Visualization
10 pages
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photo realism in computer generated imagery is crucially dependent on how well an artist is able to recreate real-world materials in the scene. The workflow for material modeling and editing typically involves manual tweaking of material parameters and uses a standard path tracing engine for visual feedback. A lot of time may be spent in iterative selection and rendering of materials at an appropriate quality. In this work, we propose a convolutional neural network based workflow which quickly generates high-quality ray traced material visualizations on a shaderball. Our novel architecture allows for control over environment lighting and assists material selection along with the ability to render spatially-varying materials. Additionally, our network enables control over environment lighting which gives an artist more freedom and provides better visualization of the rendered material. Comparison with state-of-the-art denoising and neural rendering techniques suggests that our neural renderer performs faster and better. We provide a interactive visualization tool and release our training dataset to foster further research in this area.
[ { "created": "Mon, 26 Aug 2019 08:52:53 GMT", "version": "v1" } ]
2019-08-27
[ [ "KT", "Aakash", "" ], [ "Sakurikar", "Parikshit", "" ], [ "Saini", "Saurabh", "" ], [ "Narayanan", "P. J.", "" ] ]
Photorealism in computer-generated imagery depends crucially on how well an artist is able to recreate real-world materials in the scene. The workflow for material modeling and editing typically involves manual tweaking of material parameters and uses a standard path tracing engine for visual feedback. A lot of time may be spent in iterative selection and rendering of materials at an appropriate quality. In this work, we propose a convolutional neural network based workflow that quickly generates high-quality ray-traced material visualizations on a shaderball. Our novel architecture assists material selection and supports the rendering of spatially-varying materials. Additionally, our network enables control over environment lighting, which gives an artist more freedom and provides better visualization of the rendered material. Comparison with state-of-the-art denoising and neural rendering techniques suggests that our neural renderer performs faster and better. We provide an interactive visualization tool and release our training dataset to foster further research in this area.
1405.3491
Andrej Gajduk
Andrej Gajduk, Zoran Utkovski, Lasko Basnarkov, Ljupco Kocarev
Energy-efficiency in Decentralized Wireless Networks: A Game-theoretic Approach inspired by Evolutionary Biology
The paper is accepted for publication at the International Workshop on Physics-inspired Paradigms in Wireless Communications and Networks - PHYSCOMNET 2014, in conjunction with the 12th Intl. Symposium on Modelling and Optimization in Mobile, Ad Hoc, and Wireless Networks - WIOPT, May 12-16, 2014, Hammamet, Tunisia
null
null
null
cs.NI cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy efficiency is gaining importance in wireless communication networks which have nodes with limited energy supply and signal processing capabilities. We present a numerical study of cooperative communication scenarios based on simple local rules. This is in contrast to most of the approaches in the literature which enforce cooperation by using complex algorithms and require strategic complexity of the network nodes. The approach is motivated by recent results in evolutionary biology which suggest that, if certain mechanism is at work, cooperation can be favoured by natural selection, i. e. even selfish actions of the individual nodes can lead to emergence of cooperative behaviour in the network. The results of the simulations in the context of wireless communication networks verify these observations and indicate that uncomplicated local rules, followed by simple fitness evaluation, can generate network behaviour which yields global energy efficiency.
[ { "created": "Wed, 14 May 2014 13:31:18 GMT", "version": "v1" } ]
2014-11-27
[ [ "Gajduk", "Andrej", "" ], [ "Utkovski", "Zoran", "" ], [ "Basnarkov", "Lasko", "" ], [ "Kocarev", "Ljupco", "" ] ]
Energy efficiency is gaining importance in wireless communication networks whose nodes have limited energy supply and signal processing capabilities. We present a numerical study of cooperative communication scenarios based on simple local rules. This is in contrast to most approaches in the literature, which enforce cooperation by using complex algorithms and require strategic complexity of the network nodes. The approach is motivated by recent results in evolutionary biology which suggest that, if a certain mechanism is at work, cooperation can be favoured by natural selection, i.e., even selfish actions of the individual nodes can lead to the emergence of cooperative behaviour in the network. The results of simulations in the context of wireless communication networks verify these observations and indicate that uncomplicated local rules, followed by simple fitness evaluation, can generate network behaviour which yields global energy efficiency.
1901.10165
Fabio Cunial
Fabio Cunial and Djamal Belazzougui
Fully-functional bidirectional Burrows-Wheeler indexes
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a string $T$ on an alphabet of size $\sigma$, we describe a bidirectional Burrows-Wheeler index that takes $O(|T|\log{\sigma})$ bits of space, and that supports the addition \emph{and removal} of one character, on the left or right side of any substring of $T$, in constant time. Previously known data structures that used the same space allowed constant-time addition to any substring of $T$, but they could support removal only from specific substrings of $T$. We also describe an index that supports bidirectional addition and removal in $O(\log{\log{|T|}})$ time, and that occupies a number of words proportional to the number of left and right extensions of the maximal repeats of $T$. We use such fully-functional indexes to implement bidirectional, frequency-aware, variable-order de Bruijn graphs in small space, with no upper bound on their order, and supporting natural criteria for increasing and decreasing the order during traversal.
[ { "created": "Tue, 29 Jan 2019 08:33:09 GMT", "version": "v1" }, { "created": "Sun, 9 Jun 2019 14:08:49 GMT", "version": "v2" } ]
2019-06-11
[ [ "Cunial", "Fabio", "" ], [ "Belazzougui", "Djamal", "" ] ]
Given a string $T$ on an alphabet of size $\sigma$, we describe a bidirectional Burrows-Wheeler index that takes $O(|T|\log{\sigma})$ bits of space, and that supports the addition \emph{and removal} of one character, on the left or right side of any substring of $T$, in constant time. Previously known data structures that used the same space allowed constant-time addition to any substring of $T$, but they could support removal only from specific substrings of $T$. We also describe an index that supports bidirectional addition and removal in $O(\log{\log{|T|}})$ time, and that occupies a number of words proportional to the number of left and right extensions of the maximal repeats of $T$. We use such fully-functional indexes to implement bidirectional, frequency-aware, variable-order de Bruijn graphs in small space, with no upper bound on their order, and supporting natural criteria for increasing and decreasing the order during traversal.
2306.04636
Liming Jiang
Shuai Yang, Liming Jiang, Ziwei Liu, Chen Change Loy
GP-UNIT: Generative Prior for Versatile Unsupervised Image-to-Image Translation
Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Code: https://github.com/williamyang1991/GP-UNIT Project page: https://www.mmlab-ntu.com/project/gpunit/. arXiv admin note: substantial text overlap with arXiv:2204.03641
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in deep learning have witnessed many successful unsupervised image-to-image translation models that learn correspondences between two visual domains without paired data. However, it is still a great challenge to build robust mappings between various domains especially for those with drastic visual discrepancies. In this paper, we introduce a novel versatile framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), that improves the quality, applicability and controllability of the existing translation models. The key idea of GP-UNIT is to distill the generative prior from pre-trained class-conditional GANs to build coarse-level cross-domain correspondences, and to apply the learned prior to adversarial translations to excavate fine-level correspondences. With the learned multi-level content correspondences, GP-UNIT is able to perform valid translations between both close domains and distant domains. For close domains, GP-UNIT can be conditioned on a parameter to determine the intensity of the content correspondences during translation, allowing users to balance between content and style consistency. For distant domains, semi-supervised learning is explored to guide GP-UNIT to discover accurate semantic correspondences that are hard to learn solely from the appearance. We validate the superiority of GP-UNIT over state-of-the-art translation models in robust, high-quality and diversified translations between various domains through extensive experiments.
[ { "created": "Wed, 7 Jun 2023 17:59:22 GMT", "version": "v1" } ]
2023-06-08
[ [ "Yang", "Shuai", "" ], [ "Jiang", "Liming", "" ], [ "Liu", "Ziwei", "" ], [ "Loy", "Chen Change", "" ] ]
Recent advances in deep learning have witnessed many successful unsupervised image-to-image translation models that learn correspondences between two visual domains without paired data. However, it is still a great challenge to build robust mappings between various domains especially for those with drastic visual discrepancies. In this paper, we introduce a novel versatile framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), that improves the quality, applicability and controllability of the existing translation models. The key idea of GP-UNIT is to distill the generative prior from pre-trained class-conditional GANs to build coarse-level cross-domain correspondences, and to apply the learned prior to adversarial translations to excavate fine-level correspondences. With the learned multi-level content correspondences, GP-UNIT is able to perform valid translations between both close domains and distant domains. For close domains, GP-UNIT can be conditioned on a parameter to determine the intensity of the content correspondences during translation, allowing users to balance between content and style consistency. For distant domains, semi-supervised learning is explored to guide GP-UNIT to discover accurate semantic correspondences that are hard to learn solely from the appearance. We validate the superiority of GP-UNIT over state-of-the-art translation models in robust, high-quality and diversified translations between various domains through extensive experiments.
2202.03654
Mohammad Vahid Jamali
Mohammad Vahid Jamali, Mohammad Fereydounian, Hessam Mahdavifar, Hamed Hassani
Low-Complexity Decoding of a Class of Reed-Muller Subcodes for Low-Capacity Channels
Accepted for presentation in the 2022 IEEE International Conference on Communications (ICC)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a low-complexity and low-latency decoding algorithm for a class of Reed-Muller (RM) subcodes that are defined based on the product of smaller RM codes. More specifically, the input sequence is shaped as a multi-dimensional array, and the encoding over each dimension is done separately via a smaller RM encoder. Similarly, the decoding is performed over each dimension via a low-complexity decoder for smaller RM codes. The proposed construction is of particular interest to low-capacity channels that are relevant to emerging low-rate communication scenarios. We present an efficient soft-input soft-output (SISO) iterative decoding algorithm for the product of RM codes and demonstrate its superiority compared to hard decoding over RM code components. The proposed coding scheme has decoding (as well as encoding) complexity of $\mathcal{O}(n\log n)$ and latency of $\mathcal{O}(\log n)$ for blocklength $n$. This research provides a general framework for efficient decoding of RM codes.
[ { "created": "Tue, 8 Feb 2022 05:18:37 GMT", "version": "v1" } ]
2022-02-09
[ [ "Jamali", "Mohammad Vahid", "" ], [ "Fereydounian", "Mohammad", "" ], [ "Mahdavifar", "Hessam", "" ], [ "Hassani", "Hamed", "" ] ]
We present a low-complexity and low-latency decoding algorithm for a class of Reed-Muller (RM) subcodes that are defined based on the product of smaller RM codes. More specifically, the input sequence is shaped as a multi-dimensional array, and the encoding over each dimension is done separately via a smaller RM encoder. Similarly, the decoding is performed over each dimension via a low-complexity decoder for smaller RM codes. The proposed construction is of particular interest to low-capacity channels that are relevant to emerging low-rate communication scenarios. We present an efficient soft-input soft-output (SISO) iterative decoding algorithm for the product of RM codes and demonstrate its superiority compared to hard decoding over RM code components. The proposed coding scheme has decoding (as well as encoding) complexity of $\mathcal{O}(n\log n)$ and latency of $\mathcal{O}(\log n)$ for blocklength $n$. This research provides a general framework for efficient decoding of RM codes.
2406.13881
Luke Marzen
Luke Marzen, Akash Dutta, Ali Jannesari
Static Generation of Efficient OpenMP Offload Data Mappings
Accepted to the 2024 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24)
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Increasing heterogeneity in HPC architectures and compiler advancements have led to OpenMP being frequently used to enable computations on heterogeneous devices. However, the efficient movement of data on heterogeneous computing platforms is crucial for achieving high utilization. The implicit OpenMP data-mapping rules often result in redundant data transfer, which can be a bottleneck for program performance. Programmers must explicitly map data between the host and connected accelerator devices to achieve efficient data movement. For this, OpenMP offers the target data and target update constructs. Ensuring efficient data transfer requires programmers to reason about complex data flow. This can be a laborious and error-prone process since the programmer must keep a mental model of data validity and lifetime spanning multiple data environments. Any automated analysis should maximize data reuse, minimize data transfer, and must consider control flow and context from function call sites, making the analysis interprocedural and context sensitive. In this paper, we present a static analysis tool, OMPDart (OpenMP DAta Reduction Tool), for OpenMP programs that models data dependencies between host and device regions and applies source code transformations to achieve efficient data transfer. The analysis is based on a hybrid data structure that joins an Abstract Syntax Tree (AST) with a Control Flow Graph (CFG). Our evaluations on nine HPC benchmarks demonstrate that OMPDart is capable of generating effective data mapping constructs that substantially reduce data transfer between host and device. OMPDart helps reduce data transfers by 85% and improves runtime performance by 1.6x over an expert-defined implementation of LULESH 2.0.
[ { "created": "Wed, 19 Jun 2024 23:21:23 GMT", "version": "v1" } ]
2024-06-21
[ [ "Marzen", "Luke", "" ], [ "Dutta", "Akash", "" ], [ "Jannesari", "Ali", "" ] ]
Increasing heterogeneity in HPC architectures and compiler advancements have led to OpenMP being frequently used to enable computations on heterogeneous devices. However, the efficient movement of data on heterogeneous computing platforms is crucial for achieving high utilization. The implicit OpenMP data-mapping rules often result in redundant data transfer, which can be a bottleneck for program performance. Programmers must explicitly map data between the host and connected accelerator devices to achieve efficient data movement. For this, OpenMP offers the target data and target update constructs. Ensuring efficient data transfer requires programmers to reason about complex data flow. This can be a laborious and error-prone process since the programmer must keep a mental model of data validity and lifetime spanning multiple data environments. Any automated analysis should maximize data reuse, minimize data transfer, and must consider control flow and context from function call sites, making the analysis interprocedural and context sensitive. In this paper, we present a static analysis tool, OMPDart (OpenMP DAta Reduction Tool), for OpenMP programs that models data dependencies between host and device regions and applies source code transformations to achieve efficient data transfer. The analysis is based on a hybrid data structure that joins an Abstract Syntax Tree (AST) with a Control Flow Graph (CFG). Our evaluations on nine HPC benchmarks demonstrate that OMPDart is capable of generating effective data mapping constructs that substantially reduce data transfer between host and device. OMPDart helps reduce data transfers by 85% and improves runtime performance by 1.6x over an expert-defined implementation of LULESH 2.0.
1711.04596
William Blum
Mohit Rajpal, William Blum, Rishabh Singh
Not all bytes are equal: Neural byte sieve for fuzzing
null
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.
[ { "created": "Fri, 10 Nov 2017 01:29:47 GMT", "version": "v1" } ]
2017-11-15
[ [ "Rajpal", "Mohit", "" ], [ "Blum", "William", "" ], [ "Singh", "Rishabh", "" ] ]
Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.
2106.14574
Paula Czarnowska
Paula Czarnowska, Yogarshi Vyas, Kashif Shah
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Accepted for publication in Transaction of the Association for Computational Linguistics (TACL), 2021. The arXiv version is a pre-MIT Press publication version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measuring bias is key for better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics which quantify the differences in a model's behaviour across a range of demographic groups. In this work, we shed more light on the differences and similarities between the fairness metrics used in NLP. First, we unify a broad range of existing metrics under three generalized fairness metrics, revealing the connections between them. Next, we carry out an extensive empirical comparison of existing metrics and demonstrate that the observed differences in bias measurement can be systematically explained via differences in parameter choices for our generalized metrics.
[ { "created": "Mon, 28 Jun 2021 11:02:33 GMT", "version": "v1" } ]
2021-06-29
[ [ "Czarnowska", "Paula", "" ], [ "Vyas", "Yogarshi", "" ], [ "Shah", "Kashif", "" ] ]
Measuring bias is key for better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics which quantify the differences in a model's behaviour across a range of demographic groups. In this work, we shed more light on the differences and similarities between the fairness metrics used in NLP. First, we unify a broad range of existing metrics under three generalized fairness metrics, revealing the connections between them. Next, we carry out an extensive empirical comparison of existing metrics and demonstrate that the observed differences in bias measurement can be systematically explained via differences in parameter choices for our generalized metrics.
2310.02861
Xiangyu Dong
Xiangyu Dong, Xingyi Zhang, Sibo Wang
Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph-level anomaly detection has gained significant attention as it finds applications in various domains, such as cancer diagnosis and enzyme prediction. However, existing methods fail to capture the spectral properties of graph anomalies, resulting in unexplainable framework design and unsatisfying performance. In this paper, we re-investigate the spectral differences between anomalous and normal graphs. Our main observation shows a significant disparity in the accumulated spectral energy between these two classes. Moreover, we prove that the accumulated spectral energy of the graph signal can be represented by its Rayleigh Quotient, indicating that the Rayleigh Quotient is a driving factor behind the anomalous properties of graphs. Motivated by this, we propose Rayleigh Quotient Graph Neural Network (RQGNN), the first spectral GNN that explores the inherent spectral features of anomalous graphs for graph-level anomaly detection. Specifically, we introduce a novel framework with two components: the Rayleigh Quotient learning component (RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly captures the Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the spectral space of graphs. Extensive experiments on 10 real-world datasets show that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC, demonstrating the effectiveness of our framework. Our code is available at https://github.com/xydong127/RQGNN.
[ { "created": "Wed, 4 Oct 2023 14:47:27 GMT", "version": "v1" }, { "created": "Thu, 5 Oct 2023 03:33:59 GMT", "version": "v2" }, { "created": "Fri, 23 Feb 2024 08:54:13 GMT", "version": "v3" }, { "created": "Thu, 28 Mar 2024 03:29:34 GMT", "version": "v4" } ]
2024-03-29
[ [ "Dong", "Xiangyu", "" ], [ "Zhang", "Xingyi", "" ], [ "Wang", "Sibo", "" ] ]
Graph-level anomaly detection has gained significant attention as it finds applications in various domains, such as cancer diagnosis and enzyme prediction. However, existing methods fail to capture the spectral properties of graph anomalies, resulting in unexplainable framework design and unsatisfying performance. In this paper, we re-investigate the spectral differences between anomalous and normal graphs. Our main observation shows a significant disparity in the accumulated spectral energy between these two classes. Moreover, we prove that the accumulated spectral energy of the graph signal can be represented by its Rayleigh Quotient, indicating that the Rayleigh Quotient is a driving factor behind the anomalous properties of graphs. Motivated by this, we propose Rayleigh Quotient Graph Neural Network (RQGNN), the first spectral GNN that explores the inherent spectral features of anomalous graphs for graph-level anomaly detection. Specifically, we introduce a novel framework with two components: the Rayleigh Quotient learning component (RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly captures the Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the spectral space of graphs. Extensive experiments on 10 real-world datasets show that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC, demonstrating the effectiveness of our framework. Our code is available at https://github.com/xydong127/RQGNN.
2403.06515
Edon Kelmendi
Toghrul Karimov, Edon Kelmendi, Jo\"el Ouaknine, James Worrell
Multiple Reachability in Linear Dynamical Systems
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We consider reachability decision problems for linear dynamical systems: Given a linear map on $\mathbb{R}^d$, together with source and target sets, determine whether there is a point in the source set whose orbit, obtained by repeatedly applying the linear map, enters the target set. When the source and target sets are semialgebraic, this problem can be reduced to a point-to-polytope reachability question. The latter is generally believed not to be substantially harder than the well-known Skolem and Positivity Problems. The situation is markedly different for multiple reachability, i.e. the question of whether the orbit visits the target set at least m times, for some given positive integer m. In this paper, we prove that when the source set is semialgebraic and the target set is a hyperplane, multiple reachability is undecidable; in fact we already obtain undecidability in ambient dimension d = 10 and with fixed m = 9. Moreover, as we observe that procedures for dimensions 3 up to 9 would imply strong results pertaining to effective solutions of Diophantine equations, we mainly focus on the affine plane ($\mathbb{R}^2$). We obtain two main positive results. We show that multiple reachability is decidable for halfplane targets, and that it is also decidable for general semialgebraic targets, provided the linear map is a rotation. The latter result involves a new method, based on intersections of algebraic subgroups with subvarieties, due to Bombieri and Zannier.
[ { "created": "Mon, 11 Mar 2024 08:43:42 GMT", "version": "v1" } ]
2024-03-12
[ [ "Karimov", "Toghrul", "" ], [ "Kelmendi", "Edon", "" ], [ "Ouaknine", "Joël", "" ], [ "Worrell", "James", "" ] ]
We consider reachability decision problems for linear dynamical systems: Given a linear map on $\mathbb{R}^d$, together with source and target sets, determine whether there is a point in the source set whose orbit, obtained by repeatedly applying the linear map, enters the target set. When the source and target sets are semialgebraic, this problem can be reduced to a point-to-polytope reachability question. The latter is generally believed not to be substantially harder than the well-known Skolem and Positivity Problems. The situation is markedly different for multiple reachability, i.e. the question of whether the orbit visits the target set at least m times, for some given positive integer m. In this paper, we prove that when the source set is semialgebraic and the target set is a hyperplane, multiple reachability is undecidable; in fact we already obtain undecidability in ambient dimension d = 10 and with fixed m = 9. Moreover, as we observe that procedures for dimensions 3 up to 9 would imply strong results pertaining to effective solutions of Diophantine equations, we mainly focus on the affine plane ($\mathbb{R}^2$). We obtain two main positive results. We show that multiple reachability is decidable for halfplane targets, and that it is also decidable for general semialgebraic targets, provided the linear map is a rotation. The latter result involves a new method, based on intersections of algebraic subgroups with subvarieties, due to Bombieri and Zannier.
2311.10293
Yanzhao Wu
Yanzhao Wu, Ka-Ho Chow, Wenqi Wei, Ling Liu
Hierarchical Pruning of Deep Ensembles with Focal Diversity
To appear on ACM Transactions on Intelligent Systems and Technology
null
10.1145/3633286
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network ensembles combine the wisdom of multiple deep neural networks to improve the generalizability and robustness over individual networks. It has gained increasing popularity to study deep ensemble techniques in the deep learning community. Some mission-critical applications utilize a large number of deep neural networks to form deep ensembles to achieve desired accuracy and resilience, which introduces high time and space costs for ensemble execution. However, it remains a critical challenge to determine whether a small subset of the entire deep ensemble can achieve the same or better generalizability, and how to effectively identify such small deep ensembles for improving the space and time efficiency of ensemble execution. This paper presents a novel deep ensemble pruning approach, which can efficiently identify smaller deep ensembles and provide higher ensemble accuracy than the entire deep ensemble of a large number of member networks. Our hierarchical ensemble pruning approach (HQ) leverages three novel ensemble pruning techniques. First, we show that the focal diversity metrics can accurately capture the complementary capacity of the member networks of an ensemble, which can guide ensemble pruning. Second, we design a focal diversity based hierarchical pruning approach, which will iteratively find high quality deep ensembles with low cost and high accuracy. Third, we develop a focal diversity consensus method to integrate multiple focal diversity metrics to refine ensemble pruning results, where smaller deep ensembles can be effectively identified to offer high accuracy, high robustness and high efficiency. Evaluated using popular benchmark datasets, we demonstrate that the proposed hierarchical ensemble pruning approach can effectively identify high quality deep ensembles with better generalizability while being more time and space efficient in ensemble decision making.
[ { "created": "Fri, 17 Nov 2023 02:48:20 GMT", "version": "v1" } ]
2023-11-20
[ [ "Wu", "Yanzhao", "" ], [ "Chow", "Ka-Ho", "" ], [ "Wei", "Wenqi", "" ], [ "Liu", "Ling", "" ] ]
Deep neural network ensembles combine the wisdom of multiple deep neural networks to improve the generalizability and robustness over individual networks. It has gained increasing popularity to study deep ensemble techniques in the deep learning community. Some mission-critical applications utilize a large number of deep neural networks to form deep ensembles to achieve desired accuracy and resilience, which introduces high time and space costs for ensemble execution. However, it remains a critical challenge to determine whether a small subset of the entire deep ensemble can achieve the same or better generalizability, and how to effectively identify such small deep ensembles for improving the space and time efficiency of ensemble execution. This paper presents a novel deep ensemble pruning approach, which can efficiently identify smaller deep ensembles and provide higher ensemble accuracy than the entire deep ensemble of a large number of member networks. Our hierarchical ensemble pruning approach (HQ) leverages three novel ensemble pruning techniques. First, we show that the focal diversity metrics can accurately capture the complementary capacity of the member networks of an ensemble, which can guide ensemble pruning. Second, we design a focal diversity based hierarchical pruning approach, which will iteratively find high quality deep ensembles with low cost and high accuracy. Third, we develop a focal diversity consensus method to integrate multiple focal diversity metrics to refine ensemble pruning results, where smaller deep ensembles can be effectively identified to offer high accuracy, high robustness and high efficiency. Evaluated using popular benchmark datasets, we demonstrate that the proposed hierarchical ensemble pruning approach can effectively identify high quality deep ensembles with better generalizability while being more time and space efficient in ensemble decision making.
2404.11895
Wei Wu
Wei Wu, Qingnan Fan, Shuai Qin, Hong Gu, Ruoyu Zhao, Antoni B. Chan
FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models
Accepted by ECCV-2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise image editing with text-to-image models has attracted increasing interest due to their remarkable generative capabilities and user-friendly nature. However, such attempts face the pivotal challenge of misalignment between the intended precise editing target regions and the broader area impacted by the guidance in practice. Despite excellent methods leveraging attention mechanisms that have been developed to refine the editing guidance, these approaches necessitate modifications through complex network architecture and are limited to specific editing tasks. In this work, we re-examine the diffusion process and misalignment problem from a frequency perspective, revealing that, due to the power law of natural images and the decaying noise schedule, the denoising network primarily recovers low-frequency image components during the earlier timesteps and thus brings excessive low-frequency signals for editing. Leveraging this insight, we introduce a novel fine-tuning free approach that employs progressive $\textbf{Fre}$qu$\textbf{e}$ncy truncation to refine the guidance of $\textbf{Diff}$usion models for universal editing tasks ($\textbf{FreeDiff}$). Our method achieves comparable results with state-of-the-art methods across a variety of editing tasks and on a diverse set of images, highlighting its potential as a versatile tool in image editing applications.
[ { "created": "Thu, 18 Apr 2024 04:47:28 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 06:48:37 GMT", "version": "v2" } ]
2024-08-14
[ [ "Wu", "Wei", "" ], [ "Fan", "Qingnan", "" ], [ "Qin", "Shuai", "" ], [ "Gu", "Hong", "" ], [ "Zhao", "Ruoyu", "" ], [ "Chan", "Antoni B.", "" ] ]
Precise image editing with text-to-image models has attracted increasing interest due to their remarkable generative capabilities and user-friendly nature. However, such attempts face the pivotal challenge of misalignment between the intended precise editing target regions and the broader area impacted by the guidance in practice. Despite excellent methods leveraging attention mechanisms that have been developed to refine the editing guidance, these approaches necessitate modifications through complex network architecture and are limited to specific editing tasks. In this work, we re-examine the diffusion process and misalignment problem from a frequency perspective, revealing that, due to the power law of natural images and the decaying noise schedule, the denoising network primarily recovers low-frequency image components during the earlier timesteps and thus brings excessive low-frequency signals for editing. Leveraging this insight, we introduce a novel fine-tuning free approach that employs progressive $\textbf{Fre}$qu$\textbf{e}$ncy truncation to refine the guidance of $\textbf{Diff}$usion models for universal editing tasks ($\textbf{FreeDiff}$). Our method achieves comparable results with state-of-the-art methods across a variety of editing tasks and on a diverse set of images, highlighting its potential as a versatile tool in image editing applications.
1110.0025
N. Nisan
N. Nisan, A. Ronen
Computationally Feasible VCG Mechanisms
null
Journal Of Artificial Intelligence Research, Volume 29, pages 19-47, 2007
10.1613/jair.2046
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major achievement of mechanism design theory is a general method for the construction of truthful mechanisms called VCG (Vickrey, Clarke, Groves). When applying this method to complex problems such as combinatorial auctions, a difficulty arises: VCG mechanisms are required to compute optimal outcomes and are, therefore, computationally infeasible. However, if the optimal outcome is replaced by the results of a sub-optimal algorithm, the resulting mechanism (termed VCG-based) is no longer necessarily truthful. The first part of this paper studies this phenomenon in depth and shows that it is near universal. Specifically, we prove that essentially all reasonable approximations or heuristics for combinatorial auctions as well as a wide class of cost minimization problems yield non-truthful VCG-based mechanisms. We generalize these results for affine maximizers. The second part of this paper proposes a general method for circumventing the above problem. We introduce a modification of VCG-based mechanisms in which the agents are given a chance to improve the output of the underlying algorithm. When the agents behave truthfully, the welfare obtained by the mechanism is at least as good as the one obtained by the algorithm's output. We provide a strong rationale for truth-telling behavior. Our method satisfies individual rationality as well.
[ { "created": "Fri, 30 Sep 2011 20:55:24 GMT", "version": "v1" } ]
2011-10-04
[ [ "Nisan", "N.", "" ], [ "Ronen", "A.", "" ] ]
A major achievement of mechanism design theory is a general method for the construction of truthful mechanisms called VCG (Vickrey, Clarke, Groves). When applying this method to complex problems such as combinatorial auctions, a difficulty arises: VCG mechanisms are required to compute optimal outcomes and are, therefore, computationally infeasible. However, if the optimal outcome is replaced by the results of a sub-optimal algorithm, the resulting mechanism (termed VCG-based) is no longer necessarily truthful. The first part of this paper studies this phenomenon in depth and shows that it is near universal. Specifically, we prove that essentially all reasonable approximations or heuristics for combinatorial auctions as well as a wide class of cost minimization problems yield non-truthful VCG-based mechanisms. We generalize these results for affine maximizers. The second part of this paper proposes a general method for circumventing the above problem. We introduce a modification of VCG-based mechanisms in which the agents are given a chance to improve the output of the underlying algorithm. When the agents behave truthfully, the welfare obtained by the mechanism is at least as good as the one obtained by the algorithm's output. We provide a strong rationale for truth-telling behavior. Our method satisfies individual rationality as well.
2101.01168
Wieslaw Kopec
Rafa{\l} Mas{\l}yk, Kinga Skorupska, Piotr Gago, Marcin Niewi\'nski, Barbara Karpowicz, Anna Jaskulska, Katarzyna Abramczuk, Wies{\l}aw Kope\'c
Deploying Crowdsourcing for Workflow Driven Business Process
null
null
null
null
cs.HC cs.CY cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main goal of this paper is to discuss how to integrate the possibilities of crowdsourcing platforms with systems supporting workflow to enable the engagement and interaction with business tasks of a wider group of people. Thus, this work is an attempt to expand the functional capabilities of typical business systems by allowing selected process tasks to be performed by unlimited human resources. Opening business tasks to crowdsourcing within established Business Process Management Systems (BPMS) will improve the flexibility of company processes and allow for a lower workload and greater specialization among the staff employed on-site. The presented conceptual work is based on the current international standards in this field, promoted by the Workflow Management Coalition. To this end, the functioning of business platforms was analysed and their functionality was presented visually, followed by a proposal and a discussion of how to implement crowdsourcing into workflow systems.
[ { "created": "Mon, 4 Jan 2021 18:57:21 GMT", "version": "v1" } ]
2021-01-05
[ [ "Masłyk", "Rafał", "" ], [ "Skorupska", "Kinga", "" ], [ "Gago", "Piotr", "" ], [ "Niewiński", "Marcin", "" ], [ "Karpowicz", "Barbara", "" ], [ "Jaskulska", "Anna", "" ], [ "Abramczuk", "Katarzyna", "" ], [ "Kopeć", "Wiesław", "" ] ]
The main goal of this paper is to discuss how to integrate the possibilities of crowdsourcing platforms with systems supporting workflow to enable the engagement and interaction with business tasks of a wider group of people. Thus, this work is an attempt to expand the functional capabilities of typical business systems by allowing selected process tasks to be performed by unlimited human resources. Opening business tasks to crowdsourcing within established Business Process Management Systems (BPMS) will improve the flexibility of company processes and allow for a lower workload and greater specialization among the staff employed on-site. The presented conceptual work is based on the current international standards in this field, promoted by the Workflow Management Coalition. To this end, the functioning of business platforms was analysed and their functionality was presented visually, followed by a proposal and a discussion of how to implement crowdsourcing into workflow systems.
2407.08506
I\~nigo Iturrate
Diego Dall'Alba, Lorenzo Busellato, Thiusius Rajeeth Savarimuthu, Zhuoqi Cheng, I\~nigo Iturrate
Imitation Learning for Robotic Assisted Ultrasound Examination of Deep Venous Thrombosis using Kernelized Movement Primitives
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Vein Thrombosis (DVT) is a common yet potentially fatal condition, often leading to critical complications like pulmonary embolism. DVT is commonly diagnosed using Ultrasound (US) imaging, which can be inconsistent due to its high dependence on the operator's skill. Robotic US Systems (RUSs) aim to improve diagnostic test consistency but face challenges with the complex scanning pattern needed for DVT assessment, where precise control over US probe pressure is crucial for indirectly detecting occlusions. This work introduces an imitation learning method, based on Kernelized Movement Primitives (KMP), to standardize DVT US exams by training an autonomous robotic controller using sonographer demonstrations. A new recording device design enhances demonstration ergonomics, integrating with US probes and enabling seamless force and position data recording. KMPs are used to capture scanning skills, linking scan trajectory and force, enabling generalization beyond the demonstrations. Our approach, evaluated on synthetic models and volunteers, shows that the KMP-based RUS can replicate an expert's force control and image quality in DVT US examination. It outperforms previous methods using manually defined force profiles, improving exam standardization and reducing reliance on specialized sonographers.
[ { "created": "Thu, 11 Jul 2024 13:44:41 GMT", "version": "v1" } ]
2024-07-12
[ [ "Dall'Alba", "Diego", "" ], [ "Busellato", "Lorenzo", "" ], [ "Savarimuthu", "Thiusius Rajeeth", "" ], [ "Cheng", "Zhuoqi", "" ], [ "Iturrate", "Iñigo", "" ] ]
Deep Vein Thrombosis (DVT) is a common yet potentially fatal condition, often leading to critical complications like pulmonary embolism. DVT is commonly diagnosed using Ultrasound (US) imaging, which can be inconsistent due to its high dependence on the operator's skill. Robotic US Systems (RUSs) aim to improve diagnostic test consistency but face challenges with the complex scanning pattern needed for DVT assessment, where precise control over US probe pressure is crucial for indirectly detecting occlusions. This work introduces an imitation learning method, based on Kernelized Movement Primitives (KMP), to standardize DVT US exams by training an autonomous robotic controller using sonographer demonstrations. A new recording device design enhances demonstration ergonomics, integrating with US probes and enabling seamless force and position data recording. KMPs are used to capture scanning skills, linking scan trajectory and force, enabling generalization beyond the demonstrations. Our approach, evaluated on synthetic models and volunteers, shows that the KMP-based RUS can replicate an expert's force control and image quality in DVT US examination. It outperforms previous methods using manually defined force profiles, improving exam standardization and reducing reliance on specialized sonographers.
1903.09402
Akihito Taya
Akihito Taya, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto
Concurrent Transmission Scheduling for Perceptual Data Sharing in mmWave Vehicular Networks
IEICE TRANS. INF. & SYST
IEICE TRANSACTIONS on Information and Systems, Vol.E102-D,No.5, May 2019
10.1587/transinf.2018NTP0008
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered a key technology for sharing perceptual data because of their wide bandwidth. One challenge of data sharing in mmWave communications is broadcasting, because narrow-beam directional antennas are used to obtain high gain. Because many vehicles must share their perceptual data with others within a short time frame in order to enlarge the areas that can be perceived from the shared data, efficient concurrent-transmission scheduling that improves spatial reuse is required. This paper proposes a data sharing algorithm that employs graph-based concurrent transmission scheduling. The proposed algorithm improves spatial reuse through a rule that determines whether two transmitter-receiver pairs interfere with each other, taking into account the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications, because it achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by up to 20%.
[ { "created": "Fri, 22 Mar 2019 08:38:11 GMT", "version": "v1" } ]
2019-03-26
[ [ "Taya", "Akihito", "" ], [ "Nishio", "Takayuki", "" ], [ "Morikura", "Masahiro", "" ], [ "Yamamoto", "Koji", "" ] ]
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered a key technology for sharing perceptual data because of their wide bandwidth. One challenge of data sharing in mmWave communications is broadcasting, because narrow-beam directional antennas are used to obtain high gain. Because many vehicles must share their perceptual data with others within a short time frame in order to enlarge the areas that can be perceived from the shared data, efficient concurrent-transmission scheduling that improves spatial reuse is required. This paper proposes a data sharing algorithm that employs graph-based concurrent transmission scheduling. The proposed algorithm improves spatial reuse through a rule that determines whether two transmitter-receiver pairs interfere with each other, taking into account the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications, because it achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by up to 20%.
2204.11590
Zhenyu Li
Zhenyu Li, Zehui Chen, Ang Li, Liangji Fang, Qinhong Jiang, Xianming Liu, Junjun Jiang
Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Monocular 3D object detection (Mono3D) has achieved unprecedented success with the advent of deep learning techniques and emerging large-scale autonomous driving datasets. However, drastic performance degradation remains an understudied challenge for practical cross-domain deployment owing to the lack of labels in the target domain. In this paper, we first comprehensively investigate the significant underlying factor of the domain gap in Mono3D, where the critical observation is a depth-shift issue caused by the geometric misalignment of domains. Then, we propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D. To mitigate the depth shift, we introduce a geometry-aligned multi-scale training strategy to disentangle the camera parameters and guarantee the geometric consistency of domains. Based on this, we develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain. Benefiting from the end-to-end framework, which provides richer information about the pseudo labels, we propose a quality-aware supervision strategy that takes instance-level pseudo confidences into account and improves the effectiveness of the target-domain training process. Moreover, a positive focusing training strategy and a dynamic threshold are proposed to handle the large number of false-negative and false-positive pseudo samples. STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset. To the best of our knowledge, this is the first study to explore effective UDA methods for Mono3D.
[ { "created": "Mon, 25 Apr 2022 12:23:07 GMT", "version": "v1" }, { "created": "Wed, 27 Apr 2022 02:08:21 GMT", "version": "v2" } ]
2022-04-28
[ [ "Li", "Zhenyu", "" ], [ "Chen", "Zehui", "" ], [ "Li", "Ang", "" ], [ "Fang", "Liangji", "" ], [ "Jiang", "Qinhong", "" ], [ "Liu", "Xianming", "" ], [ "Jiang", "Junjun", "" ] ]
Monocular 3D object detection (Mono3D) has achieved unprecedented success with the advent of deep learning techniques and emerging large-scale autonomous driving datasets. However, drastic performance degradation remains an understudied challenge for practical cross-domain deployment owing to the lack of labels in the target domain. In this paper, we first comprehensively investigate the significant underlying factor of the domain gap in Mono3D, where the critical observation is a depth-shift issue caused by the geometric misalignment of domains. Then, we propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D. To mitigate the depth shift, we introduce a geometry-aligned multi-scale training strategy to disentangle the camera parameters and guarantee the geometric consistency of domains. Based on this, we develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain. Benefiting from the end-to-end framework, which provides richer information about the pseudo labels, we propose a quality-aware supervision strategy that takes instance-level pseudo confidences into account and improves the effectiveness of the target-domain training process. Moreover, a positive focusing training strategy and a dynamic threshold are proposed to handle the large number of false-negative and false-positive pseudo samples. STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset. To the best of our knowledge, this is the first study to explore effective UDA methods for Mono3D.
1811.05654
Sandeep Juneja
Sandeep Juneja and Subhashini Krishnasamy
Sample complexity of partition identification using multi-armed bandits
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a vector of probability distributions, or arms, each of which can be sampled independently, we consider the problem of identifying the partition to which this vector belongs from a finitely partitioned universe of such vectors of distributions. We study this as a pure exploration problem in multi-armed bandit settings and develop sample complexity bounds on the total mean number of samples required for identifying the correct partition with high probability. This framework subsumes well-studied problems such as finding the best arm or the best few arms. We consider distributions belonging to the single-parameter exponential family and primarily consider partitions where the vector of means of the arms lies either in a given set or in its complement. The sets considered correspond to distributions where there exists a mean above a specified threshold, where the set is a half-space, and where either the set or its complement is a polytope or, more generally, a convex set. In these settings, we characterize the lower bounds on the mean number of samples for each arm, highlighting their dependence on the problem geometry. Further, inspired by the lower bounds, we propose algorithms that match these bounds asymptotically as the probability of error decreases. Applications of this framework may be diverse; we briefly discuss one associated with finance.
[ { "created": "Wed, 14 Nov 2018 05:41:08 GMT", "version": "v1" }, { "created": "Tue, 5 Feb 2019 12:09:25 GMT", "version": "v2" } ]
2019-02-06
[ [ "Juneja", "Sandeep", "" ], [ "Krishnasamy", "Subhashini", "" ] ]
Given a vector of probability distributions, or arms, each of which can be sampled independently, we consider the problem of identifying the partition to which this vector belongs from a finitely partitioned universe of such vectors of distributions. We study this as a pure exploration problem in multi-armed bandit settings and develop sample complexity bounds on the total mean number of samples required for identifying the correct partition with high probability. This framework subsumes well-studied problems such as finding the best arm or the best few arms. We consider distributions belonging to the single-parameter exponential family and primarily consider partitions where the vector of means of the arms lies either in a given set or in its complement. The sets considered correspond to distributions where there exists a mean above a specified threshold, where the set is a half-space, and where either the set or its complement is a polytope or, more generally, a convex set. In these settings, we characterize the lower bounds on the mean number of samples for each arm, highlighting their dependence on the problem geometry. Further, inspired by the lower bounds, we propose algorithms that match these bounds asymptotically as the probability of error decreases. Applications of this framework may be diverse; we briefly discuss one associated with finance.
1805.11489
Matthieu Lequesne
Alain Couvreur and Matthieu Lequesne and Jean-Pierre Tillich
Recovering short secret keys of RLCE in polynomial time
null
null
null
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a key recovery attack against Y. Wang's Random Linear Code Encryption (RLCE) scheme recently submitted to the NIST call for post-quantum cryptography. This attack recovers the secret key for all the short key parameters proposed by the author.
[ { "created": "Tue, 29 May 2018 14:12:25 GMT", "version": "v1" } ]
2018-05-30
[ [ "Couvreur", "Alain", "" ], [ "Lequesne", "Matthieu", "" ], [ "Tillich", "Jean-Pierre", "" ] ]
We present a key recovery attack against Y. Wang's Random Linear Code Encryption (RLCE) scheme recently submitted to the NIST call for post-quantum cryptography. This attack recovers the secret key for all the short key parameters proposed by the author.
1909.04238
Ming Wu
Ming Wu, Pengcheng Wang, Kangqi Yin, Haoyu Cheng, Yun Xu and Chanchal K.Roy
LVMapper: A Large-variance Clone Detector Using Sequencing Alignment Approach
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting large-variance code clones (i.e., clones with relatively many differences) in large-scale code repositories is difficult because most current tools can only detect nearly identical or very similar clones. Detecting such clones would benefit software applications such as bug detection, code completion, and software analysis. Recently, CCAligner made an attempt to detect clones with relatively concentrated modifications, called large-gap clones. Our contribution is a novel and effective approach for detecting large-variance clones in more general cases, covering not only concentrated but also scattered code modifications. We propose a detector named LVMapper, which borrows and adapts the sequence alignment approach from bioinformatics, capable of finding two similar sequences with more differences. The ability of LVMapper was tested on both self-synthesized datasets and real cases, and the results show substantial improvement in detecting large-variance clones compared with other state-of-the-art tools, including CCAligner. Furthermore, our new tool also achieves good recall and precision for general Type-1, Type-2 and Type-3 clones on the widely used benchmarking dataset, BigCloneBench.
[ { "created": "Tue, 10 Sep 2019 02:01:28 GMT", "version": "v1" } ]
2019-09-11
[ [ "Wu", "Ming", "" ], [ "Wang", "Pengcheng", "" ], [ "Yin", "Kangqi", "" ], [ "Cheng", "Haoyu", "" ], [ "Xu", "Yun", "" ], [ "Roy", "Chanchal K.", "" ] ]
Detecting large-variance code clones (i.e., clones with relatively many differences) in large-scale code repositories is difficult because most current tools can only detect nearly identical or very similar clones. Detecting such clones would benefit software applications such as bug detection, code completion, and software analysis. Recently, CCAligner made an attempt to detect clones with relatively concentrated modifications, called large-gap clones. Our contribution is a novel and effective approach for detecting large-variance clones in more general cases, covering not only concentrated but also scattered code modifications. We propose a detector named LVMapper, which borrows and adapts the sequence alignment approach from bioinformatics, capable of finding two similar sequences with more differences. The ability of LVMapper was tested on both self-synthesized datasets and real cases, and the results show substantial improvement in detecting large-variance clones compared with other state-of-the-art tools, including CCAligner. Furthermore, our new tool also achieves good recall and precision for general Type-1, Type-2 and Type-3 clones on the widely used benchmarking dataset, BigCloneBench.