Dataset schema (field: type, value-length or list-length range):

id: string (9-10 chars)
submitter: string (1-64 chars)
authors: string (4-20.7k chars)
title: string (4-246 chars)
comments: string (1-523 chars)
journal-ref: string (4-404 chars)
doi: string (11-153 chars)
report-no: string (2-254 chars)
categories: string (5-98 chars)
license: string class (9 distinct values)
orig_abstract: string (14-3.35k chars)
versions: list (1-60 items)
update_date: string (10 chars)
authors_parsed: list (1-1.35k items)
abstract: string (11-3.34k chars)
1911.10090
Matteo Poggi
Filippo Aleotti, Matteo Poggi, Fabio Tosi, Stefano Mattoccia
Learning End-To-End Scene Flow by Distilling Single Tasks Knowledge
Accepted to AAAI 2020. Project page: https://vision.disi.unibo.it/~faleotti/dwarf.html
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene flow is a challenging task aimed at jointly estimating the 3D structure and motion of the sensed environment. Although deep learning solutions achieve outstanding performance in terms of accuracy, these approaches divide the whole problem into standalone tasks (stereo and optical flow) and address them with independent networks. Such a strategy dramatically increases the complexity of the training procedure and requires power-hungry GPUs to infer scene flow at barely 1 FPS. Conversely, we propose DWARF, a novel and lightweight architecture that infers full scene flow by jointly reasoning about depth and optical flow, and that is easily and elegantly trainable end-to-end from scratch. Moreover, since ground truth for full scene flow is scarce, we propose to leverage the knowledge learned by networks specialized in stereo or flow, for which much more data are available, to distill proxy annotations. Exhaustive experiments show that i) DWARF runs at about 10 FPS on a single high-end GPU and at about 1 FPS on an NVIDIA Jetson TX2 embedded board at KITTI resolution, with a moderate drop in accuracy compared to 10x deeper models, and ii) learning from many distilled samples is more effective than learning from the few annotated ones available. Code available at: https://github.com/FilippoAleotti/Dwarf-Tensorflow
[ { "created": "Fri, 22 Nov 2019 15:38:14 GMT", "version": "v1" } ]
2019-11-25
[ [ "Aleotti", "Filippo", "" ], [ "Poggi", "Matteo", "" ], [ "Tosi", "Fabio", "" ], [ "Mattoccia", "Stefano", "" ] ]
Scene flow is a challenging task aimed at jointly estimating the 3D structure and motion of the sensed environment. Although deep learning solutions achieve outstanding performance in terms of accuracy, these approaches divide the whole problem into standalone tasks (stereo and optical flow) and address them with independent networks. Such a strategy dramatically increases the complexity of the training procedure and requires power-hungry GPUs to infer scene flow at barely 1 FPS. Conversely, we propose DWARF, a novel and lightweight architecture that infers full scene flow by jointly reasoning about depth and optical flow, and that is easily and elegantly trainable end-to-end from scratch. Moreover, since ground truth for full scene flow is scarce, we propose to leverage the knowledge learned by networks specialized in stereo or flow, for which much more data are available, to distill proxy annotations. Exhaustive experiments show that i) DWARF runs at about 10 FPS on a single high-end GPU and at about 1 FPS on an NVIDIA Jetson TX2 embedded board at KITTI resolution, with a moderate drop in accuracy compared to 10x deeper models, and ii) learning from many distilled samples is more effective than learning from the few annotated ones available. Code available at: https://github.com/FilippoAleotti/Dwarf-Tensorflow
2107.04027
Jerome Darmont
Etienne Scholly (ERIC), Pegdwendé Sawadogo (ERIC), Pengfei Liu (ERIC), Javier Espinosa-Oviedo (ERIC), Cécile Favre (ERIC), Sabine Loudcher (ERIC), Jérôme Darmont (ERIC), Camille Noûs
goldMEDAL: A New Contribution to Generic Metadata Modeling for Data Lakes
in French. 17e journées Business Intelligence et Big Data (EDA 2021), Jul 2021, Toulouse, France
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We summarize here a paper published in 2021 at the international DOLAP workshop, associated with the EDBT and ICDT conferences. We propose goldMEDAL, a generic metadata model for data lakes based on four concepts and three modeling levels: conceptual, logical, and physical.
[ { "created": "Mon, 5 Jul 2021 07:56:27 GMT", "version": "v1" } ]
2021-07-09
[ [ "Scholly", "Etienne", "", "ERIC" ], [ "Sawadogo", "Pegdwendé", "", "ERIC" ], [ "Liu", "Pengfei", "", "ERIC" ], [ "Espinosa-Oviedo", "Javier", "", "ERIC" ], [ "Favre", "Cécile", "", "ERIC" ], [ "Loudcher", "Sabine", "", "ERIC" ], [ "Darmont", "Jérôme", "", "ERIC" ], [ "Noûs", "Camille", "" ] ]
We summarize here a paper published in 2021 at the international DOLAP workshop, associated with the EDBT and ICDT conferences. We propose goldMEDAL, a generic metadata model for data lakes based on four concepts and three modeling levels: conceptual, logical, and physical.
1708.06794
Jonti Talukdar
Jonti Talukdar and Bhavana Mehta
Human Action Recognition System using Good Features and Multilayer Perceptron Network
6 pages, 7 Figures, IEEE International Conference on Communication and Signal Processing 2017 (ICCSP 2017)
null
null
null
cs.CV cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human action recognition involves the characterization of human actions through the automated analysis of video data and is integral to the development of smart computer vision systems. However, several challenges, such as dynamic backgrounds, camera instability, complex actions, and occlusions, make robust, real-time action recognition difficult. Several complex approaches exist but are computationally intensive. This paper presents a novel approach that combines good features with an iterative optical flow algorithm to compute feature vectors, which are classified using a multilayer perceptron (MLP) network. The use of multiple features for motion descriptors enhances the quality of tracking. The resilient backpropagation algorithm is used to train the feedforward neural network, reducing the learning time. The overall system accuracy is improved by optimizing the various parameters of the multilayer perceptron network.
[ { "created": "Tue, 22 Aug 2017 19:39:45 GMT", "version": "v1" } ]
2017-08-24
[ [ "Talukdar", "Jonti", "" ], [ "Mehta", "Bhavana", "" ] ]
Human action recognition involves the characterization of human actions through the automated analysis of video data and is integral to the development of smart computer vision systems. However, several challenges, such as dynamic backgrounds, camera instability, complex actions, and occlusions, make robust, real-time action recognition difficult. Several complex approaches exist but are computationally intensive. This paper presents a novel approach that combines good features with an iterative optical flow algorithm to compute feature vectors, which are classified using a multilayer perceptron (MLP) network. The use of multiple features for motion descriptors enhances the quality of tracking. The resilient backpropagation algorithm is used to train the feedforward neural network, reducing the learning time. The overall system accuracy is improved by optimizing the various parameters of the multilayer perceptron network.
1306.2552
Francesco Silvestri
Andrea Pietracaprina and Geppino Pucci and Francesco Silvestri and Fabio Vandin
Space-Efficient Parallel Algorithms for Combinatorial Search Problems
Extended version of the paper in the Proc. of 38th International Symposium on Mathematical Foundations of Computer Science (MFCS)
null
10.4230/LIPIcs.STACS.2014.627
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present space-efficient parallel strategies for two fundamental combinatorial search problems, namely, backtrack search and branch-and-bound, both involving the visit of an $n$-node tree of height $h$ under the assumption that a node can be accessed only through its father or its children. For both problems we propose efficient algorithms that run on a $p$-processor distributed-memory machine. For backtrack search, we give a deterministic algorithm running in $O(n/p+h\log p)$ time, and a Las Vegas algorithm requiring optimal $O(n/p+h)$ time, with high probability. Building on the backtrack search algorithm, we also derive a Las Vegas algorithm for branch-and-bound which runs in $O((n/p+h\log p \log n)h\log^2 n)$ time, with high probability. A remarkable feature of our algorithms is the use of only constant space per processor, which constitutes a significant improvement upon previous algorithms whose space requirements per processor depend on the (possibly huge) tree to be explored.
[ { "created": "Tue, 11 Jun 2013 15:29:17 GMT", "version": "v1" }, { "created": "Wed, 26 Mar 2014 13:17:39 GMT", "version": "v2" } ]
2014-03-27
[ [ "Pietracaprina", "Andrea", "" ], [ "Pucci", "Geppino", "" ], [ "Silvestri", "Francesco", "" ], [ "Vandin", "Fabio", "" ] ]
We present space-efficient parallel strategies for two fundamental combinatorial search problems, namely, backtrack search and branch-and-bound, both involving the visit of an $n$-node tree of height $h$ under the assumption that a node can be accessed only through its father or its children. For both problems we propose efficient algorithms that run on a $p$-processor distributed-memory machine. For backtrack search, we give a deterministic algorithm running in $O(n/p+h\log p)$ time, and a Las Vegas algorithm requiring optimal $O(n/p+h)$ time, with high probability. Building on the backtrack search algorithm, we also derive a Las Vegas algorithm for branch-and-bound which runs in $O((n/p+h\log p \log n)h\log^2 n)$ time, with high probability. A remarkable feature of our algorithms is the use of only constant space per processor, which constitutes a significant improvement upon previous algorithms whose space requirements per processor depend on the (possibly huge) tree to be explored.
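For intuition, the access model in this abstract (a node reachable only through its father or its children) admits a sequential depth-first visit with O(1) extra state beyond the tree itself, which is the property the paper's algorithms preserve per processor. A minimal sequential sketch, not the paper's parallel algorithms; the `children`/`father` accessors are hypothetical:

```python
def backtrack_visit(root, children, father, visit):
    """Depth-first visit using only father/child navigation and O(1)
    extra state (no explicit stack). `children(v)` returns v's ordered
    children; `father(v)` returns its parent (None for the root)."""
    node, came_from = root, None
    while node is not None:
        kids = children(node)
        if came_from is None:              # first arrival: visit the node
            visit(node)
            if kids:                       # descend to the first child
                node, came_from = kids[0], None
                continue
        else:                              # returning from child `came_from`
            i = kids.index(came_from)
            if i + 1 < len(kids):          # descend to the next sibling
                node, came_from = kids[i + 1], None
                continue
        came_from, node = node, father(node)  # all children done: go up
```

The constant-space property comes from remembering only the current node and the child just returned from, rather than a stack proportional to the tree height.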
1705.06419
Myoungsoo Jung
Myoungsoo Jung, Jie Zhang, Ahmed Abulila, Miryeong Kwon, Narges Shahidi, John Shalf, Nam Sung Kim and Mahmut Kandemir
SimpleSSD: Modeling Solid State Drives for Holistic System Simulation
This paper has been accepted at IEEE Computer Architecture Letters (CAL)
null
10.1109/LCA.2017.2750658
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing solid state drive (SSD) simulators unfortunately lack hardware and/or software architecture models. Consequently, they are far from capturing the critical features of contemporary SSD devices. More importantly, while the performance of modern systems that adopt SSDs can vary based on their numerous internal design parameters and storage-level configurations, a full system simulation with traditional SSD models often requires unreasonably long runtimes and excessive computational resources. In this work, we propose SimpleSSD, a high-fidelity simulator that models all detailed characteristics of hardware and software, while simplifying the nondescript features of storage internals. In contrast to existing SSD simulators, SimpleSSD can easily be integrated into publicly available full system simulators. In addition, it can accommodate a complete storage stack and evaluate the performance of SSDs along with diverse memory technologies and microarchitectures. Thus, it facilitates simulations that explore the full design space at different levels of system abstraction.
[ { "created": "Thu, 18 May 2017 05:08:34 GMT", "version": "v1" }, { "created": "Thu, 14 Sep 2017 14:28:11 GMT", "version": "v2" } ]
2017-09-15
[ [ "Jung", "Myoungsoo", "" ], [ "Zhang", "Jie", "" ], [ "Abulila", "Ahmed", "" ], [ "Kwon", "Miryeong", "" ], [ "Shahidi", "Narges", "" ], [ "Shalf", "John", "" ], [ "Kim", "Nam Sung", "" ], [ "Kandemir", "Mahmut", "" ] ]
Existing solid state drive (SSD) simulators unfortunately lack hardware and/or software architecture models. Consequently, they are far from capturing the critical features of contemporary SSD devices. More importantly, while the performance of modern systems that adopt SSDs can vary based on their numerous internal design parameters and storage-level configurations, a full system simulation with traditional SSD models often requires unreasonably long runtimes and excessive computational resources. In this work, we propose SimpleSSD, a high-fidelity simulator that models all detailed characteristics of hardware and software, while simplifying the nondescript features of storage internals. In contrast to existing SSD simulators, SimpleSSD can easily be integrated into publicly available full system simulators. In addition, it can accommodate a complete storage stack and evaluate the performance of SSDs along with diverse memory technologies and microarchitectures. Thus, it facilitates simulations that explore the full design space at different levels of system abstraction.
2311.18170
Aditya Powari
Aditya Powari and Ozgur B. Akan
Odor Intensity Shift Keying (OISK) and Channel Capacity of Odor-based Molecular Communications in Internet of Everything
null
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular communication is a new and active area of research that has created a paradigm shift in the way a communication system is perceived. An artificial molecular communication network is created using biological molecules for encoding, transmitting, and decoding the symbols that convey information. In addition to typical biological molecules, we also explore other classes of molecules that possess unique distinctive features which can potentially be exploited to establish reliable communications. Odor molecules are one such class: they possess several distinctive features, such as intensity and hedonic tone, which provide a basis for conveying information in an olfactory communication system. In this work, we investigate the information and communication theory (ICT) perspective of olfactory communications by evaluating the channel capacity of an odor molecular communication (OMC) system under a novel modulation scheme, odor intensity shift keying (OISK), in which information is conveyed through the intensity level of an odor. Furthermore, we also analyse the effects of critical parameters, such as temperature and noise, on the achievable channel capacity, providing insight into the resilience of the proposed OMC system to such anomalies.
[ { "created": "Thu, 30 Nov 2023 01:19:42 GMT", "version": "v1" } ]
2023-12-01
[ [ "Powari", "Aditya", "" ], [ "Akan", "Ozgur B.", "" ] ]
Molecular communication is a new and active area of research that has created a paradigm shift in the way a communication system is perceived. An artificial molecular communication network is created using biological molecules for encoding, transmitting, and decoding the symbols that convey information. In addition to typical biological molecules, we also explore other classes of molecules that possess unique distinctive features which can potentially be exploited to establish reliable communications. Odor molecules are one such class: they possess several distinctive features, such as intensity and hedonic tone, which provide a basis for conveying information in an olfactory communication system. In this work, we investigate the information and communication theory (ICT) perspective of olfactory communications by evaluating the channel capacity of an odor molecular communication (OMC) system under a novel modulation scheme, odor intensity shift keying (OISK), in which information is conveyed through the intensity level of an odor. Furthermore, we also analyse the effects of critical parameters, such as temperature and noise, on the achievable channel capacity, providing insight into the resilience of the proposed OMC system to such anomalies.
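The abstract above evaluates the channel capacity of an OMC system under OISK. Once the intensity levels and the noise are discretized into a transition matrix, the capacity of the resulting discrete memoryless channel can be computed numerically with the standard Blahut-Arimoto algorithm. A minimal sketch; the transition matrices used below are illustrative, not the paper's odor channel model:

```python
import math

def dmc_capacity(P, iters=200):
    """Capacity (bits per channel use) of a discrete memoryless channel
    via Blahut-Arimoto. P[x][y] is the probability of receiving output y
    when intensity level x is sent (rows sum to 1)."""
    nx, ny = len(P), len(P[0])
    q = [1.0 / nx] * nx                       # input distribution, start uniform
    for _ in range(iters):
        r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        w = []
        for x in range(nx):                   # q'(x) ∝ q(x) exp(D(P(.|x) || r))
            s = sum(P[x][y] * math.log(P[x][y] / r[y])
                    for y in range(ny) if P[x][y] > 0)
            w.append(q[x] * math.exp(s))
        z = sum(w)
        q = [v / z for v in w]
    # mutual information of the converged input distribution, in bits
    r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    return sum(q[x] * P[x][y] * math.log2(P[x][y] / r[y])
               for x in range(nx) for y in range(ny) if P[x][y] > 0)
```

Temperature- or noise-dependent effects would enter through the transition matrix itself, so the capacity's sensitivity to such parameters can be explored by sweeping the matrix entries.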
1902.00193
Afshin Rahimi
Afshin Rahimi, Yuan Li and Trevor Cohn
Massively Multilingual Transfer for NER
The first and the second author have equally contributed to this work
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cross-lingual transfer, NLP models over one or more source languages are applied to a low-resource target language. While most prior work has used a single source model or a few carefully selected models, here we consider a `massive' setting with many such models. This setting raises the problem of poor transfer, particularly from distant languages. We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively. Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of the single best individual model.
[ { "created": "Fri, 1 Feb 2019 05:49:45 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 07:25:18 GMT", "version": "v2" }, { "created": "Tue, 4 Jun 2019 04:40:53 GMT", "version": "v3" }, { "created": "Wed, 5 Jun 2019 01:30:40 GMT", "version": "v4" } ]
2019-06-06
[ [ "Rahimi", "Afshin", "" ], [ "Li", "Yuan", "" ], [ "Cohn", "Trevor", "" ] ]
In cross-lingual transfer, NLP models over one or more source languages are applied to a low-resource target language. While most prior work has used a single source model or a few carefully selected models, here we consider a `massive' setting with many such models. This setting raises the problem of poor transfer, particularly from distant languages. We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively. Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of the single best individual model.
2312.07158
Yuwei Han
Yuwei Han, Yuni Lai, Yulin Zhu and Kai Zhou
Cost Aware Untargeted Poisoning Attack against Graph Neural Networks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) have become widely used in the field of graph mining. However, these networks are vulnerable to structural perturbations. While many research efforts have focused on analyzing vulnerability through poisoning attacks, we have identified an inefficiency in current attack losses: they steer the attack strategy towards modifying edges that target misclassified or resilient nodes, wasting structural adversarial perturbation. To address this issue, we propose a novel attack loss framework called the Cost Aware Poisoning Attack (CA-attack), which improves the allocation of the attack budget by dynamically considering the classification margins of nodes. Specifically, it prioritizes nodes with smaller positive margins while postponing nodes with negative margins. Our experiments demonstrate that the proposed CA-attack significantly enhances existing attack strategies.
[ { "created": "Tue, 12 Dec 2023 10:54:02 GMT", "version": "v1" } ]
2023-12-13
[ [ "Han", "Yuwei", "" ], [ "Lai", "Yuni", "" ], [ "Zhu", "Yulin", "" ], [ "Zhou", "Kai", "" ] ]
Graph Neural Networks (GNNs) have become widely used in the field of graph mining. However, these networks are vulnerable to structural perturbations. While many research efforts have focused on analyzing vulnerability through poisoning attacks, we have identified an inefficiency in current attack losses: they steer the attack strategy towards modifying edges that target misclassified or resilient nodes, wasting structural adversarial perturbation. To address this issue, we propose a novel attack loss framework called the Cost Aware Poisoning Attack (CA-attack), which improves the allocation of the attack budget by dynamically considering the classification margins of nodes. Specifically, it prioritizes nodes with smaller positive margins while postponing nodes with negative margins. Our experiments demonstrate that the proposed CA-attack significantly enhances existing attack strategies.
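The margin-based budget allocation described in the abstract above can be sketched in a few lines. The function names and the ordering of negative-margin nodes are assumptions for illustration, not the paper's exact procedure:

```python
def margin(probs, label):
    """Classification margin of one node: probability of the true class
    minus the best other-class probability. Negative means the node is
    already misclassified."""
    best_other = max(p for c, p in enumerate(probs) if c != label)
    return probs[label] - best_other

def budget_order(prob_rows, labels):
    """Order node indices for attack-budget allocation: nodes with the
    smallest positive margins first (cheapest to flip), negative-margin
    nodes postponed to the end."""
    ms = [margin(p, y) for p, y in zip(prob_rows, labels)]
    pos = sorted((i for i, m in enumerate(ms) if m >= 0), key=lambda i: ms[i])
    neg = sorted((i for i, m in enumerate(ms) if m < 0), key=lambda i: -ms[i])
    return pos + neg
```

An attacker would recompute this ordering as the GNN's predictions change after each perturbation, which is the "dynamic" aspect the abstract mentions.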
2103.14456
Nasir Mehmood Minhas
Nasir Mehmood Minhas
Authorship ethics: an overview of research on the state of practice
10 pages, 6 tables, paper is accepted in ICSE 2021 Workshop SEthics (2nd Workshop on Ethics in Software Engineering Research and Practice)
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Authorship ethics is a central topic of discussion in research ethics fora. There are various guidelines for authorship (i.e., naming and order), and it is not easy to decide authorship in the presence of varying guidelines. This paper gives an overview of research on authorship practices and issues. It presents a review of 16 empirical research papers published between 2014 and 2020. The objective is to learn how various research disciplines handle authorship: what are the authorship practices in various research disciplines, and what are the issues associated with these practices?
[ { "created": "Thu, 25 Mar 2021 17:04:43 GMT", "version": "v1" } ]
2021-03-29
[ [ "Minhas", "Nasir Mehmood", "" ] ]
Authorship ethics is a central topic of discussion in research ethics fora. There are various guidelines for authorship (i.e., naming and order), and it is not easy to decide authorship in the presence of varying guidelines. This paper gives an overview of research on authorship practices and issues. It presents a review of 16 empirical research papers published between 2014 and 2020. The objective is to learn how various research disciplines handle authorship: what are the authorship practices in various research disciplines, and what are the issues associated with these practices?
2011.05538
Shiguang Liu Prof.
Shiguang Liu, Dinesh Manocha
Sound Synthesis, Propagation, and Rendering: A Survey
27 pages
null
null
null
cs.SD cs.GR
http://creativecommons.org/licenses/by/4.0/
Sound, as a crucial sensory channel, plays a vital role in improving the realism and immersiveness of a virtual environment, second only to vision in importance. Sound can provide important cues such as directionality and spatial size. This paper gives a broad overview of research on sound simulation in virtual reality, games, multimedia, and computer-aided design. We first survey various sound synthesis methods, including harmonic synthesis, texture synthesis, spectral analysis, and physics-based synthesis. Then, we summarize popular sound propagation techniques, namely wave-based methods, geometric-based methods, and hybrid methods. Next, sound rendering methods are reviewed. We further discuss the latest deep learning based sound simulation approaches. Finally, we point to some future directions for this field. To the best of our knowledge, this is the first attempt to provide a comprehensive summary of sound research in the field of computer graphics.
[ { "created": "Wed, 11 Nov 2020 04:08:38 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 14:04:36 GMT", "version": "v2" }, { "created": "Mon, 30 Nov 2020 10:25:51 GMT", "version": "v3" }, { "created": "Wed, 30 Dec 2020 09:27:55 GMT", "version": "v4" }, { "created": "Tue, 4 May 2021 03:11:49 GMT", "version": "v5" } ]
2021-05-05
[ [ "Liu", "Shiguang", "" ], [ "Manocha", "Dinesh", "" ] ]
Sound, as a crucial sensory channel, plays a vital role in improving the realism and immersiveness of a virtual environment, second only to vision in importance. Sound can provide important cues such as directionality and spatial size. This paper gives a broad overview of research on sound simulation in virtual reality, games, multimedia, and computer-aided design. We first survey various sound synthesis methods, including harmonic synthesis, texture synthesis, spectral analysis, and physics-based synthesis. Then, we summarize popular sound propagation techniques, namely wave-based methods, geometric-based methods, and hybrid methods. Next, sound rendering methods are reviewed. We further discuss the latest deep learning based sound simulation approaches. Finally, we point to some future directions for this field. To the best of our knowledge, this is the first attempt to provide a comprehensive summary of sound research in the field of computer graphics.
1808.02180
Dacheng Tao
Fengxiang He, Tongliang Liu, Geoffrey I Webb, and Dacheng Tao
Instance-Dependent PU Learning by Bayesian Optimal Relabeling
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of $X$ conditional on $Y = 1$, where $X$ stands for the feature and $Y$ for the label. Most existing algorithms are optimally designed under this assumption. However, in many real-world applications, the observed positive examples depend on the conditional probability $P(Y = 1|X)$ and are therefore sampled in a biased manner. In this paper, we assume that a positive example with a higher $P(Y = 1|X)$ is more likely to be labelled, and we propose a probabilistic-gap based PU learning algorithm. Specifically, by treating the unlabelled data as noisy negative examples, we can automatically label a group of positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier, with a consistency guarantee. The relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. The proposed algorithm is model-free and thus does not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets.
[ { "created": "Tue, 7 Aug 2018 01:47:57 GMT", "version": "v1" }, { "created": "Tue, 3 Mar 2020 02:47:49 GMT", "version": "v2" } ]
2020-03-04
[ [ "He", "Fengxiang", "" ], [ "Liu", "Tongliang", "" ], [ "Webb", "Geoffrey I", "" ], [ "Tao", "Dacheng", "" ] ]
When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of $X$ conditional on $Y = 1$, where $X$ stands for the feature and $Y$ for the label. Most existing algorithms are optimally designed under this assumption. However, in many real-world applications, the observed positive examples depend on the conditional probability $P(Y = 1|X)$ and are therefore sampled in a biased manner. In this paper, we assume that a positive example with a higher $P(Y = 1|X)$ is more likely to be labelled, and we propose a probabilistic-gap based PU learning algorithm. Specifically, by treating the unlabelled data as noisy negative examples, we can automatically label a group of positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier, with a consistency guarantee. The relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. The proposed algorithm is model-free and thus does not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets.
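The relabelling idea in the abstract above (treat unlabelled data as noisy negatives, then automatically label only the confident examples) can be illustrated with a toy threshold rule. The thresholds and function name here are illustrative assumptions; the paper's probabilistic-gap criterion carries a consistency guarantee that this sketch does not:

```python
def relabel_confident(scores, low=0.2, high=0.8):
    """Toy sketch of confidence-based relabelling for PU learning:
    unlabelled examples whose estimated P(Y=1|X) is confidently high
    become positives (1), confidently low become negatives (0), and the
    rest stay unlabelled (None). Thresholds are illustrative only."""
    out = []
    for s in scores:
        if s >= high:
            out.append(1)
        elif s <= low:
            out.append(0)
        else:
            out.append(None)
    return out
```

The relabelled subset is biased toward easy examples, which is why the paper follows this step with kernel mean matching to correct the domain shift.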
2112.12773
Gizem Gezici
Gizem Gezici
Customising Ranking Models for Enterprise Search on Bilingual Click-Through Dataset
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we describe the process of establishing an end-to-end system for enterprise search on a bilingual click-through dataset. The first part of the paper presents the high-level workflow of the system. The second part elaborates on the ranking models used to improve the search results in the vertical search of technical documents in the enterprise domain. Throughout the paper, we describe how we combine methods from the IR literature. Finally, in the last part of the paper, we report our results for different ranking algorithms using $NDCG@k$, where $k$ is the cut-off value.
[ { "created": "Thu, 23 Dec 2021 18:48:35 GMT", "version": "v1" } ]
2021-12-24
[ [ "Gezici", "Gizem", "" ] ]
In this work, we describe the process of establishing an end-to-end system for enterprise search on a bilingual click-through dataset. The first part of the paper presents the high-level workflow of the system. The second part elaborates on the ranking models used to improve the search results in the vertical search of technical documents in the enterprise domain. Throughout the paper, we describe how we combine methods from the IR literature. Finally, in the last part of the paper, we report our results for different ranking algorithms using $NDCG@k$, where $k$ is the cut-off value.
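For reference, the $NDCG@k$ metric mentioned in the abstract above can be computed as follows. This sketch uses the common $2^{rel}-1$ gain with a logarithmic discount, which may differ from the exact variant used in the paper:

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k relevance grades,
    with gain 2^rel - 1 and discount log2(rank + 1) (1-based ranks)."""
    return sum((2 ** r - 1) / math.log2(i + 2)
               for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal
    (relevance-sorted) ranking; 0 if no document is relevant."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0
```

With click-through data, the relevance grades would typically be derived from click counts or dwell time rather than editorial judgments.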
2110.11984
Corinna Coupette
Corinna Coupette, Dirk Hartung, Janis Beckedorf, Maximilian Böther, Daniel Martin Katz
Law Smells: Defining and Detecting Problematic Patterns in Legal Drafting
36 pages, 11 figures
null
null
null
cs.IR cs.CL cs.CY cs.SE cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building on the computer science concept of code smells, we initiate the study of law smells, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples (duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession), we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. We introduce text-based and graph-based methods to identify instances of law smells, confirming their utility in practice using the United States Code as a test case. Our work demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of legal code, thus drawing attention to an understudied area in the intersection of law and computer science and highlighting the potential of computational legal drafting.
[ { "created": "Fri, 15 Oct 2021 06:37:13 GMT", "version": "v1" } ]
2021-10-26
[ [ "Coupette", "Corinna", "" ], [ "Hartung", "Dirk", "" ], [ "Beckedorf", "Janis", "" ], [ "Böther", "Maximilian", "" ], [ "Katz", "Daniel Martin", "" ] ]
Building on the computer science concept of code smells, we initiate the study of law smells, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples (duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession), we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. We introduce text-based and graph-based methods to identify instances of law smells, confirming their utility in practice using the United States Code as a test case. Our work demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of legal code, thus drawing attention to an understudied area in the intersection of law and computer science and highlighting the potential of computational legal drafting.
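To give a flavor of the text-based detection methods the abstract above mentions, the "duplicated phrase" smell can be approximated with a word n-gram counter. The parameters below are illustrative assumptions; the paper's actual method is richer:

```python
from collections import Counter

def duplicated_phrases(text, n=4, min_count=2):
    """Toy detector for the 'duplicated phrase' law smell: return word
    n-grams that occur at least `min_count` times in the text, with
    their counts. Parameters n and min_count are illustrative only."""
    words = text.lower().split()
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {" ".join(g): c for g, c in grams.items() if c >= min_count}
```

On a real corpus such as the United States Code, one would add tokenization, stop-phrase filtering, and a way to collapse overlapping n-grams into maximal repeated phrases.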
1612.05419
Sumedh Tirodkar
Sumedh Tirodkar and Sundar Vishwanathan
Maximum Matching on Trees in the Online Preemptive and the Incremental Dynamic Graph Models
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the Maximum Cardinality Matching (MCM) and the Maximum Weight Matching (MWM) problems, on trees and on some special classes of graphs, in the Online Preemptive and the Incremental Dynamic Graph models. In the {\em Online Preemptive} model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded, and all rejections are permanent. In this model, the complexity of the problems is settled for deterministic algorithms. Epstein et al. gave a $5.356$-competitive randomized algorithm for MWM, and also proved a lower bound of $1.693$ for MCM. The same lower bound applies for MWM. In this paper we show that some of these results can be improved in the case of trees and some special classes of graphs. In the online preemptive model, we present a $64/33$-competitive (in expectation) randomized algorithm for MCM on trees. Inspired by the above-mentioned algorithm for MCM, we present the main result of the paper, a randomized algorithm for MCM with a "worst case" update time of $O(1)$, in the incremental dynamic graph model, which is $3/2$-approximate (in expectation) on trees, and $1.8$-approximate (in expectation) on general graphs with maximum degree $3$. Note that this algorithm works only against an oblivious adversary. Hence, we derandomize this algorithm, and give a $(3/2 + \epsilon)$-approximate deterministic algorithm for MCM on trees, with an amortized update time of $O(1/\epsilon)$. We also present a minor result for MWM in the online preemptive model, a $3$-competitive (in expectation) randomized algorithm on growing trees (where the input revealed up to any stage is always a tree, i.e., a new edge never connects two disconnected trees).
[ { "created": "Fri, 16 Dec 2016 10:38:56 GMT", "version": "v1" }, { "created": "Thu, 18 May 2017 07:54:00 GMT", "version": "v2" }, { "created": "Sat, 20 Jan 2018 02:22:26 GMT", "version": "v3" } ]
2018-01-23
[ [ "Tirodkar", "Sumedh", "" ], [ "Vishwanathan", "Sundar", "" ] ]
We study the Maximum Cardinality Matching (MCM) and the Maximum Weight Matching (MWM) problems, on trees and on some special classes of graphs, in the Online Preemptive and the Incremental Dynamic Graph models. In the {\em Online Preemptive} model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded, and all rejections are permanent. In this model, the complexity of the problems is settled for deterministic algorithms. Epstein et al. gave a $5.356$-competitive randomized algorithm for MWM, and also proved a lower bound of $1.693$ for MCM. The same lower bound applies for MWM. In this paper we show that some of these results can be improved in the case of trees and some special classes of graphs. In the online preemptive model, we present a $64/33$-competitive (in expectation) randomized algorithm for MCM on trees. Inspired by the above-mentioned algorithm for MCM, we present the main result of the paper, a randomized algorithm for MCM with a "worst case" update time of $O(1)$, in the incremental dynamic graph model, which is $3/2$-approximate (in expectation) on trees, and $1.8$-approximate (in expectation) on general graphs with maximum degree $3$. Note that this algorithm works only against an oblivious adversary. Hence, we derandomize this algorithm, and give a $(3/2 + \epsilon)$-approximate deterministic algorithm for MCM on trees, with an amortized update time of $O(1/\epsilon)$. We also present a minor result for MWM in the online preemptive model, a $3$-competitive (in expectation) randomized algorithm on growing trees (where the input revealed up to any stage is always a tree, i.e., a new edge never connects two disconnected trees).
2210.03119
Roberto Barros
Roberto Souto Maior de Barros, Silas Garrido Teixeira de Carvalho Santos, Jean Paul Barddal
Evaluating k-NN in the Classification of Data Streams with Concept Drift
25 pages, 10 tables, 7 figures + 30 pages appendix
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data streams are often defined as large amounts of data flowing continuously at high speed. Moreover, these data are likely subject to changes in data distribution, known as concept drift. For these reasons, learning from streams is often online and under restrictions of memory consumption and run-time. Although many classification algorithms exist, most of the works published in the area use Naive Bayes (NB) and Hoeffding Trees (HT) as base learners in their experiments. This article proposes an in-depth evaluation of k-Nearest Neighbors (k-NN) as a candidate for classifying data streams subject to concept drift. It also analyses the time complexity and the two main parameters of k-NN, i.e., the number of nearest neighbors used for predictions (k) and the window size (w). We compare different parameter values for k-NN and contrast it with NB and HT, both with and without a drift detector (RDDM), on many datasets. We formulated and answered 10 research questions, which led to the conclusion that k-NN is a worthy candidate for data stream classification, especially when the run-time constraint is not too restrictive.
[ { "created": "Thu, 6 Oct 2022 00:17:13 GMT", "version": "v1" } ]
2022-10-10
[ [ "de Barros", "Roberto Souto Maior", "" ], [ "Santos", "Silas Garrido Teixeira de Carvalho", "" ], [ "Barddal", "Jean Paul", "" ] ]
Data streams are often defined as large amounts of data flowing continuously at high speed. Moreover, these data are likely subject to changes in data distribution, known as concept drift. For these reasons, learning from streams is often online and under restrictions of memory consumption and run-time. Although many classification algorithms exist, most of the works published in the area use Naive Bayes (NB) and Hoeffding Trees (HT) as base learners in their experiments. This article proposes an in-depth evaluation of k-Nearest Neighbors (k-NN) as a candidate for classifying data streams subject to concept drift. It also analyses the time complexity and the two main parameters of k-NN, i.e., the number of nearest neighbors used for predictions (k) and the window size (w). We compare different parameter values for k-NN and contrast it with NB and HT, both with and without a drift detector (RDDM), on many datasets. We formulated and answered 10 research questions, which led to the conclusion that k-NN is a worthy candidate for data stream classification, especially when the run-time constraint is not too restrictive.
1901.02256
Tawfik Masrour
Ibtissam El Hassani, Choumicha El Mazgualdi and Tawfik Masrour
Artificial Intelligence and Machine Learning to Predict and Improve Efficiency in Manufacturing Industry
8 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The overall equipment effectiveness (OEE) is a widely used performance measurement metric. Its calculation allows managers to identify the main losses that reduce machine effectiveness and then take the necessary decisions to improve the situation. However, this calculation is done a posteriori, which is often too late. In the present research, we implemented different Machine Learning algorithms, namely Support Vector Machine, Optimized Support Vector Machine (using a Genetic Algorithm), Random Forest, XGBoost and Deep Learning, to predict the OEE value. The data used to train our models was provided by an automotive cable production industry. The results show that Deep Learning and Random Forest are more accurate and present better performance for the prediction of the overall equipment effectiveness in our case study.
[ { "created": "Tue, 8 Jan 2019 11:12:37 GMT", "version": "v1" }, { "created": "Sun, 3 Feb 2019 17:26:14 GMT", "version": "v2" } ]
2019-02-05
[ [ "Hassani", "Ibtissam El", "" ], [ "Mazgualdi", "Choumicha El", "" ], [ "Masrour", "Tawfik", "" ] ]
The overall equipment effectiveness (OEE) is a widely used performance measurement metric. Its calculation allows managers to identify the main losses that reduce machine effectiveness and then take the necessary decisions to improve the situation. However, this calculation is done a posteriori, which is often too late. In the present research, we implemented different Machine Learning algorithms, namely Support Vector Machine, Optimized Support Vector Machine (using a Genetic Algorithm), Random Forest, XGBoost and Deep Learning, to predict the OEE value. The data used to train our models was provided by an automotive cable production industry. The results show that Deep Learning and Random Forest are more accurate and present better performance for the prediction of the overall equipment effectiveness in our case study.
1807.08738
Krzysztof Nowicki
Krzysztof Nowicki
Random Sampling Applied to the MST Problem in the Node Congested Clique Model
simplified and corrected version
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Congested Clique model proposed by Lotker et al. [SICOMP'05] was introduced in order to provide a simple abstraction for overlay networks. Congested Clique is a model of distributed (or parallel) computing in which there are $n$ players with unique identifiers from the set $[n]$, which perform computations in synchronous rounds. Each round consists of a phase of unlimited local computation and a communication phase. While communicating, each pair of players is allowed to exchange a single message of size $O(\log n)$ bits. Since, in a single round, each player can communicate with even $\Theta(n)$ other players, the model seems to be too powerful to imitate the bandwidth restrictions emerging from the underlying network. In this paper we study a restricted version of the Congested Clique model, the Node Congested Clique (NCC) model, proposed by Augustine et al. [arxiv1805], in which a player is allowed to send/receive only $O(\log n)$ messages per communication phase. More precisely, we provide communication primitives that improve the round complexity of the MST algorithm by Augustine et al. [arxiv1805] to $O(\log^3 n)$ rounds, and give an $O(\log^2 n)$ round algorithm solving the Spanning Forest (SF) problem. Furthermore, we present an approach based on the random sampling technique by Karger et al. [JACM'95] that gives an $O(\log^2 n \log \Delta / \log \log n)$ round algorithm for the Minimum Spanning Forest (MSF) problem. Besides the faster SF/MSF algorithms, we consider the key contributions to be: an efficient implementation of basic protocols in the NCC model; a tighter analysis of a special case of the sampling approach by Karger et al. [JACM'95] and related results by Pemmaraju and Sardeshmukh [FSTTCS'16]; and an efficient k-sparse recovery data structure that requires $O((k + \log n)\log n \log k)$ bits and provides a recovery procedure that requires $O((k + \log n)\log k)$ steps.
[ { "created": "Mon, 23 Jul 2018 17:35:28 GMT", "version": "v1" }, { "created": "Mon, 27 Aug 2018 14:38:16 GMT", "version": "v2" } ]
2018-08-28
[ [ "Nowicki", "Krzysztof", "" ] ]
The Congested Clique model proposed by Lotker et al. [SICOMP'05] was introduced in order to provide a simple abstraction for overlay networks. Congested Clique is a model of distributed (or parallel) computing in which there are $n$ players with unique identifiers from the set $[n]$, which perform computations in synchronous rounds. Each round consists of a phase of unlimited local computation and a communication phase. While communicating, each pair of players is allowed to exchange a single message of size $O(\log n)$ bits. Since, in a single round, each player can communicate with even $\Theta(n)$ other players, the model seems to be too powerful to imitate the bandwidth restrictions emerging from the underlying network. In this paper we study a restricted version of the Congested Clique model, the Node Congested Clique (NCC) model, proposed by Augustine et al. [arxiv1805], in which a player is allowed to send/receive only $O(\log n)$ messages per communication phase. More precisely, we provide communication primitives that improve the round complexity of the MST algorithm by Augustine et al. [arxiv1805] to $O(\log^3 n)$ rounds, and give an $O(\log^2 n)$ round algorithm solving the Spanning Forest (SF) problem. Furthermore, we present an approach based on the random sampling technique by Karger et al. [JACM'95] that gives an $O(\log^2 n \log \Delta / \log \log n)$ round algorithm for the Minimum Spanning Forest (MSF) problem. Besides the faster SF/MSF algorithms, we consider the key contributions to be: an efficient implementation of basic protocols in the NCC model; a tighter analysis of a special case of the sampling approach by Karger et al. [JACM'95] and related results by Pemmaraju and Sardeshmukh [FSTTCS'16]; and an efficient k-sparse recovery data structure that requires $O((k + \log n)\log n \log k)$ bits and provides a recovery procedure that requires $O((k + \log n)\log k)$ steps.
1503.06144
Sarai Mendoza Armenta
Sarai Mendoza-Armenta, Ian Dobson
Applying a formula for generator redispatch to damp interarea oscillations using synchrophasors
To appear in IEEE Transactions on Power Systems, accepted September 2015
IEEE Transactions on Power Systems, vol. 31, no. 4, July 2016, pp. 3119-3128
10.1109/TPWRS.2015.2485519
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
If an interarea oscillatory mode has insufficient damping, generator redispatch can be used to improve its damping. We explain and apply a new analytic formula for the modal sensitivity to rank the best pairs of generators to redispatch. The formula requires some dynamic power system data and we show how to obtain that data from synchrophasor measurements. The application of the formula to damp interarea modes is explained and illustrated with interarea modes of the New England 10-machine power system.
[ { "created": "Fri, 20 Mar 2015 16:21:08 GMT", "version": "v1" }, { "created": "Wed, 30 Sep 2015 21:31:49 GMT", "version": "v2" } ]
2016-11-23
[ [ "Mendoza-Armenta", "Sarai", "" ], [ "Dobson", "Ian", "" ] ]
If an interarea oscillatory mode has insufficient damping, generator redispatch can be used to improve its damping. We explain and apply a new analytic formula for the modal sensitivity to rank the best pairs of generators to redispatch. The formula requires some dynamic power system data and we show how to obtain that data from synchrophasor measurements. The application of the formula to damp interarea modes is explained and illustrated with interarea modes of the New England 10-machine power system.
2104.05046
Suyash Shandilya
Suyash Shandilya
Print Error Detection using Convolutional Neural Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses the need for an automated system for detecting print errors and the efficacy of Convolutional Neural Networks in such an application. We recognise the need for a dataset containing print error samples and propose a way to generate one artificially. We discuss the algorithms to generate such data along with the limitations and advantages of such an approach. Our final trained network gives a remarkable accuracy of 99.83\% in testing. We further evaluate how such efficiency was achieved and what modifications can be tested to further improve the results.
[ { "created": "Sun, 11 Apr 2021 16:30:17 GMT", "version": "v1" } ]
2021-04-13
[ [ "Shandilya", "Suyash", "" ] ]
This paper discusses the need for an automated system for detecting print errors and the efficacy of Convolutional Neural Networks in such an application. We recognise the need for a dataset containing print error samples and propose a way to generate one artificially. We discuss the algorithms to generate such data along with the limitations and advantages of such an approach. Our final trained network gives a remarkable accuracy of 99.83\% in testing. We further evaluate how such efficiency was achieved and what modifications can be tested to further improve the results.
2309.10908
Alicia Wolfe
Alicia P. Wolfe, Oliver Diamond, Brigitte Goeler-Slough, Remi Feuerman, Magdalena Kisielinska, Victoria Manfredi
Multicopy Reinforcement Learning Agents
Updates from earlier version: added a more basic "multiagent" algorithm to compare to and comparison graphs
null
null
null
cs.MA cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines a novel type of multi-agent problem, in which an agent makes multiple identical copies of itself in order to accomplish a single-agent task better or more efficiently. This strategy improves performance if the environment is noisy and the task is sometimes unachievable by a single agent copy. We propose a learning algorithm for this multicopy problem which takes advantage of the structure of the value function to efficiently learn how to balance the advantages and costs of adding additional copies.
[ { "created": "Tue, 19 Sep 2023 20:03:17 GMT", "version": "v1" }, { "created": "Mon, 6 May 2024 12:43:26 GMT", "version": "v2" } ]
2024-05-07
[ [ "Wolfe", "Alicia P.", "" ], [ "Diamond", "Oliver", "" ], [ "Goeler-Slough", "Brigitte", "" ], [ "Feuerman", "Remi", "" ], [ "Kisielinska", "Magdalena", "" ], [ "Manfredi", "Victoria", "" ] ]
This paper examines a novel type of multi-agent problem, in which an agent makes multiple identical copies of itself in order to accomplish a single-agent task better or more efficiently. This strategy improves performance if the environment is noisy and the task is sometimes unachievable by a single agent copy. We propose a learning algorithm for this multicopy problem which takes advantage of the structure of the value function to efficiently learn how to balance the advantages and costs of adding additional copies.
2307.10227
Joohyung Lee
Enrico Giunchiglia, Joohyung Lee, Vladimir Lifschitz, Hudson Turner
Causal Laws and Multi-Valued Fluents
7 pages, In Proceedings of Workshop on Nonmonotonic Reasoning, Action and Change (NRAC 2001)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper continues the line of work on representing properties of actions in nonmonotonic formalisms that stresses the distinction between being "true" and being "caused", as in the system of causal logic introduced by McCain and Turner and in the action language C proposed by Giunchiglia and Lifschitz. The only fluents directly representable in language C are truth-valued fluents, which is often inconvenient. We show that both causal logic and language C can be extended to allow values from arbitrary nonempty sets. Our extension of language C, called C+, also makes it possible to describe actions in terms of their attributes, which is important from the perspective of elaboration tolerance. We describe an embedding of C+ in causal theories with multi-valued constants, relate C+ to Pednault's action language ADL, and show how multi-valued constants can be eliminated in favor of Boolean constants.
[ { "created": "Sat, 15 Jul 2023 06:41:08 GMT", "version": "v1" } ]
2023-07-21
[ [ "Giunchiglia", "Enrico", "" ], [ "Lee", "Joohyung", "" ], [ "Lifschitz", "Vladimir", "" ], [ "Turner", "Hudson", "" ] ]
This paper continues the line of work on representing properties of actions in nonmonotonic formalisms that stresses the distinction between being "true" and being "caused", as in the system of causal logic introduced by McCain and Turner and in the action language C proposed by Giunchiglia and Lifschitz. The only fluents directly representable in language C are truth-valued fluents, which is often inconvenient. We show that both causal logic and language C can be extended to allow values from arbitrary nonempty sets. Our extension of language C, called C+, also makes it possible to describe actions in terms of their attributes, which is important from the perspective of elaboration tolerance. We describe an embedding of C+ in causal theories with multi-valued constants, relate C+ to Pednault's action language ADL, and show how multi-valued constants can be eliminated in favor of Boolean constants.
2003.07989
Simon Zhang
Simon Zhang, Mengbai Xiao and Hao Wang
GPU-Accelerated Computation of Vietoris-Rips Persistence Barcodes
36 pages, 15 figures. To be published in Symposium on Computational Geometry 2020
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computation of Vietoris-Rips persistence barcodes is both execution-intensive and memory-intensive. In this paper, we study the computational structure of Vietoris-Rips persistence barcodes, and identify several unique mathematical properties and algorithmic opportunities with connections to the GPU. Mathematically and empirically, we look into the properties of apparent pairs, which are independently identifiable persistence pairs comprising up to 99% of persistence pairs. We give theoretical upper and lower bounds of the apparent pair rate and model the average case. We also design massively parallel algorithms to take advantage of the very large number of simplices that can be processed independently of each other. Having identified these opportunities, we develop a GPU-accelerated software for computing Vietoris-Rips persistence barcodes, called Ripser++. The software achieves up to 30x speedup over the total execution time of the original Ripser and also reduces CPU-memory usage by up to 2.0x. We believe our GPU-acceleration based efforts open a new chapter for the advancement of topological data analysis in the post-Moore's Law era.
[ { "created": "Tue, 17 Mar 2020 23:57:37 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 14:14:06 GMT", "version": "v2" }, { "created": "Sat, 3 Oct 2020 03:49:34 GMT", "version": "v3" } ]
2020-10-06
[ [ "Zhang", "Simon", "" ], [ "Xiao", "Mengbai", "" ], [ "Wang", "Hao", "" ] ]
The computation of Vietoris-Rips persistence barcodes is both execution-intensive and memory-intensive. In this paper, we study the computational structure of Vietoris-Rips persistence barcodes, and identify several unique mathematical properties and algorithmic opportunities with connections to the GPU. Mathematically and empirically, we look into the properties of apparent pairs, which are independently identifiable persistence pairs comprising up to 99% of persistence pairs. We give theoretical upper and lower bounds of the apparent pair rate and model the average case. We also design massively parallel algorithms to take advantage of the very large number of simplices that can be processed independently of each other. Having identified these opportunities, we develop a GPU-accelerated software for computing Vietoris-Rips persistence barcodes, called Ripser++. The software achieves up to 30x speedup over the total execution time of the original Ripser and also reduces CPU-memory usage by up to 2.0x. We believe our GPU-acceleration based efforts open a new chapter for the advancement of topological data analysis in the post-Moore's Law era.
2209.00447
Ziyue Zhu Ms
Ziyue Zhu
Identifying Films with Noir Characteristics Using Audience's Tags on MovieLens
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the noir classification problem by exploring noir attributes and what films are likely to be regarded as noirish from the perspective of a wide Internet audience. We use a dataset consisting of more than 30,000 films with relevant tags added by users of MovieLens, a web-based recommendation system. Based on this data, we develop a statistical model to identify films with noir characteristics using these free-form tags. After retrieving information for describing films from tags, we implement a one-class nearest neighbors algorithm to recognize noirish films by learning from IMDb-labeled noirs. Our analysis demonstrates film noirs' close relationship with German Expressionism, French Poetic Realism, British thrillers, and American pre-code crime pictures, revealing the similarities and differences between neo noirs after 1960 and noirs of the classic period.
[ { "created": "Wed, 24 Aug 2022 17:08:54 GMT", "version": "v1" } ]
2022-09-02
[ [ "Zhu", "Ziyue", "" ] ]
We consider the noir classification problem by exploring noir attributes and what films are likely to be regarded as noirish from the perspective of a wide Internet audience. We use a dataset consisting of more than 30,000 films with relevant tags added by users of MovieLens, a web-based recommendation system. Based on this data, we develop a statistical model to identify films with noir characteristics using these free-form tags. After retrieving information for describing films from tags, we implement a one-class nearest neighbors algorithm to recognize noirish films by learning from IMDb-labeled noirs. Our analysis demonstrates film noirs' close relationship with German Expressionism, French Poetic Realism, British thrillers, and American pre-code crime pictures, revealing the similarities and differences between neo noirs after 1960 and noirs of the classic period.
2211.02760
David Rapado-Rincon
David Rapado Rincon, Eldert J. van Henten, Gert Kootstra
Development and evaluation of automated localisation and reconstruction of all fruits on tomato plants in a greenhouse based on multi-view perception and 3D multi-object tracking
null
null
10.1016/j.biosystemseng.2023.06.003
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
The ability to accurately represent and localise relevant objects is essential for robots to carry out tasks effectively. Traditional approaches, where robots simply capture an image, process that image to take an action, and then forget the information, have proven to struggle in the presence of occlusions. Methods using multi-view perception, which have the potential to address some of these problems, require a world model that guides the collection, integration and extraction of information from multiple viewpoints. Furthermore, constructing a generic representation that can be applied in various environments and tasks is a difficult challenge. In this paper, a novel approach for building generic representations in occluded agro-food environments using multi-view perception and 3D multi-object tracking is introduced. The method is based on a detection algorithm that generates partial point clouds for each detected object, followed by a 3D multi-object tracking algorithm that updates the representation over time. The accuracy of the representation was evaluated in a real-world environment, where successful representation and localisation of tomatoes in tomato plants were achieved, despite high levels of occlusion, with the total count of tomatoes estimated with a maximum error of 5.08% and the tomatoes tracked with an accuracy up to 71.47%. Novel tracking metrics were introduced, demonstrating that valuable insight into the errors in localising and representing the fruits can be provided by their use. This approach presents a novel solution for building representations in occluded agro-food environments, demonstrating potential to enable robots to perform tasks effectively in these challenging environments.
[ { "created": "Fri, 4 Nov 2022 21:51:53 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 19:04:44 GMT", "version": "v2" }, { "created": "Tue, 28 Nov 2023 11:44:16 GMT", "version": "v3" } ]
2023-11-29
[ [ "Rincon", "David Rapado", "" ], [ "van Henten", "Eldert J.", "" ], [ "Kootstra", "Gert", "" ] ]
The ability to accurately represent and localise relevant objects is essential for robots to carry out tasks effectively. Traditional approaches, where robots simply capture an image, process that image to take an action, and then forget the information, have proven to struggle in the presence of occlusions. Methods using multi-view perception, which have the potential to address some of these problems, require a world model that guides the collection, integration and extraction of information from multiple viewpoints. Furthermore, constructing a generic representation that can be applied in various environments and tasks is a difficult challenge. In this paper, a novel approach for building generic representations in occluded agro-food environments using multi-view perception and 3D multi-object tracking is introduced. The method is based on a detection algorithm that generates partial point clouds for each detected object, followed by a 3D multi-object tracking algorithm that updates the representation over time. The accuracy of the representation was evaluated in a real-world environment, where successful representation and localisation of tomatoes in tomato plants were achieved, despite high levels of occlusion, with the total count of tomatoes estimated with a maximum error of 5.08% and the tomatoes tracked with an accuracy up to 71.47%. Novel tracking metrics were introduced, demonstrating that valuable insight into the errors in localising and representing the fruits can be provided by their use. This approach presents a novel solution for building representations in occluded agro-food environments, demonstrating potential to enable robots to perform tasks effectively in these challenging environments.
2402.02141
Bo Yang
Bo Yang, Chen Wang, Xiaoshuang Ma, Beiping Song, Zhuang Liu and Fangde Sun
Zero-shot sketch-based remote sensing image retrieval based on multi-level and attention-guided tokenization
44 pages, 6 figures
Remote Sens. 2024, 16, 1653
10.3390/rs16101653
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effectively and efficiently retrieving images from remote sensing databases is a critical challenge in the realm of remote sensing big data. Utilizing hand-drawn sketches as retrieval inputs offers intuitive and user-friendly advantages, yet the potential of multi-level feature integration from sketches remains underexplored, leading to suboptimal retrieval performance. To address this gap, our study introduces a novel zero-shot, sketch-based retrieval method for remote sensing images, leveraging multi-level feature extraction, self-attention-guided tokenization and filtering, and cross-modality attention update. This approach employs only vision information and does not require semantic knowledge concerning the sketch and image. It starts by employing multi-level self-attention guided feature extraction to tokenize the query sketches, as well as self-attention feature extraction to tokenize the candidate images. It then employs cross-attention mechanisms to establish token correspondence between these two modalities, facilitating the computation of sketch-to-image similarity. Our method significantly outperforms existing sketch-based remote sensing image retrieval techniques, as evidenced by tests on multiple datasets. Notably, it also exhibits robust zero-shot learning capabilities and strong generalizability in handling unseen categories and novel remote sensing data. The method's scalability can be further enhanced by the pre-calculation of retrieval tokens for all candidate images in a database. This research underscores the significant potential of multi-level, attention-guided tokenization in cross-modal remote sensing image retrieval. For broader accessibility and research facilitation, we have made the code and dataset used in this study publicly available online. Code and dataset are available at https://github.com/Snowstormfly/Cross-modal-retrieval-MLAGT.
[ { "created": "Sat, 3 Feb 2024 13:11:14 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 12:15:57 GMT", "version": "v2" }, { "created": "Thu, 16 May 2024 03:00:22 GMT", "version": "v3" } ]
2024-05-20
[ [ "Yang", "Bo", "" ], [ "Wang", "Chen", "" ], [ "Ma", "Xiaoshuang", "" ], [ "Song", "Beiping", "" ], [ "Liu", "Zhuang", "" ], [ "Sun", "Fangde", "" ] ]
Effectively and efficiently retrieving images from remote sensing databases is a critical challenge in the realm of remote sensing big data. Utilizing hand-drawn sketches as retrieval inputs offers intuitive and user-friendly advantages, yet the potential of multi-level feature integration from sketches remains underexplored, leading to suboptimal retrieval performance. To address this gap, our study introduces a novel zero-shot, sketch-based retrieval method for remote sensing images, leveraging multi-level feature extraction, self-attention-guided tokenization and filtering, and cross-modality attention update. This approach employs only vision information and does not require semantic knowledge concerning the sketch and image. It starts by employing multi-level self-attention guided feature extraction to tokenize the query sketches, as well as self-attention feature extraction to tokenize the candidate images. It then employs cross-attention mechanisms to establish token correspondence between these two modalities, facilitating the computation of sketch-to-image similarity. Our method significantly outperforms existing sketch-based remote sensing image retrieval techniques, as evidenced by tests on multiple datasets. Notably, it also exhibits robust zero-shot learning capabilities and strong generalizability in handling unseen categories and novel remote sensing data. The method's scalability can be further enhanced by the pre-calculation of retrieval tokens for all candidate images in a database. This research underscores the significant potential of multi-level, attention-guided tokenization in cross-modal remote sensing image retrieval. For broader accessibility and research facilitation, we have made the code and dataset used in this study publicly available online. Code and dataset are available at https://github.com/Snowstormfly/Cross-modal-retrieval-MLAGT.
2204.07779
Renliang Sun
Renliang Sun and Xiaojun Wan
SimpleBERT: A Pre-trained Model That Learns to Generate Simple Words
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Pre-trained models are widely used in natural language processing tasks nowadays. However, in the specific field of text simplification, research on improving pre-trained models remains unexplored. In this work, we propose a continued pre-training method for text simplification. Specifically, we propose a new masked language modeling (MLM) mechanism, which does not mask words randomly but masks only simple words. The new mechanism makes the model learn to generate simple words. We use a small-scale simple text dataset for continued pre-training and employ two methods to identify simple words in the texts. We choose BERT, a representative pre-trained model, and continue pre-training it using our proposed method. Finally, we obtain SimpleBERT, which surpasses BERT in both lexical simplification and sentence simplification tasks and achieves state-of-the-art results on multiple datasets. Moreover, SimpleBERT can replace BERT in existing simplification models without modification.
[ { "created": "Sat, 16 Apr 2022 11:28:01 GMT", "version": "v1" } ]
2022-04-19
[ [ "Sun", "Renliang", "" ], [ "Wan", "Xiaojun", "" ] ]
Pre-trained models are widely used in natural language processing tasks nowadays. However, in the specific field of text simplification, research on improving pre-trained models remains unexplored. In this work, we propose a continued pre-training method for text simplification. Specifically, we propose a new masked language modeling (MLM) mechanism, which does not mask words randomly but masks only simple words. The new mechanism makes the model learn to generate simple words. We use a small-scale simple text dataset for continued pre-training and employ two methods to identify simple words in the texts. We choose BERT, a representative pre-trained model, and continue pre-training it using our proposed method. Finally, we obtain SimpleBERT, which surpasses BERT in both lexical simplification and sentence simplification tasks and achieves state-of-the-art results on multiple datasets. Moreover, SimpleBERT can replace BERT in existing simplification models without modification.
2204.07616
Vitor Guizilini
Vitor Guizilini, Rares Ambrus, Dian Chen, Sergey Zakharov, Adrien Gaidon
Multi-Frame Self-Supervised Depth with Transformers
Accepted to CVPR 2022 (correct project page)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-frame depth estimation improves over single-frame approaches by also leveraging geometric relationships between images via feature matching, in addition to learning appearance-based features. In this paper we revisit feature matching for self-supervised monocular depth estimation, and propose a novel transformer architecture for cost volume generation. We use depth-discretized epipolar sampling to select matching candidates, and refine predictions through a series of self- and cross-attention layers. These layers sharpen the matching probability between pixel features, improving over standard similarity metrics prone to ambiguities and local minima. The refined cost volume is decoded into depth estimates, and the whole pipeline is trained end-to-end from videos using only a photometric objective. Experiments on the KITTI and DDAD datasets show that our DepthFormer architecture establishes a new state of the art in self-supervised monocular depth estimation, and is even competitive with highly specialized supervised single-frame architectures. We also show that our learned cross-attention network yields representations transferable across datasets, increasing the effectiveness of pre-training strategies. Project page: https://sites.google.com/tri.global/depthformer
[ { "created": "Fri, 15 Apr 2022 19:04:57 GMT", "version": "v1" }, { "created": "Fri, 10 Jun 2022 21:56:34 GMT", "version": "v2" } ]
2022-06-14
[ [ "Guizilini", "Vitor", "" ], [ "Ambrus", "Rares", "" ], [ "Chen", "Dian", "" ], [ "Zakharov", "Sergey", "" ], [ "Gaidon", "Adrien", "" ] ]
Multi-frame depth estimation improves over single-frame approaches by also leveraging geometric relationships between images via feature matching, in addition to learning appearance-based features. In this paper we revisit feature matching for self-supervised monocular depth estimation, and propose a novel transformer architecture for cost volume generation. We use depth-discretized epipolar sampling to select matching candidates, and refine predictions through a series of self- and cross-attention layers. These layers sharpen the matching probability between pixel features, improving over standard similarity metrics prone to ambiguities and local minima. The refined cost volume is decoded into depth estimates, and the whole pipeline is trained end-to-end from videos using only a photometric objective. Experiments on the KITTI and DDAD datasets show that our DepthFormer architecture establishes a new state of the art in self-supervised monocular depth estimation, and is even competitive with highly specialized supervised single-frame architectures. We also show that our learned cross-attention network yields representations transferable across datasets, increasing the effectiveness of pre-training strategies. Project page: https://sites.google.com/tri.global/depthformer
2403.11408
Wei Duan
Wei Duan, Jie Lu, Yu Guang Wang, Junyu Xuan
Layer-diverse Negative Sampling for Graph Neural Networks
Published in Transactions on Machine Learning Research (03/2024)
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs, relying on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology and thus has strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
[ { "created": "Mon, 18 Mar 2024 01:48:50 GMT", "version": "v1" } ]
2024-03-19
[ [ "Duan", "Wei", "" ], [ "Lu", "Jie", "" ], [ "Wang", "Yu Guang", "" ], [ "Xuan", "Junyu", "" ] ]
Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs, relying on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology and thus has strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
1611.06905
F. Richard Yu
Zhexiong Wei, F. Richard Yu, Helen Tang, Chengchao Liang, and Qiao Yan
Security Schemes in Vehicular Ad hoc Networks with Cognitive Radios
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular Ad hoc NETworks (VANETs), as a basic infrastructure, can facilitate applications and services of connected vehicles (CVs). Cognitive radio (CR) technology is an effective supplement and enhancement for VANETs, as it can mitigate the scarcity of spectrum resources in VANETs. Although CR-VANETs can utilize the unused licensed spectrum effectively, the distributed nature of CR-VANETs may open the door to various attacks, such as the spectrum sensing data falsification attack. In this paper, we propose a joint RSU- and vehicle-based lightweight cloud for CR-VANETs. Based on this cloud computing model, we propose a new service named Spectrum Sensing as a Service (SSaaS), which can perform cooperative spectrum sensing in CR-VANETs with cloud computing assistance to secure the spectrum sensing procedure. As a result, a reliable service can be obtained in CR-VANETs. Simulation results show that cloud computing in CR-VANETs can effectively reduce latency and improve the security of CR-VANETs.
[ { "created": "Mon, 21 Nov 2016 17:07:42 GMT", "version": "v1" } ]
2016-11-22
[ [ "Wei", "Zhexiong", "" ], [ "Yu", "F. Richard", "" ], [ "Tang", "Helen", "" ], [ "Liang", "Chengchao", "" ], [ "Yan", "Qiao", "" ] ]
Vehicular Ad hoc NETworks (VANETs), as a basic infrastructure, can facilitate applications and services of connected vehicles (CVs). Cognitive radio (CR) technology is an effective supplement and enhancement for VANETs, as it can mitigate the scarcity of spectrum resources in VANETs. Although CR-VANETs can utilize the unused licensed spectrum effectively, the distributed nature of CR-VANETs may open the door to various attacks, such as the spectrum sensing data falsification attack. In this paper, we propose a joint RSU- and vehicle-based lightweight cloud for CR-VANETs. Based on this cloud computing model, we propose a new service named Spectrum Sensing as a Service (SSaaS), which can perform cooperative spectrum sensing in CR-VANETs with cloud computing assistance to secure the spectrum sensing procedure. As a result, a reliable service can be obtained in CR-VANETs. Simulation results show that cloud computing in CR-VANETs can effectively reduce latency and improve the security of CR-VANETs.
2109.09809
Adam White Dr
Adam White, Artur d'Avila Garcez
Counterfactual Instances Explain Little
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach has been to seek to provide \emph{counterfactual instance explanations}. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper will draw on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that support the counterfactual instances. We will show that counterfactual instances by themselves explain little. We will further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
[ { "created": "Mon, 20 Sep 2021 19:40:25 GMT", "version": "v1" } ]
2021-09-22
[ [ "White", "Adam", "" ], [ "Garcez", "Artur d'Avila", "" ] ]
In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach has been to seek to provide \emph{counterfactual instance explanations}. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper will draw on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that support the counterfactual instances. We will show that counterfactual instances by themselves explain little. We will further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
1608.05327
Igor Konnov
Igor Konnov, Marijana Lazic, Helmut Veith and Josef Widder
A Short Counterexample Property for Safety and Liveness Verification of Fault-tolerant Distributed Algorithms
16 pages, 11 pages appendix
null
10.1145/3009837.3009860
null
cs.LO cs.DC
http://creativecommons.org/licenses/by/4.0/
Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques.
[ { "created": "Thu, 18 Aug 2016 16:43:03 GMT", "version": "v1" }, { "created": "Wed, 9 Nov 2016 10:37:16 GMT", "version": "v2" } ]
2016-11-10
[ [ "Konnov", "Igor", "" ], [ "Lazic", "Marijana", "" ], [ "Veith", "Helmut", "" ], [ "Widder", "Josef", "" ] ]
Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques.
2405.08355
Mengsong Wu
Mengsong Wu, Tong Zhu, Han Han, Chuanyuan Tan, Xiang Zhang, Wenliang Chen
Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmark
14 pages, 10 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper presents a new tool learning dataset, Seal-Tools, which contains self-instruct API-like tools. Seal-Tools not only offers a large number of tools, but also includes instances that demonstrate the practical application of tools. Seeking to generate data on a large scale while ensuring reliability, we propose a self-instruct method to generate tools and instances, allowing precise control over the process. Moreover, Seal-Tools contains hard instances that call multiple tools to complete the job, some of which are nested tool calls. For precise and comprehensive evaluation, we use strict format control and design three metrics from different dimensions. Therefore, Seal-Tools can serve as a new benchmark to evaluate the tool-calling ability of LLMs. Finally, we evaluate several prevalent LLMs and our finetuned model on Seal-Tools. The results show that current systems are far from perfect. The code, data and experiment results are available at https://github.com/fairyshine/Seal-Tools .
[ { "created": "Tue, 14 May 2024 06:50:19 GMT", "version": "v1" } ]
2024-05-15
[ [ "Wu", "Mengsong", "" ], [ "Zhu", "Tong", "" ], [ "Han", "Han", "" ], [ "Tan", "Chuanyuan", "" ], [ "Zhang", "Xiang", "" ], [ "Chen", "Wenliang", "" ] ]
This paper presents a new tool learning dataset, Seal-Tools, which contains self-instruct API-like tools. Seal-Tools not only offers a large number of tools, but also includes instances that demonstrate the practical application of tools. Seeking to generate data on a large scale while ensuring reliability, we propose a self-instruct method to generate tools and instances, allowing precise control over the process. Moreover, Seal-Tools contains hard instances that call multiple tools to complete the job, some of which are nested tool calls. For precise and comprehensive evaluation, we use strict format control and design three metrics from different dimensions. Therefore, Seal-Tools can serve as a new benchmark to evaluate the tool-calling ability of LLMs. Finally, we evaluate several prevalent LLMs and our finetuned model on Seal-Tools. The results show that current systems are far from perfect. The code, data and experiment results are available at https://github.com/fairyshine/Seal-Tools .
2310.08837
Gang Fan
Gang Fan, Xiaoheng Xie, Xunjin Zheng, Yinan Liang, Peng Di
Static Code Analysis in the AI Era: An In-depth Exploration of the Concept, Function, and Potential of Intelligent Code Analysis Agents
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The escalating complexity of software systems and accelerating development cycles pose a significant challenge in managing code errors and implementing business logic. Traditional techniques, while a cornerstone of software quality assurance, exhibit limitations in handling intricate business logic and extensive codebases. To address these challenges, we introduce the Intelligent Code Analysis Agent (ICAA), a novel concept combining AI models, engineering process designs, and traditional non-AI components. The ICAA employs the capabilities of large language models (LLMs) such as GPT-3 or GPT-4 to automatically detect and diagnose code errors and business logic inconsistencies. In our exploration of this concept, we observed a substantial improvement in bug detection accuracy, reducing the false-positive rate to 66\% from the baseline's 85\%, and a promising recall rate of 60.8\%. However, the token consumption cost associated with LLMs, particularly the average cost for analyzing each line of code, remains a significant consideration for widespread adoption. Despite this challenge, our findings suggest that the ICAA holds considerable potential to revolutionize software quality assurance, significantly enhancing the efficiency and accuracy of bug detection in the software development process. We hope this pioneering work will inspire further research and innovation in this field, focusing on refining the ICAA concept and exploring ways to mitigate the associated costs.
[ { "created": "Fri, 13 Oct 2023 03:16:58 GMT", "version": "v1" } ]
2023-10-16
[ [ "Fan", "Gang", "" ], [ "Xie", "Xiaoheng", "" ], [ "Zheng", "Xunjin", "" ], [ "Liang", "Yinan", "" ], [ "Di", "Peng", "" ] ]
The escalating complexity of software systems and accelerating development cycles pose a significant challenge in managing code errors and implementing business logic. Traditional techniques, while a cornerstone of software quality assurance, exhibit limitations in handling intricate business logic and extensive codebases. To address these challenges, we introduce the Intelligent Code Analysis Agent (ICAA), a novel concept combining AI models, engineering process designs, and traditional non-AI components. The ICAA employs the capabilities of large language models (LLMs) such as GPT-3 or GPT-4 to automatically detect and diagnose code errors and business logic inconsistencies. In our exploration of this concept, we observed a substantial improvement in bug detection accuracy, reducing the false-positive rate to 66\% from the baseline's 85\%, and a promising recall rate of 60.8\%. However, the token consumption cost associated with LLMs, particularly the average cost for analyzing each line of code, remains a significant consideration for widespread adoption. Despite this challenge, our findings suggest that the ICAA holds considerable potential to revolutionize software quality assurance, significantly enhancing the efficiency and accuracy of bug detection in the software development process. We hope this pioneering work will inspire further research and innovation in this field, focusing on refining the ICAA concept and exploring ways to mitigate the associated costs.
2002.11375
Adnan Shahid
Ingrid Moerman, Djamal Zeghlache, Adnan Shahid, Joao F. Santos, Luiz A. DaSilva, Klaus David, John Farserotu, Ad de Ridder, Wei Liu, and Jeroen Hoebeke
Mandate-driven Networking Eco-system: A Paradigm Shift in End-to-End Communications
internal organisation policy
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The wireless industry is driven by key stakeholders that follow a holistic "one-system-fits-all" approach, which pushes the network functionality needed to meet stringent E2E communication requirements towards the core and cloud infrastructures. This trend limits smaller and new players from bringing in new and novel solutions. To meet these E2E requirements, tenants and end-users need to be active players in bringing their needs and innovations. Driving E2E communication, not only in terms of QoS but also overall carbon footprint and spectrum efficiency, from one specific community may lead to undesirable simplifications, and a higher level of abstraction of other network segments may lead to sub-optimal operations. Based on this, the paper presents a paradigm shift that will enlarge the role of wireless innovation at academia, SMEs, industries and start-ups while taking into account decentralized mandate-driven intelligence in E2E communications.
[ { "created": "Wed, 26 Feb 2020 09:28:48 GMT", "version": "v1" }, { "created": "Tue, 3 Mar 2020 15:16:13 GMT", "version": "v2" }, { "created": "Fri, 6 Mar 2020 10:27:24 GMT", "version": "v3" } ]
2020-03-09
[ [ "Moerman", "Ingrid", "" ], [ "Zeghlache", "Djamal", "" ], [ "Shahid", "Adnan", "" ], [ "Santos", "Joao F.", "" ], [ "DaSilva", "Luiz A.", "" ], [ "David", "Klaus", "" ], [ "Farserotu", "John", "" ], [ "de Ridder", "Ad", "" ], [ "Liu", "Wei", "" ], [ "Hoebeke", "Jeroen", "" ] ]
The wireless industry is driven by key stakeholders that follow a holistic "one-system-fits-all" approach, which pushes the network functionality needed to meet stringent E2E communication requirements towards the core and cloud infrastructures. This trend limits smaller and new players from bringing in new and novel solutions. To meet these E2E requirements, tenants and end-users need to be active players in bringing their needs and innovations. Driving E2E communication, not only in terms of QoS but also overall carbon footprint and spectrum efficiency, from one specific community may lead to undesirable simplifications, and a higher level of abstraction of other network segments may lead to sub-optimal operations. Based on this, the paper presents a paradigm shift that will enlarge the role of wireless innovation at academia, SMEs, industries and start-ups while taking into account decentralized mandate-driven intelligence in E2E communications.
2110.02771
Meysam Goodarzi
Meysam Goodarzi, Vladica Sark, Nebojsa Maletic, Jes\'us Guti\'errez, Giuseppe Caire, and Eckhard Grass
DNN-assisted Particle-based Bayesian Joint Synchronization and Localization
null
null
null
null
cs.IT cs.AI cs.LG eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In this work, we propose a Deep neural network-assisted Particle Filter-based (DePF) approach to address the Mobile User (MU) joint synchronization and localization (sync\&loc) problem in ultra-dense networks. In particular, DePF deploys an asymmetric time-stamp exchange mechanism between the MUs and the Access Points (APs), which, traditionally, provides information about the MUs' clock offset and skew. However, information about the distance between an AP and an MU is also intrinsic to the propagation delay experienced by the exchanged time-stamps. In addition, to estimate the angle of arrival of the received synchronization packet, DePF draws on the multiple signal classification algorithm, which is fed by the Channel Impulse Response (CIR) experienced by the sync packets. The CIR is also leveraged to determine the link condition, i.e. Line-of-Sight (LoS) or Non-LoS. Finally, to perform joint sync\&loc, DePF capitalizes on particle Gaussian mixtures, which allow for a hybrid particle-based and parametric Bayesian Recursive Filtering (BRF) fusion of the aforementioned pieces of information and thus jointly estimate the position and clock parameters of the MUs. The simulation results verify the superiority of the proposed algorithm over state-of-the-art schemes, especially Extended Kalman filter- and linearized BRF-based joint sync\&loc. In particular, drawing only on the synchronization time-stamp exchange and CIRs, for 90$\%$ of the cases, the absolute position and clock offset estimation errors remain below 1 meter and 2 nanoseconds, respectively.
[ { "created": "Wed, 29 Sep 2021 08:58:31 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2022 09:17:00 GMT", "version": "v2" } ]
2022-06-03
[ [ "Goodarzi", "Meysam", "" ], [ "Sark", "Vladica", "" ], [ "Maletic", "Nebojsa", "" ], [ "Gutiérrez", "Jesús", "" ], [ "Caire", "Giuseppe", "" ], [ "Grass", "Eckhard", "" ] ]
In this work, we propose a Deep neural network-assisted Particle Filter-based (DePF) approach to address the Mobile User (MU) joint synchronization and localization (sync\&loc) problem in ultra-dense networks. In particular, DePF deploys an asymmetric time-stamp exchange mechanism between the MUs and the Access Points (APs), which, traditionally, provides information about the MUs' clock offset and skew. However, information about the distance between an AP and an MU is also intrinsic to the propagation delay experienced by the exchanged time-stamps. In addition, to estimate the angle of arrival of the received synchronization packet, DePF draws on the multiple signal classification algorithm, which is fed by the Channel Impulse Response (CIR) experienced by the sync packets. The CIR is also leveraged to determine the link condition, i.e. Line-of-Sight (LoS) or Non-LoS. Finally, to perform joint sync\&loc, DePF capitalizes on particle Gaussian mixtures, which allow for a hybrid particle-based and parametric Bayesian Recursive Filtering (BRF) fusion of the aforementioned pieces of information and thus jointly estimate the position and clock parameters of the MUs. The simulation results verify the superiority of the proposed algorithm over state-of-the-art schemes, especially Extended Kalman filter- and linearized BRF-based joint sync\&loc. In particular, drawing only on the synchronization time-stamp exchange and CIRs, for 90$\%$ of the cases, the absolute position and clock offset estimation errors remain below 1 meter and 2 nanoseconds, respectively.
1708.05174
Sid Chi-Kin Chau
Muhammad Aftab, Sid Chi-Kin Chau, and Majid Khonji
Enabling Self-aware Smart Buildings by Augmented Reality
This paper appears in ACM International Conference on Future Energy Systems (e-Energy), 2018
null
10.1145/3208903.3208943
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations to true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using "augmented reality". The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality, and discusses its applications.
[ { "created": "Thu, 17 Aug 2017 08:56:01 GMT", "version": "v1" }, { "created": "Mon, 21 May 2018 09:38:54 GMT", "version": "v2" } ]
2018-05-22
[ [ "Aftab", "Muhammad", "" ], [ "Chau", "Sid Chi-Kin", "" ], [ "Khonji", "Majid", "" ] ]
Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations to true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of "self-aware" smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using "augmented reality". The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality, and discusses its applications.
2011.13824
Kaidi Xu
Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, Cho-Jui Hsieh
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers
Accepted by ICLR 2021
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically require the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use the backward mode linear relaxation based perturbation analysis (LiRPA) to replace LP during the BaB process, which can be efficiently implemented on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, LiRPA when applied naively can produce much weaker bounds and even cannot check certain conflicts of sub-domains during splitting, making the entire procedure incomplete after BaB. To address these challenges, we apply a fast gradient-based bound tightening procedure combined with batch splits and a design that minimizes usage of the LP bounding procedure, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and significantly outperform LP-based approaches. On a single GPU, we demonstrate an order of magnitude speedup compared to existing LP-based approaches.
[ { "created": "Fri, 27 Nov 2020 16:42:12 GMT", "version": "v1" }, { "created": "Tue, 16 Mar 2021 16:35:00 GMT", "version": "v2" } ]
2021-03-17
[ [ "Xu", "Kaidi", "" ], [ "Zhang", "Huan", "" ], [ "Wang", "Shiqi", "" ], [ "Wang", "Yihan", "" ], [ "Jana", "Suman", "" ], [ "Lin", "Xue", "" ], [ "Hsieh", "Cho-Jui", "" ] ]
Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically require the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use the backward mode linear relaxation based perturbation analysis (LiRPA) to replace LP during the BaB process, which can be efficiently implemented on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, LiRPA when applied naively can produce much weaker bounds and even cannot check certain conflicts of sub-domains during splitting, making the entire procedure incomplete after BaB. To address these challenges, we apply a fast gradient-based bound tightening procedure combined with batch splits and a design that minimizes usage of the LP bounding procedure, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and significantly outperform LP-based approaches. On a single GPU, we demonstrate an order of magnitude speedup compared to existing LP-based approaches.
2001.02527
Sharu Theresa Jose
Sharu Theresa Jose and Osvaldo Simeone
Address-Event Variable-Length Compression for Time-Encoded Data
submitted
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-encoded signals, such as social network update logs and spiking traces in neuromorphic processors, are defined by multiple traces carrying information in the timing of events, or spikes. When time-encoded data is processed at a site remote from the location where it is produced, the occurrence of events needs to be encoded and transmitted in a timely fashion. The standard Address-Event Representation (AER) protocol for neuromorphic chips encodes the indices of the "spiking" traces in the payload of a packet produced at the same time the events are recorded, hence implicitly encoding the events' timing in the timing of the packet. This paper investigates the potential bandwidth savings that can be obtained by carrying out variable-length compression of packets' payloads. Compression leverages both intra-trace and inter-trace correlations over time that are typical in applications such as social networks or neuromorphic computing. The approach is based on discrete-time Hawkes processes and entropy coding with conditional codebooks. Results from an experiment based on a real-world retweet dataset are also provided.
[ { "created": "Wed, 8 Jan 2020 13:55:15 GMT", "version": "v1" }, { "created": "Thu, 9 Jan 2020 09:26:54 GMT", "version": "v2" }, { "created": "Fri, 24 Apr 2020 07:35:20 GMT", "version": "v3" } ]
2020-04-27
[ [ "Jose", "Sharu Theresa", "" ], [ "Simeone", "Osvaldo", "" ] ]
Time-encoded signals, such as social network update logs and spiking traces in neuromorphic processors, are defined by multiple traces carrying information in the timing of events, or spikes. When time-encoded data is processed at a site remote from the location where it is produced, the occurrence of events needs to be encoded and transmitted in a timely fashion. The standard Address-Event Representation (AER) protocol for neuromorphic chips encodes the indices of the "spiking" traces in the payload of a packet produced at the same time the events are recorded, hence implicitly encoding the events' timing in the timing of the packet. This paper investigates the potential bandwidth savings that can be obtained by carrying out variable-length compression of packets' payloads. Compression leverages both intra-trace and inter-trace correlations over time that are typical in applications such as social networks or neuromorphic computing. The approach is based on discrete-time Hawkes processes and entropy coding with conditional codebooks. Results from an experiment based on a real-world retweet dataset are also provided.
2005.10353
Yijun Zhou
Yijun Zhou, James Gregson
WHENet: Real-time Fine-Grained Estimation for Wide Range Head Pose
Accepted at BMVC 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an end-to-end head-pose estimation network designed to predict Euler angles through the full range of head yaws from a single RGB image. Existing methods perform well for frontal views, but few target head pose from all viewpoints, which has applications in autonomous driving and retail. Our network builds on multi-loss approaches with changes to loss functions and training strategies adapted to wide-range estimation. Additionally, we extract ground truth labelings of anterior views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full range of head yaws (hence wide) yet also meets or beats state-of-the-art methods for frontal head pose estimation. Our network is compact and efficient for mobile devices and applications.
[ { "created": "Wed, 20 May 2020 20:53:01 GMT", "version": "v1" }, { "created": "Tue, 22 Sep 2020 22:54:45 GMT", "version": "v2" } ]
2020-09-24
[ [ "Zhou", "Yijun", "" ], [ "Gregson", "James", "" ] ]
We present an end-to-end head-pose estimation network designed to predict Euler angles through the full range of head yaws from a single RGB image. Existing methods perform well for frontal views, but few target head pose from all viewpoints, which has applications in autonomous driving and retail. Our network builds on multi-loss approaches with changes to loss functions and training strategies adapted to wide-range estimation. Additionally, we extract ground truth labelings of anterior views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full range of head yaws (hence wide) yet also meets or beats state-of-the-art methods for frontal head pose estimation. Our network is compact and efficient for mobile devices and applications.
2109.02227
Yiwu Zhong
Yiwu Zhong, Jing Shi, Jianwei Yang, Chenliang Xu, Yin Li
Learning to Generate Scene Graph from Natural Language Supervision
Accepted to ICCV 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning from image-text data has demonstrated recent success for many recognition tasks, yet is currently limited to visual features or individual visual concepts such as objects. In this paper, we propose one of the first methods that learn from image-sentence pairs to extract a graphical representation of localized objects and their relationships within an image, known as a scene graph. To bridge the gap between images and texts, we leverage an off-the-shelf object detector to identify and localize object instances, match labels of detected regions to concepts parsed from captions, and thus create "pseudo" labels for learning scene graphs. Further, we design a Transformer-based model to predict these "pseudo" labels via a masked token prediction task. Learning from only image-sentence pairs, our model achieves a 30% relative gain over a recent method trained with human-annotated unlocalized scene graphs. Our model also shows strong results for weakly and fully supervised scene graph generation. In addition, we explore an open-vocabulary setting for detecting scene graphs, and present the first result for open-set scene graph generation. Our code is available at https://github.com/YiwuZhong/SGG_from_NLS.
[ { "created": "Mon, 6 Sep 2021 03:38:52 GMT", "version": "v1" } ]
2021-09-07
[ [ "Zhong", "Yiwu", "" ], [ "Shi", "Jing", "" ], [ "Yang", "Jianwei", "" ], [ "Xu", "Chenliang", "" ], [ "Li", "Yin", "" ] ]
Learning from image-text data has demonstrated recent success for many recognition tasks, yet is currently limited to visual features or individual visual concepts such as objects. In this paper, we propose one of the first methods that learn from image-sentence pairs to extract a graphical representation of localized objects and their relationships within an image, known as a scene graph. To bridge the gap between images and texts, we leverage an off-the-shelf object detector to identify and localize object instances, match labels of detected regions to concepts parsed from captions, and thus create "pseudo" labels for learning scene graphs. Further, we design a Transformer-based model to predict these "pseudo" labels via a masked token prediction task. Learning from only image-sentence pairs, our model achieves a 30% relative gain over a recent method trained with human-annotated unlocalized scene graphs. Our model also shows strong results for weakly and fully supervised scene graph generation. In addition, we explore an open-vocabulary setting for detecting scene graphs, and present the first result for open-set scene graph generation. Our code is available at https://github.com/YiwuZhong/SGG_from_NLS.
2203.16093
Ying Gao
Ying Gao, Qingqing Wu, Guangchi Zhang, Wen Chen, Derrick Wing Kwan Ng, Marco Di Renzo
Beamforming Optimization for Active Intelligent Reflecting Surface-Aided SWIPT
32 pages, 10 figures, submitted to IEEE journal for possible publication
IEEE Transactions on Wireless Communications, 2022
10.1109/TWC.2022.3193845
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study an active IRS-aided simultaneous wireless information and power transfer (SWIPT) system. Specifically, an active IRS is deployed to assist a multi-antenna access point (AP) to convey information and energy simultaneously to multiple single-antenna information users (IUs) and energy users (EUs). Two joint transmit and reflect beamforming optimization problems are investigated with different practical objectives. The first problem maximizes the weighted sum-power harvested by the EUs subject to individual signal-to-interference-plus-noise ratio (SINR) constraints at the IUs, while the second problem maximizes the weighted sum-rate of the IUs subject to individual energy harvesting (EH) constraints at the EUs. The optimization problems are non-convex and difficult to solve optimally. To tackle these two problems, we first rigorously prove that dedicated energy beams are not required for their corresponding semidefinite relaxation (SDR) reformulations and the SDR is tight for the first problem, thus greatly simplifying the AP precoding design. Then, by capitalizing on the techniques of alternating optimization (AO), SDR, and successive convex approximation (SCA), computationally efficient algorithms are developed to obtain suboptimal solutions of the resulting optimization problems. Simulation results demonstrate that, given the same total system power budget, significant performance gains in terms of operating range of wireless power transfer (WPT), total harvested energy, as well as achievable rate can be obtained by our proposed designs over benchmark schemes (especially the one adopting a passive IRS). Moreover, it is advisable to deploy an active IRS in the proximity of the users for the effective operation of WPT/SWIPT.
[ { "created": "Wed, 30 Mar 2022 06:46:13 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2022 07:37:02 GMT", "version": "v2" } ]
2022-08-05
[ [ "Gao", "Ying", "" ], [ "Wu", "Qingqing", "" ], [ "Zhang", "Guangchi", "" ], [ "Chen", "Wen", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Di Renzo", "Marco", "" ] ]
In this paper, we study an active IRS-aided simultaneous wireless information and power transfer (SWIPT) system. Specifically, an active IRS is deployed to assist a multi-antenna access point (AP) to convey information and energy simultaneously to multiple single-antenna information users (IUs) and energy users (EUs). Two joint transmit and reflect beamforming optimization problems are investigated with different practical objectives. The first problem maximizes the weighted sum-power harvested by the EUs subject to individual signal-to-interference-plus-noise ratio (SINR) constraints at the IUs, while the second problem maximizes the weighted sum-rate of the IUs subject to individual energy harvesting (EH) constraints at the EUs. The optimization problems are non-convex and difficult to solve optimally. To tackle these two problems, we first rigorously prove that dedicated energy beams are not required for their corresponding semidefinite relaxation (SDR) reformulations and the SDR is tight for the first problem, thus greatly simplifying the AP precoding design. Then, by capitalizing on the techniques of alternating optimization (AO), SDR, and successive convex approximation (SCA), computationally efficient algorithms are developed to obtain suboptimal solutions of the resulting optimization problems. Simulation results demonstrate that, given the same total system power budget, significant performance gains in terms of operating range of wireless power transfer (WPT), total harvested energy, as well as achievable rate can be obtained by our proposed designs over benchmark schemes (especially the one adopting a passive IRS). Moreover, it is advisable to deploy an active IRS in the proximity of the users for the effective operation of WPT/SWIPT.
2004.00583
Alan JiaXiang Guo
Alan J.X. Guo and Fei Zhu
Improving Deep Hyperspectral Image Classification Performance with Spectral Unmixing
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in neural networks have brought great progress to hyperspectral image (HSI) classification. However, the overfitting effect, which is mainly caused by complicated model structures and small training sets, remains a major concern. Reducing the complexity of the neural networks could prevent overfitting to some extent, but it also reduces the networks' ability to express more abstract features. Enlarging the training set is also difficult, due to the high expense of acquisition and manual labeling. In this paper, we propose an abundance-based multi-HSI classification method. Firstly, we convert every HSI from the spectral domain to the abundance domain by a dataset-specific autoencoder. Secondly, the abundance representations from multiple HSIs are collected to form an enlarged dataset. Lastly, we train an abundance-based classifier and employ the classifier to predict over all the involved HSI datasets. Different from the spectra, which are usually highly mixed, the abundance features are more representative in reduced dimension with less noise. This allows the proposed method to employ simple classifiers and enlarged training data, and to suffer fewer overfitting issues. The effectiveness of the proposed method is verified by the ablation study and the comparative experiments.
[ { "created": "Wed, 1 Apr 2020 17:14:05 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2020 02:52:54 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 16:09:51 GMT", "version": "v3" }, { "created": "Mon, 21 Dec 2020 05:10:08 GMT", "version": "v4" } ]
2020-12-22
[ [ "Guo", "Alan J. X.", "" ], [ "Zhu", "Fei", "" ] ]
Recent advances in neural networks have brought great progress to hyperspectral image (HSI) classification. However, the overfitting effect, which is mainly caused by complicated model structures and small training sets, remains a major concern. Reducing the complexity of the neural networks could prevent overfitting to some extent, but it also reduces the networks' ability to express more abstract features. Enlarging the training set is also difficult, due to the high expense of acquisition and manual labeling. In this paper, we propose an abundance-based multi-HSI classification method. Firstly, we convert every HSI from the spectral domain to the abundance domain by a dataset-specific autoencoder. Secondly, the abundance representations from multiple HSIs are collected to form an enlarged dataset. Lastly, we train an abundance-based classifier and employ the classifier to predict over all the involved HSI datasets. Different from the spectra, which are usually highly mixed, the abundance features are more representative in reduced dimension with less noise. This allows the proposed method to employ simple classifiers and enlarged training data, and to suffer fewer overfitting issues. The effectiveness of the proposed method is verified by the ablation study and the comparative experiments.
2101.00588
Xin Jin
Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen
Style Normalization and Restitution for Domain Generalization and Adaptation
Published in IEEE Transactions on Multimedia. This paper extended SNR (CVPR'20) for domain generalization and adaptation to various computer vision tasks, e.g., image classification, object detection, semantic segmentation
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
For many practical computer vision applications, the learned models usually have high performance on the datasets used for training but suffer from significant performance degradation when deployed in new environments, where there are usually style differences between the training images and the testing images. An effective domain generalizable model is expected to be able to learn feature representations that are both generalizable and discriminative. In this paper, we design a novel Style Normalization and Restitution module (SNR) to simultaneously ensure both high generalization and discrimination capability of the networks. In the SNR module, particularly, we filter out the style variations (e.g., illumination, color contrast) by performing Instance Normalization (IN) to obtain style normalized features, where the discrepancy among different samples and domains is reduced. However, such a process is task-ignorant and inevitably removes some task-relevant discriminative information, which could hurt the performance. To remedy this, we propose to distill task-relevant discriminative features from the residual (i.e., the difference between the original feature and the style normalized feature) and add them back to the network to ensure high discrimination. Moreover, for better disentanglement, we enforce a dual causality loss constraint in the restitution step to encourage the better separation of task-relevant and task-irrelevant features. We validate the effectiveness of our SNR on different computer vision tasks, including classification, semantic segmentation, and object detection. Experiments demonstrate that our SNR module is capable of improving the performance of networks for domain generalization (DG) and unsupervised domain adaptation (UDA) on many tasks. Code is available at https://github.com/microsoft/SNR.
[ { "created": "Sun, 3 Jan 2021 09:01:39 GMT", "version": "v1" }, { "created": "Wed, 2 Jun 2021 11:33:36 GMT", "version": "v2" }, { "created": "Fri, 11 Mar 2022 03:15:04 GMT", "version": "v3" } ]
2022-03-14
[ [ "Jin", "Xin", "" ], [ "Lan", "Cuiling", "" ], [ "Zeng", "Wenjun", "" ], [ "Chen", "Zhibo", "" ] ]
For many practical computer vision applications, the learned models usually have high performance on the datasets used for training but suffer from significant performance degradation when deployed in new environments, where there are usually style differences between the training images and the testing images. An effective domain generalizable model is expected to be able to learn feature representations that are both generalizable and discriminative. In this paper, we design a novel Style Normalization and Restitution module (SNR) to simultaneously ensure both high generalization and discrimination capability of the networks. In the SNR module, particularly, we filter out the style variations (e.g., illumination, color contrast) by performing Instance Normalization (IN) to obtain style normalized features, where the discrepancy among different samples and domains is reduced. However, such a process is task-ignorant and inevitably removes some task-relevant discriminative information, which could hurt the performance. To remedy this, we propose to distill task-relevant discriminative features from the residual (i.e., the difference between the original feature and the style normalized feature) and add them back to the network to ensure high discrimination. Moreover, for better disentanglement, we enforce a dual causality loss constraint in the restitution step to encourage the better separation of task-relevant and task-irrelevant features. We validate the effectiveness of our SNR on different computer vision tasks, including classification, semantic segmentation, and object detection. Experiments demonstrate that our SNR module is capable of improving the performance of networks for domain generalization (DG) and unsupervised domain adaptation (UDA) on many tasks. Code is available at https://github.com/microsoft/SNR.
2101.03392
Yongfeng Zhang
Hanxiong Chen, Xu Chen, Shaoyun Shi, Yongfeng Zhang
Generate Natural Language Explanations for Recommendation
Accepted to the SIGIR 2019 Workshop on ExplainAble Recommendation and Search, Paris, France, July 2019
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Providing personalized explanations for recommendations can help users to understand the underlying insight of the recommendation results, which is helpful to the effectiveness, transparency, persuasiveness and trustworthiness of recommender systems. Current explainable recommendation models mostly generate textual explanations based on pre-defined sentence templates. However, the expressive power of template-based explanation sentences is limited to the pre-defined expressions, and manually defining the expressions requires significant human effort. Motivated by this problem, we propose to generate free-text natural language explanations for personalized recommendation. In particular, we propose a hierarchical sequence-to-sequence model (HSS) for personalized explanation generation. Different from conventional sentence generation in NLP research, a great challenge of explanation generation in e-commerce recommendation is that not all sentences in user reviews serve an explanation purpose. To solve the problem, we further propose an auto-denoising mechanism based on topical item feature words for sentence generation. Experiments on various e-commerce product domains show that our approach can improve not only the recommendation accuracy, but also the explanation quality in terms of the offline measures and feature words coverage. This research is one of the initial steps to grant intelligent agents the ability to explain themselves based on natural language sentences.
[ { "created": "Sat, 9 Jan 2021 17:00:41 GMT", "version": "v1" } ]
2021-01-12
[ [ "Chen", "Hanxiong", "" ], [ "Chen", "Xu", "" ], [ "Shi", "Shaoyun", "" ], [ "Zhang", "Yongfeng", "" ] ]
Providing personalized explanations for recommendations can help users to understand the underlying insight of the recommendation results, which is helpful to the effectiveness, transparency, persuasiveness and trustworthiness of recommender systems. Current explainable recommendation models mostly generate textual explanations based on pre-defined sentence templates. However, the expressive power of template-based explanation sentences is limited to the pre-defined expressions, and manually defining the expressions requires significant human effort. Motivated by this problem, we propose to generate free-text natural language explanations for personalized recommendation. In particular, we propose a hierarchical sequence-to-sequence model (HSS) for personalized explanation generation. Different from conventional sentence generation in NLP research, a great challenge of explanation generation in e-commerce recommendation is that not all sentences in user reviews serve an explanation purpose. To solve the problem, we further propose an auto-denoising mechanism based on topical item feature words for sentence generation. Experiments on various e-commerce product domains show that our approach can improve not only the recommendation accuracy, but also the explanation quality in terms of the offline measures and feature words coverage. This research is one of the initial steps to grant intelligent agents the ability to explain themselves based on natural language sentences.
2310.17378
Daniel Racz
D\'aniel R\'acz, Mih\'aly Petreczky, Andr\'as Csert\'an, B\'alint Dar\'oczy
Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle
17 pages, 5 figures, OPT2023: 15th Annual Workshop on Optimization for Machine Learning at the 37th NeurIPS 2023, New Orleans, LA, USA
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advances in deep learning have given us some very promising results on the generalization ability of deep neural networks; however, the literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data. In this paper we propose a PAC-type bound on the generalization error of feedforward ReLU networks via estimating the Rademacher complexity of the set of networks reachable from an initial parameter vector via gradient descent. The key idea is to bound the sensitivity of the network's gradient to perturbation of the input data along the optimization trajectory. The obtained bound does not explicitly depend on the depth of the network. Our results are experimentally verified on the MNIST and CIFAR-10 datasets.
[ { "created": "Thu, 26 Oct 2023 13:14:13 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2023 15:57:40 GMT", "version": "v2" } ]
2023-12-05
[ [ "Rácz", "Dániel", "" ], [ "Petreczky", "Mihály", "" ], [ "Csertán", "András", "" ], [ "Daróczy", "Bálint", "" ] ]
Recent advances in deep learning have given us some very promising results on the generalization ability of deep neural networks; however, the literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data. In this paper we propose a PAC-type bound on the generalization error of feedforward ReLU networks via estimating the Rademacher complexity of the set of networks reachable from an initial parameter vector via gradient descent. The key idea is to bound the sensitivity of the network's gradient to perturbation of the input data along the optimization trajectory. The obtained bound does not explicitly depend on the depth of the network. Our results are experimentally verified on the MNIST and CIFAR-10 datasets.
2305.04594
Huu Quoc Dong Tran
Hoang-Anh Phan, Phuc Vinh Nguyen, Thu Hang Thi Khuat, Hieu Dang Van, Dong Huu Quoc Tran, Bao Lam Dang, Tung Thanh Bui, Van Nguyen Thi Thanh and Trinh Chu Duc
A sensor fusion approach for improving implementation speed and accuracy of RTAB-Map algorithm based indoor 3D mapping
Accepted to 20th International Joint Conference on Computer Science and Software Engineering (JCSSE 2023). 5 pages
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In recent years, 3D mapping for indoor environments has undergone considerable research and improvement because of its effective applications in various fields, including robotics, autonomous navigation, and virtual reality. Building an accurate 3D map for an indoor environment is challenging due to the complex nature of the indoor space, real-time processing constraints, and positioning errors of the robot system. This study proposes a method to improve the accuracy, speed, and quality of 3D indoor mapping by fusing data from the Inertial Measurement Unit (IMU) of the Intel Realsense D435i camera, the Ultrasonic-based Indoor Positioning System (IPS), and the encoder of the robot's wheel using the extended Kalman filter (EKF) algorithm. The merged data is processed using the Real-Time Appearance-Based Mapping (RTAB-Map) algorithm, with the processing frequency updated in sync with the position frequency of the IPS device. The results suggest that fusing IMU and IPS data significantly improves the accuracy, mapping time, and quality of 3D maps. Our study highlights the proposed method's potential to improve indoor mapping in various fields, indicating that the fusion of multiple data sources can be a valuable tool in creating high-quality 3D indoor maps.
[ { "created": "Mon, 8 May 2023 10:08:55 GMT", "version": "v1" } ]
2023-05-09
[ [ "Phan", "Hoang-Anh", "" ], [ "Nguyen", "Phuc Vinh", "" ], [ "Khuat", "Thu Hang Thi", "" ], [ "Van", "Hieu Dang", "" ], [ "Tran", "Dong Huu Quoc", "" ], [ "Dang", "Bao Lam", "" ], [ "Bui", "Tung Thanh", "" ], [ "Thanh", "Van Nguyen Thi", "" ], [ "Duc", "Trinh Chu", "" ] ]
In recent years, 3D mapping for indoor environments has undergone considerable research and improvement because of its effective applications in various fields, including robotics, autonomous navigation, and virtual reality. Building an accurate 3D map for an indoor environment is challenging due to the complex nature of the indoor space, real-time processing constraints, and positioning errors of the robot system. This study proposes a method to improve the accuracy, speed, and quality of 3D indoor mapping by fusing data from the Inertial Measurement Unit (IMU) of the Intel Realsense D435i camera, the Ultrasonic-based Indoor Positioning System (IPS), and the encoder of the robot's wheel using the extended Kalman filter (EKF) algorithm. The merged data is processed using the Real-Time Appearance-Based Mapping (RTAB-Map) algorithm, with the processing frequency updated in sync with the position frequency of the IPS device. The results suggest that fusing IMU and IPS data significantly improves the accuracy, mapping time, and quality of 3D maps. Our study highlights the proposed method's potential to improve indoor mapping in various fields, indicating that the fusion of multiple data sources can be a valuable tool in creating high-quality 3D indoor maps.
2301.13803
Yao Qiang
Yao Qiang, Chengyin Li, Prashant Khanduri, and Dongxiao Zhu
Fairness-aware Vision Transformer via Debiased Self-Attention
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision Transformer (ViT) has recently gained significant attention in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the attention mechanism. Whereas recent works have explored the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We establish that the existing fairness-aware algorithms designed for CNNs do not perform well on ViT, which highlights the need to develop our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that enforces ViT to eliminate spurious features correlated with the sensitive label for bias mitigation and simultaneously retain real features for target prediction. Notably, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches with an additional attention weights alignment regularizer in the training objective to encourage learning real features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance. Code is available at \href{https://github.com/qiangyao1988/DSA}{https://github.com/qiangyao1988/DSA}.
[ { "created": "Tue, 31 Jan 2023 17:44:59 GMT", "version": "v1" }, { "created": "Tue, 29 Aug 2023 17:38:45 GMT", "version": "v2" }, { "created": "Thu, 11 Jul 2024 02:11:49 GMT", "version": "v3" } ]
2024-07-12
[ [ "Qiang", "Yao", "" ], [ "Li", "Chengyin", "" ], [ "Khanduri", "Prashant", "" ], [ "Zhu", "Dongxiao", "" ] ]
Vision Transformer (ViT) has recently gained significant attention in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the attention mechanism. Whereas recent works have explored the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We establish that existing fairness-aware algorithms designed for CNNs do not perform well on ViT, which highlights the need for our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that forces ViT to eliminate spurious features correlated with the sensitive label for bias mitigation while simultaneously retaining real features for target prediction. Notably, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches, with an additional attention-weight alignment regularizer in the training objective to encourage learning real features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance. Code is available at \href{https://github.com/qiangyao1988/DSA}{https://github.com/qiangyao1988/DSA}.
2406.04058
Christopher Liscio
Christopher Liscio and Daniel G. Brown
Watching Popular Musicians Learn by Ear: A Hypothesis-Generating Study of Human-Recording Interactions in YouTube Videos
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Popular musicians often learn music by ear. It is unclear what role technology plays for those with experience at this task. In search of opportunities for the development of novel human-recording interactions, we analyze 18 YouTube videos depicting real-world examples of by-ear learning, and discuss why, during this preliminary phase of research, online videos are appropriate data. From our observations we generate hypotheses that can inform future work. For example, a musician's scope of learning may influence what technological interactions would help them, they could benefit from tools that accommodate their working memory, and transcription does not appear to play a key role in ear learning. Based on these findings, we pose a number of research questions, and discuss their methodological considerations to guide future study.
[ { "created": "Thu, 6 Jun 2024 13:25:42 GMT", "version": "v1" } ]
2024-06-07
[ [ "Liscio", "Christopher", "" ], [ "Brown", "Daniel G.", "" ] ]
Popular musicians often learn music by ear. It is unclear what role technology plays for those with experience at this task. In search of opportunities for the development of novel human-recording interactions, we analyze 18 YouTube videos depicting real-world examples of by-ear learning, and discuss why, during this preliminary phase of research, online videos are appropriate data. From our observations we generate hypotheses that can inform future work. For example, a musician's scope of learning may influence what technological interactions would help them, they could benefit from tools that accommodate their working memory, and transcription does not appear to play a key role in ear learning. Based on these findings, we pose a number of research questions, and discuss their methodological considerations to guide future study.
1712.04245
Ghassan Samara
Abla Hussein, Ghassan Samara
Coordinator Location Effects in AODV Routing Protocol in ZigBee Mesh Network
7 pages
International Journal of Computer Applications (0975-8887), October 2015 Volume 127 - No.8
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ZigBee mesh network is an important research field in computer networks, and the location of the ZigBee coordinator plays a significant role in network design and routing performance. In this paper, an extensive study of the factors that influence the performance of the AODV routing protocol has been performed through the study of battery voltage decay of nodes, neighboring tables, time delay, and network topology structure. Simulation results reveal that locating the coordinator at approximately equal distances to all nodes is more appropriate for prolonging battery life and improving AODV routing performance.
[ { "created": "Tue, 12 Dec 2017 11:30:05 GMT", "version": "v1" } ]
2017-12-13
[ [ "Hussein", "Abla", "" ], [ "Samara", "Ghassan", "" ] ]
The ZigBee mesh network is an important research field in computer networks, and the location of the ZigBee coordinator plays a significant role in network design and routing performance. In this paper, an extensive study of the factors that influence the performance of the AODV routing protocol has been performed through the study of battery voltage decay of nodes, neighboring tables, time delay, and network topology structure. Simulation results reveal that locating the coordinator at approximately equal distances to all nodes is more appropriate for prolonging battery life and improving AODV routing performance.
2112.01710
Euiwoong Lee
Euiwoong Lee and Pengxiang Wang
Strong Hardness of Approximation for Tree Transversals
null
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
Let $H$ be a fixed graph. The $H$-Transversal problem, given a graph $G$, asks to remove the smallest number of vertices from $G$ so that $G$ does not contain $H$ as a subgraph. While a simple $|V(H)|$-approximation algorithm exists and is believed to be tight for every $2$-vertex-connected $H$, the best hardness of approximation for any tree was $\Omega(\log |V(H)|)$-inapproximability when $H$ is a star. In this paper, we identify a natural parameter $\Delta$ for every tree $T$ and show that $T$-Transversal is NP-hard to approximate within a factor $(\Delta - 1 -\varepsilon)$ for an arbitrarily small constant $\varepsilon > 0$. As a corollary, we prove that there exists a tree $T$ such that $T$-Transversal is NP-hard to approximate within a factor $\Omega(|V(T)|)$, exponentially improving the best known hardness of approximation for tree transversals.
[ { "created": "Fri, 3 Dec 2021 04:42:50 GMT", "version": "v1" } ]
2021-12-06
[ [ "Lee", "Euiwoong", "" ], [ "Wang", "Pengxiang", "" ] ]
Let $H$ be a fixed graph. The $H$-Transversal problem, given a graph $G$, asks to remove the smallest number of vertices from $G$ so that $G$ does not contain $H$ as a subgraph. While a simple $|V(H)|$-approximation algorithm exists and is believed to be tight for every $2$-vertex-connected $H$, the best hardness of approximation for any tree was $\Omega(\log |V(H)|)$-inapproximability when $H$ is a star. In this paper, we identify a natural parameter $\Delta$ for every tree $T$ and show that $T$-Transversal is NP-hard to approximate within a factor $(\Delta - 1 -\varepsilon)$ for an arbitrarily small constant $\varepsilon > 0$. As a corollary, we prove that there exists a tree $T$ such that $T$-Transversal is NP-hard to approximate within a factor $\Omega(|V(T)|)$, exponentially improving the best known hardness of approximation for tree transversals.
1603.09376
Mohamed Khalil
Mohamed Amir, Tamer Khattab, Tarek Elfouly
On the Secure Degrees of Freedom of the K-user MAC and 2-user Interference Channels
arXiv admin note: substantial text overlap with arXiv:1404.5007
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the secure degrees of freedom (SDoF) of the K-user MIMO multiple access channel (MAC) and the two-user MIMO interference channel. An unknown number of eavesdroppers are trying to decode the messages sent by the transmitters. Each eavesdropper is equipped with a number of antennas less than or equal to a known value NE. The legitimate transmitters and receivers are assumed to have global channel knowledge. We present the sum SDoF of the two-user MIMO interference channel. We derive an upper bound on the sum SDoF of the K-user MAC and present an achievable scheme that partially meets the derived upper bound.
[ { "created": "Wed, 30 Mar 2016 20:47:48 GMT", "version": "v1" } ]
2016-04-01
[ [ "Amir", "Mohamed", "" ], [ "Khattab", "Tamer", "" ], [ "Elfouly", "Tarek", "" ] ]
We investigate the secure degrees of freedom (SDoF) of the K-user MIMO multiple access channel (MAC) and the two-user MIMO interference channel. An unknown number of eavesdroppers are trying to decode the messages sent by the transmitters. Each eavesdropper is equipped with a number of antennas less than or equal to a known value NE. The legitimate transmitters and receivers are assumed to have global channel knowledge. We present the sum SDoF of the two-user MIMO interference channel. We derive an upper bound on the sum SDoF of the K-user MAC and present an achievable scheme that partially meets the derived upper bound.
1810.10151
Cheng Li
Hui Sun, Cheng Li, Boqiang Liu, Hairong Zheng, David Dagan Feng, and Shanshan Wang
AUNet: Attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mammography is one of the most commonly applied tools for early breast cancer screening. Automatic segmentation of breast masses in mammograms is essential but challenging due to the low signal-to-noise ratio and the wide variety of mass shapes and sizes. Existing methods deal with these challenges mainly by extracting mass-centered image patches manually or automatically. However, manual patch extraction is time-consuming, and automatic patch extraction introduces errors that cannot be compensated for in the subsequent segmentation step. In this study, we propose a novel attention-guided dense-upsampling network (AUNet) for accurate breast mass segmentation directly in whole mammograms. In AUNet, we employ an asymmetrical encoder-decoder structure and propose an effective upsampling block, the attention-guided dense-upsampling block (AU block). The AU block is designed to have three merits. First, it compensates for the information loss of bilinear upsampling by dense upsampling. Second, it fuses high- and low-level features more effectively. Third, it includes a channel-attention function to highlight information-rich channels. We evaluated the proposed method on two publicly available datasets, CBIS-DDSM and INbreast. Compared to three state-of-the-art fully convolutional networks, AUNet achieved the best performance, with an average Dice similarity coefficient of 81.8% for CBIS-DDSM and 79.1% for INbreast.
[ { "created": "Wed, 24 Oct 2018 01:55:33 GMT", "version": "v1" }, { "created": "Wed, 19 Dec 2018 06:15:05 GMT", "version": "v2" }, { "created": "Tue, 6 Aug 2019 07:53:47 GMT", "version": "v3" } ]
2019-08-07
[ [ "Sun", "Hui", "" ], [ "Li", "Cheng", "" ], [ "Liu", "Boqiang", "" ], [ "Zheng", "Hairong", "" ], [ "Feng", "David Dagan", "" ], [ "Wang", "Shanshan", "" ] ]
Mammography is one of the most commonly applied tools for early breast cancer screening. Automatic segmentation of breast masses in mammograms is essential but challenging due to the low signal-to-noise ratio and the wide variety of mass shapes and sizes. Existing methods deal with these challenges mainly by extracting mass-centered image patches manually or automatically. However, manual patch extraction is time-consuming, and automatic patch extraction introduces errors that cannot be compensated for in the subsequent segmentation step. In this study, we propose a novel attention-guided dense-upsampling network (AUNet) for accurate breast mass segmentation directly in whole mammograms. In AUNet, we employ an asymmetrical encoder-decoder structure and propose an effective upsampling block, the attention-guided dense-upsampling block (AU block). The AU block is designed to have three merits. First, it compensates for the information loss of bilinear upsampling by dense upsampling. Second, it fuses high- and low-level features more effectively. Third, it includes a channel-attention function to highlight information-rich channels. We evaluated the proposed method on two publicly available datasets, CBIS-DDSM and INbreast. Compared to three state-of-the-art fully convolutional networks, AUNet achieved the best performance, with an average Dice similarity coefficient of 81.8% for CBIS-DDSM and 79.1% for INbreast.
1805.06956
Ahmad Babaeian Jelodar
Ahmad Babaeian Jelodar, Md Sirajus Salekin, Yu Sun
Identifying Object States in Cooking-Related Images
7 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding object states is as important as object recognition for robotic task planning and manipulation. To our knowledge, this paper is the first to explicitly introduce and address the state identification problem in cooking-related images. In this paper, objects and ingredients in cooking videos are explored and the most frequent objects are analyzed. Eleven states of the most frequent cooking objects are examined, and a dataset of images containing those objects and their states is created. As a solution to the state identification problem, a ResNet-based deep model is proposed. The model is initialized with ImageNet weights and trained on the dataset of eleven classes. The trained state identification model is evaluated on a subset of the ImageNet dataset, and state labels are provided using a combination of the model and manual checking. Moreover, an individual model is fine-tuned for each object in the dataset using the weights from the initially trained model and object-specific images, where significant improvement is demonstrated.
[ { "created": "Thu, 17 May 2018 20:18:56 GMT", "version": "v1" }, { "created": "Sun, 27 May 2018 21:06:21 GMT", "version": "v2" }, { "created": "Tue, 30 Oct 2018 14:37:19 GMT", "version": "v3" } ]
2018-10-31
[ [ "Jelodar", "Ahmad Babaeian", "" ], [ "Salekin", "Md Sirajus", "" ], [ "Sun", "Yu", "" ] ]
Understanding object states is as important as object recognition for robotic task planning and manipulation. To our knowledge, this paper is the first to explicitly introduce and address the state identification problem in cooking-related images. In this paper, objects and ingredients in cooking videos are explored and the most frequent objects are analyzed. Eleven states of the most frequent cooking objects are examined, and a dataset of images containing those objects and their states is created. As a solution to the state identification problem, a ResNet-based deep model is proposed. The model is initialized with ImageNet weights and trained on the dataset of eleven classes. The trained state identification model is evaluated on a subset of the ImageNet dataset, and state labels are provided using a combination of the model and manual checking. Moreover, an individual model is fine-tuned for each object in the dataset using the weights from the initially trained model and object-specific images, where significant improvement is demonstrated.
2206.08809
Hongyu Hu
Hongyu Hu, Qi Wang, Zhengguang Zhang, Zhengyi Li, Zhenhai Gao
Holistic Transformer: A Joint Neural Network for Trajectory Prediction and Decision-Making of Autonomous Vehicles
26 pages, 6 figures
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Trajectory prediction and behavioral decision-making are two important tasks for autonomous vehicles that require a good understanding of the environmental context; behavioral decisions are better made by referring to the outputs of trajectory predictions. However, most current solutions perform these two tasks separately. Therefore, a joint neural network that combines multiple cues, named the holistic transformer, is proposed to predict trajectories and make behavioral decisions simultaneously. To better explore the intrinsic relationships between cues, the network uses existing knowledge and adopts three kinds of attention mechanisms: a sparse multi-head type for reducing the impact of noise, a feature-selection sparse type for optimally using partial prior knowledge, and a multi-head type with sigmoid activation for optimally using posterior knowledge. Compared with other trajectory prediction models, the proposed model has better overall performance and good interpretability. Perceptual noise robustness experiments demonstrate that the proposed model is robust to noise. Thus, simultaneous trajectory prediction and behavioral decision-making combining multiple cues can reduce computational costs and enhance semantic relationships between scenes and agents.
[ { "created": "Fri, 17 Jun 2022 14:38:11 GMT", "version": "v1" } ]
2022-06-20
[ [ "Hu", "Hongyu", "" ], [ "Wang", "Qi", "" ], [ "Zhang", "Zhengguang", "" ], [ "Li", "Zhengyi", "" ], [ "Gao", "Zhenhai", "" ] ]
Trajectory prediction and behavioral decision-making are two important tasks for autonomous vehicles that require a good understanding of the environmental context; behavioral decisions are better made by referring to the outputs of trajectory predictions. However, most current solutions perform these two tasks separately. Therefore, a joint neural network that combines multiple cues, named the holistic transformer, is proposed to predict trajectories and make behavioral decisions simultaneously. To better explore the intrinsic relationships between cues, the network uses existing knowledge and adopts three kinds of attention mechanisms: a sparse multi-head type for reducing the impact of noise, a feature-selection sparse type for optimally using partial prior knowledge, and a multi-head type with sigmoid activation for optimally using posterior knowledge. Compared with other trajectory prediction models, the proposed model has better overall performance and good interpretability. Perceptual noise robustness experiments demonstrate that the proposed model is robust to noise. Thus, simultaneous trajectory prediction and behavioral decision-making combining multiple cues can reduce computational costs and enhance semantic relationships between scenes and agents.
1002.1099
Ioannis Chatzigiannakis
Ioannis Chatzigiannakis, Georgios Mylonas, Orestis Akribopoulos, Marios Logaras, Panagiotis Kokkinos, Paul Spirakis
The "Hot Potato" Case: Challenges in Multiplayer Pervasive Games Based on Ad hoc Mobile Sensor Networks and the Experimental Evaluation of a Prototype Game
null
null
null
null
cs.HC cs.DC cs.MA cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we discuss multiplayer pervasive games that rely on the use of ad hoc mobile sensor networks. The unique feature of such games is that players interact with each other and their surrounding environment by using movement and presence as a means of performing game-related actions, utilizing sensor devices. We discuss the fundamental issues and challenges related to these types of games and the scenarios associated with them. We also present and evaluate an example of such a game, called "Hot Potato", developed using the Sun SPOT hardware platform. We provide a set of experimental results, both to evaluate our implementation and to identify issues that arise in pervasive games utilizing sensor network nodes; these results show that there is great potential in this type of game.
[ { "created": "Thu, 4 Feb 2010 23:02:02 GMT", "version": "v1" } ]
2010-02-08
[ [ "Chatzigiannakis", "Ioannis", "" ], [ "Mylonas", "Georgios", "" ], [ "Akribopoulos", "Orestis", "" ], [ "Logaras", "Marios", "" ], [ "Kokkinos", "Panagiotis", "" ], [ "Spirakis", "Paul", "" ] ]
In this work, we discuss multiplayer pervasive games that rely on the use of ad hoc mobile sensor networks. The unique feature of such games is that players interact with each other and their surrounding environment by using movement and presence as a means of performing game-related actions, utilizing sensor devices. We discuss the fundamental issues and challenges related to these types of games and the scenarios associated with them. We also present and evaluate an example of such a game, called "Hot Potato", developed using the Sun SPOT hardware platform. We provide a set of experimental results, both to evaluate our implementation and to identify issues that arise in pervasive games utilizing sensor network nodes; these results show that there is great potential in this type of game.
1101.5966
Eirik Rosnes
Eirik Rosnes and Alexandre Graell i Amat
On the Analysis of Weighted Nonbinary Repeat Multiple-Accumulate Codes
The material in this paper was presented in part at the 6th International Symposium on Turbo Codes & Iterative Information Processing, Brest, France, September 2010, and at the Information Theory and Applications (ITA) workshop, La Jolla, CA, February 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider weighted nonbinary repeat multiple-accumulate (WNRMA) code ensembles obtained from the serial concatenation of a nonbinary rate-1/n repeat code and the cascade of L>= 1 accumulators, where each encoder is followed by a nonbinary random weighter. The WNRMA codes are assumed to be iteratively decoded using the turbo principle with maximum a posteriori constituent decoders. We derive the exact weight enumerator of nonbinary accumulators and subsequently give the weight enumerators for WNRMA code ensembles. We formally prove that the symbol-wise minimum distance of WNRMA code ensembles asymptotically grows linearly with the block length when L >= 3 and n >= 2, and L=2 and n >= 3, for all powers of primes q >= 3 considered, where q is the field size. Thus, WNRMA code ensembles are asymptotically good for these parameters. We also give iterative decoding thresholds, computed by an extrinsic information transfer chart analysis, on the q-ary symmetric channel to show the convergence properties. Finally, we consider the binary image of WNRMA code ensembles and compare the asymptotic minimum distance growth rates with those of binary repeat multiple-accumulate code ensembles.
[ { "created": "Mon, 31 Jan 2011 13:54:30 GMT", "version": "v1" } ]
2011-02-01
[ [ "Rosnes", "Eirik", "" ], [ "Amat", "Alexandre Graell i", "" ] ]
In this paper, we consider weighted nonbinary repeat multiple-accumulate (WNRMA) code ensembles obtained from the serial concatenation of a nonbinary rate-1/n repeat code and the cascade of L>= 1 accumulators, where each encoder is followed by a nonbinary random weighter. The WNRMA codes are assumed to be iteratively decoded using the turbo principle with maximum a posteriori constituent decoders. We derive the exact weight enumerator of nonbinary accumulators and subsequently give the weight enumerators for WNRMA code ensembles. We formally prove that the symbol-wise minimum distance of WNRMA code ensembles asymptotically grows linearly with the block length when L >= 3 and n >= 2, and L=2 and n >= 3, for all powers of primes q >= 3 considered, where q is the field size. Thus, WNRMA code ensembles are asymptotically good for these parameters. We also give iterative decoding thresholds, computed by an extrinsic information transfer chart analysis, on the q-ary symmetric channel to show the convergence properties. Finally, we consider the binary image of WNRMA code ensembles and compare the asymptotic minimum distance growth rates with those of binary repeat multiple-accumulate code ensembles.
2301.08986
Zixuan Ke
Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu and Bing Liu
Adapting a Language Model While Preserving its General Knowledge
EMNLP 2022
null
null
null
cs.CL cs.AI cs.LG cs.NE
http://creativecommons.org/publicdomain/zero/1.0/
Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) on an unlabeled corpus of a particular domain to adapt the LM so that end-tasks in the domain achieve improved performance. However, existing DA-training methods are in some sense blind, as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain) knowledge to learn an integrated representation with both general and domain-specific knowledge. Experimental results demonstrate the effectiveness of the proposed approach.
[ { "created": "Sat, 21 Jan 2023 17:57:53 GMT", "version": "v1" } ]
2023-01-24
[ [ "Ke", "Zixuan", "" ], [ "Shao", "Yijia", "" ], [ "Lin", "Haowei", "" ], [ "Xu", "Hu", "" ], [ "Shu", "Lei", "" ], [ "Liu", "Bing", "" ] ]
Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) on an unlabeled corpus of a particular domain to adapt the LM so that end-tasks in the domain achieve improved performance. However, existing DA-training methods are in some sense blind, as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain) knowledge to learn an integrated representation with both general and domain-specific knowledge. Experimental results demonstrate the effectiveness of the proposed approach.
2310.11731
Jianlan Luo
Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The offline reinforcement learning (RL) paradigm provides a general recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data. While policy constraints, conservatism, and other methods for mitigating distributional shifts have made offline reinforcement learning more effective, the continuous action setting often necessitates various approximations for applying these techniques. Many of these challenges are greatly alleviated in discrete action settings, where offline RL constraints and regularizers can often be computed more precisely or even exactly. In this paper, we propose an adaptive scheme for action quantization. We use a VQ-VAE to learn state-conditioned action quantization, avoiding the exponential blowup that comes with na\"ive discretization of the action space. We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme. We further validate our approach on a set of challenging long-horizon complex robotic manipulation tasks in the Robomimic environment, where our discretized offline RL algorithms are able to improve upon their continuous counterparts by 2-3x. Our project page is at https://saqrl.github.io/
[ { "created": "Wed, 18 Oct 2023 06:07:10 GMT", "version": "v1" } ]
2023-10-19
[ [ "Luo", "Jianlan", "" ], [ "Dong", "Perry", "" ], [ "Wu", "Jeffrey", "" ], [ "Kumar", "Aviral", "" ], [ "Geng", "Xinyang", "" ], [ "Levine", "Sergey", "" ] ]
The offline reinforcement learning (RL) paradigm provides a general recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data. While policy constraints, conservatism, and other methods for mitigating distributional shifts have made offline reinforcement learning more effective, the continuous action setting often necessitates various approximations for applying these techniques. Many of these challenges are greatly alleviated in discrete action settings, where offline RL constraints and regularizers can often be computed more precisely or even exactly. In this paper, we propose an adaptive scheme for action quantization. We use a VQ-VAE to learn state-conditioned action quantization, avoiding the exponential blowup that comes with na\"ive discretization of the action space. We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme. We further validate our approach on a set of challenging long-horizon complex robotic manipulation tasks in the Robomimic environment, where our discretized offline RL algorithms are able to improve upon their continuous counterparts by 2-3x. Our project page is at https://saqrl.github.io/
2210.12598
Junyuan Fang
Junyuan Fang, Haixian Wen, Jiajing Wu, Qi Xuan, Zibin Zheng, Chi K. Tse
GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections
null
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) have found successful applications in various graph-related tasks. However, recent studies have shown that many GNNs are vulnerable to adversarial attacks. In the vast majority of existing studies, adversarial attacks on GNNs are launched via direct modification of the original graph, such as adding or removing links, which may not be applicable in practice. In this paper, we focus on a realistic attack operation via injecting fake nodes. The proposed Global Attack strategy via Node Injection (GANI) is designed under the comprehensive consideration of an unnoticeable perturbation setting from both the structure and feature domains. Specifically, to make the node injections as imperceptible and effective as possible, we propose a sampling operation to determine the degree of the newly injected nodes, and then generate features and select neighbors for these injected nodes based on the statistical information of features and evolutionary perturbations obtained from a genetic algorithm, respectively. In particular, the proposed feature generation mechanism is suitable for both binary and continuous node features. Extensive experimental results on benchmark datasets against both general and defended GNNs show the strong attack performance of GANI. Moreover, the imperceptibility analyses also demonstrate that GANI achieves a relatively unnoticeable injection on benchmark datasets.
[ { "created": "Sun, 23 Oct 2022 02:12:26 GMT", "version": "v1" } ]
2022-10-25
[ [ "Fang", "Junyuan", "" ], [ "Wen", "Haixian", "" ], [ "Wu", "Jiajing", "" ], [ "Xuan", "Qi", "" ], [ "Zheng", "Zibin", "" ], [ "Tse", "Chi K.", "" ] ]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks. However, recent studies have shown that many GNNs are vulnerable to adversarial attacks. In the vast majority of existing studies, adversarial attacks on GNNs are launched via direct modification of the original graph, such as adding or removing links, which may not be applicable in practice. In this paper, we focus on a realistic attack operation via injecting fake nodes. The proposed Global Attack strategy via Node Injection (GANI) is designed under the comprehensive consideration of an unnoticeable perturbation setting from both the structure and feature domains. Specifically, to make the node injections as imperceptible and effective as possible, we propose a sampling operation to determine the degree of the newly injected nodes, and then generate features and select neighbors for these injected nodes based on the statistical information of features and evolutionary perturbations obtained from a genetic algorithm, respectively. In particular, the proposed feature generation mechanism is suitable for both binary and continuous node features. Extensive experimental results on benchmark datasets against both general and defended GNNs show the strong attack performance of GANI. Moreover, the imperceptibility analyses also demonstrate that GANI achieves a relatively unnoticeable injection on benchmark datasets.
1801.03354
Blai Bonet
Wilmer Bandres, Blai Bonet, Hector Geffner
Planning with Pixels in (Almost) Real Time
Published at AAAI-18
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, width-based planning methods have been shown to yield state-of-the-art results in the Atari 2600 video games. For this, the states were associated with the (RAM) memory states of the simulator. In this work, we consider the same planning problem but using the screen instead. By using the same visual inputs, the planning results can be compared with those of humans and learning methods. We show that the planning approach, out of the box and without training, results in scores that compare well with those obtained by humans and learning methods, and moreover, by developing an episodic, rollout version of the IW(k) algorithm, we show that such scores can be obtained in almost real time.
[ { "created": "Wed, 10 Jan 2018 12:54:00 GMT", "version": "v1" } ]
2018-01-11
[ [ "Bandres", "Wilmer", "" ], [ "Bonet", "Blai", "" ], [ "Geffner", "Hector", "" ] ]
Recently, width-based planning methods have been shown to yield state-of-the-art results in the Atari 2600 video games. For this, the states were associated with the (RAM) memory states of the simulator. In this work, we consider the same planning problem but using the screen instead. By using the same visual inputs, the planning results can be compared with those of humans and learning methods. We show that the planning approach, out of the box and without training, results in scores that compare well with those obtained by humans and learning methods, and moreover, by developing an episodic, rollout version of the IW(k) algorithm, we show that such scores can be obtained in almost real time.
2102.03513
Rafael Dowsley
Sikha Pentyala and Rafael Dowsley and Martine De Cock
Privacy-Preserving Video Classification with Convolutional Neural Networks
null
null
null
null
cs.CR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many video classification applications require access to personal data, thereby posing an invasive security risk to the users' privacy. We propose a privacy-preserving implementation of single-frame method based video classification with convolutional neural networks that allows a party to infer a label from a video without necessitating the video owner to disclose their video to other entities in an unencrypted manner. Similarly, our approach removes the requirement of the classifier owner from revealing their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. Our results across a variety of security settings, spanning honest and dishonest majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy, and without leaking sensitive user information.
[ { "created": "Sat, 6 Feb 2021 05:05:31 GMT", "version": "v1" } ]
2021-02-09
[ [ "Pentyala", "Sikha", "" ], [ "Dowsley", "Rafael", "" ], [ "De Cock", "Martine", "" ] ]
Many video classification applications require access to personal data, thereby posing an invasive security risk to the users' privacy. We propose a privacy-preserving implementation of single-frame method based video classification with convolutional neural networks that allows a party to infer a label from a video without necessitating the video owner to disclose their video to other entities in an unencrypted manner. Similarly, our approach removes the requirement of the classifier owner from revealing their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. Our results across a variety of security settings, spanning honest and dishonest majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy, and without leaking sensitive user information.
2006.14691
Ilya Chugunov
Ilya Chugunov and Avideh Zakhor
Duodepth: Static Gesture Recognition Via Dual Depth Sensors
26th International Conference on Image Processing
2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 3467-3471
10.1109/ICIP.2019.8803665
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Static gesture recognition is an effective non-verbal communication channel between a user and their devices; however many modern methods are sensitive to the relative pose of the user's hands with respect to the capture device, as parts of the gesture can become occluded. We present two methodologies for gesture recognition via synchronized recording from two depth cameras to alleviate this occlusion problem. One is a more classic approach using iterative closest point registration to accurately fuse point clouds and a single PointNet architecture for classification, and the other is a dual Point-Net architecture for classification without registration. On a manually collected data-set of 20,100 point clouds we show a 39.2% reduction in misclassification for the fused point cloud method, and 53.4% for the dual PointNet, when compared to a standard single camera pipeline.
[ { "created": "Thu, 25 Jun 2020 20:41:47 GMT", "version": "v1" } ]
2020-06-29
[ [ "Chugunov", "Ilya", "" ], [ "Zakhor", "Avideh", "" ] ]
Static gesture recognition is an effective non-verbal communication channel between a user and their devices; however many modern methods are sensitive to the relative pose of the user's hands with respect to the capture device, as parts of the gesture can become occluded. We present two methodologies for gesture recognition via synchronized recording from two depth cameras to alleviate this occlusion problem. One is a more classic approach using iterative closest point registration to accurately fuse point clouds and a single PointNet architecture for classification, and the other is a dual Point-Net architecture for classification without registration. On a manually collected data-set of 20,100 point clouds we show a 39.2% reduction in misclassification for the fused point cloud method, and 53.4% for the dual PointNet, when compared to a standard single camera pipeline.
0711.3276
EDA Publishing Association
A. Phommahaxay (ESYCOM-Esiee), G. Lissorgues (ESYCOM-Esiee), L. Rousseau (ESYCOM-Esiee), T. Bourouina (ESYCOM-Esiee), P. Nicole
Surface Conditioning Effect on Vacuum Microelectronics Components Fabricated by Deep Reactive Ion Etching
Submitted on behalf of TIMA Editions (http://irevues.inist.fr/tima-editions)
Dans Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2006, Stresa, Lago Maggiore : Italie (2006)
null
null
cs.OH
null
Advances in material processing such as silicon micromachining are opening the way to vacuum microelectronics. Two-dimensional vacuum components can be fabricated using the microsystems processes. We developed such devices using a single metal layer and silicon micromachining by DRIE. The latter technological step has significant impact on the characteristics of the vacuum components. This paper presents a brief summary of electron emission possibilities and the design leading to the fabrication of a lateral field emission diode. First measurement results and the aging of the devices are also discussed.
[ { "created": "Wed, 21 Nov 2007 08:25:17 GMT", "version": "v1" } ]
2007-11-29
[ [ "Phommahaxay", "A.", "", "ESYCOM-Esiee" ], [ "Lissorgues", "G.", "", "ESYCOM-Esiee" ], [ "Rousseau", "L.", "", "ESYCOM-Esiee" ], [ "Bourouina", "T.", "", "ESYCOM-Esiee" ], [ "Nicole", "P.", "" ] ]
Advances in material processing such as silicon micromachining are opening the way to vacuum microelectronics. Two-dimensional vacuum components can be fabricated using the microsystems processes. We developed such devices using a single metal layer and silicon micromachining by DRIE. The latter technological step has significant impact on the characteristics of the vacuum components. This paper presents a brief summary of electron emission possibilities and the design leading to the fabrication of a lateral field emission diode. First measurement results and the aging of the devices are also discussed.
cs/0506065
Mitsugu Iwamoto
Mitsugu Iwamoto, Hirosuke Yamamoto
Strongly secure ramp secret sharing schemes for general access structures
null
null
null
null
cs.CR cs.IT math.IT
null
Ramp secret sharing (SS) schemes can be classified into strong ramp SS schemes and weak ramp SS schemes. The strong ramp SS schemes do not leak out any part of a secret explicitly even in the case where some information about the secret leaks from a non-qualified set of shares, and hence, they are more desirable than weak ramp SS schemes. However, it is not known how to construct the strong ramp SS schemes in the case of general access structures. In this paper, it is shown that a strong ramp SS scheme can always be constructed from a SS scheme with plural secrets for any feasible general access structure. As a byproduct, it is pointed out that threshold ramp SS schemes based on Shamir's polynomial interpolation method are {\em not} always strong.
[ { "created": "Wed, 15 Jun 2005 06:36:17 GMT", "version": "v1" } ]
2016-08-31
[ [ "Iwamoto", "Mitsugu", "" ], [ "Yamamoto", "Hirosuke", "" ] ]
Ramp secret sharing (SS) schemes can be classified into strong ramp SS schemes and weak ramp SS schemes. The strong ramp SS schemes do not leak out any part of a secret explicitly even in the case where some information about the secret leaks from a non-qualified set of shares, and hence, they are more desirable than weak ramp SS schemes. However, it is not known how to construct the strong ramp SS schemes in the case of general access structures. In this paper, it is shown that a strong ramp SS scheme can always be constructed from a SS scheme with plural secrets for any feasible general access structure. As a byproduct, it is pointed out that threshold ramp SS schemes based on Shamir's polynomial interpolation method are {\em not} always strong.
1705.08174
Hendrik Fichtenberger
Hendrik Fichtenberger, Yadu Vasudev
Distributed Testing of Conductance
revised introduction and some fixes
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of testing conductance in the setting of distributed computing and give a two-sided tester that takes $\mathcal{O}(\log(n) / (\epsilon \Phi^2))$ rounds to decide if a graph has conductance at least $\Phi$ or is $\epsilon$-far from having conductance at least $\Phi^2 / 1000$ in the distributed CONGEST model. We also show that $\Omega(\log n)$ rounds are necessary for testing conductance even in the LOCAL model. In the case of a connected graph, we show that we can perform the test even when the number of vertices in the graph is not known a priori. This is the first two-sided tester in the distributed model we are aware of. A key observation is that one can perform a polynomial number of random walks from a small set of vertices if it is sufficient to track only some small statistics of the walks. This greatly reduces the congestion on the edges compared to tracking each walk individually.
[ { "created": "Tue, 23 May 2017 10:50:06 GMT", "version": "v1" }, { "created": "Mon, 17 Jul 2017 13:20:42 GMT", "version": "v2" }, { "created": "Thu, 19 Oct 2017 11:28:26 GMT", "version": "v3" } ]
2017-10-20
[ [ "Fichtenberger", "Hendrik", "" ], [ "Vasudev", "Yadu", "" ] ]
We study the problem of testing conductance in the setting of distributed computing and give a two-sided tester that takes $\mathcal{O}(\log(n) / (\epsilon \Phi^2))$ rounds to decide if a graph has conductance at least $\Phi$ or is $\epsilon$-far from having conductance at least $\Phi^2 / 1000$ in the distributed CONGEST model. We also show that $\Omega(\log n)$ rounds are necessary for testing conductance even in the LOCAL model. In the case of a connected graph, we show that we can perform the test even when the number of vertices in the graph is not known a priori. This is the first two-sided tester in the distributed model we are aware of. A key observation is that one can perform a polynomial number of random walks from a small set of vertices if it is sufficient to track only some small statistics of the walks. This greatly reduces the congestion on the edges compared to tracking each walk individually.
1806.09076
Yanxiang Jiang
Yabai Hu, Yanxiang Jiang, Mehdi Bennis, and Fu-Chun Zheng
Distributed Edge Caching in Ultra-dense Fog Radio Access Networks: A Mean Field Approach
6 pages, 3 figures. This paper has been accepted by IEEE VTC 2018 FALL
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the edge caching problem in ultra-dense fog radio access networks (F-RAN) is investigated. Taking into account time-variant user requests and ultra-dense deployment of fog access points (F-APs), we propose a dynamic distributed edge caching scheme to jointly minimize the request service delay and fronthaul traffic load. Considering the interactive relationship among F-APs, we model the caching optimization problem as a stochastic differential game (SDG) which captures the temporal dynamics of F-AP states and incorporates user requests status. The SDG is further approximated as a mean field game (MFG) by exploiting the ultra-dense property of F-RAN. In the MFG, each F-AP can optimize its caching policy independently through iteratively solving the corresponding partial differential equations without any information exchange with other F-APs. The simulation results show that the proposed edge caching scheme outperforms the baseline schemes under both static and time-variant user requests.
[ { "created": "Sun, 24 Jun 2018 03:44:17 GMT", "version": "v1" } ]
2018-06-26
[ [ "Hu", "Yabai", "" ], [ "Jiang", "Yanxiang", "" ], [ "Bennis", "Mehdi", "" ], [ "Zheng", "Fu-Chun", "" ] ]
In this paper, the edge caching problem in ultra-dense fog radio access networks (F-RAN) is investigated. Taking into account time-variant user requests and ultra-dense deployment of fog access points (F-APs), we propose a dynamic distributed edge caching scheme to jointly minimize the request service delay and fronthaul traffic load. Considering the interactive relationship among F-APs, we model the caching optimization problem as a stochastic differential game (SDG) which captures the temporal dynamics of F-AP states and incorporates user requests status. The SDG is further approximated as a mean field game (MFG) by exploiting the ultra-dense property of F-RAN. In the MFG, each F-AP can optimize its caching policy independently through iteratively solving the corresponding partial differential equations without any information exchange with other F-APs. The simulation results show that the proposed edge caching scheme outperforms the baseline schemes under both static and time-variant user requests.
1905.09543
Igor Korkin
Igor Korkin
MemoryRanger Prevents Hijacking FILE_OBJECT Structures in Windows Kernel
10 pages, 5 figures. Korkin, I. (2019, May 15-16). MemoryRanger Prevents Hijacking FILE_OBJECT Structures in Windows Kernel. Paper presented at the Proceedings of the 14th annual Conference on Digital Forensics, Security and Law (CDFSL), Embry-Riddle Aeronautical University, Daytona Beach, Florida, USA. Retrieved from https://commons.erau.edu/adfsl/2019/paper-presentation/7/
null
null
null
cs.CR cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Windows OS kernel memory is one of the main targets of cyber-attacks. By launching such attacks, hackers are succeeding in process privilege escalation and tampering with users data by accessing kernel mode memory. This paper considers a new example of such an attack, which results in access to the files opened in an exclusive mode. Windows built-in security features prevent such legal access, but attackers can circumvent them by patching dynamically allocated objects. The research shows that the Windows 10, version 1809 x64 is vulnerable to this attack. The paper provides an example of using MemoryRanger, a hypervisor-based solution to prevent such attack by running kernel-mode drivers in isolated kernel memory enclaves.
[ { "created": "Thu, 23 May 2019 08:57:45 GMT", "version": "v1" } ]
2019-05-31
[ [ "Korkin", "Igor", "" ] ]
Windows OS kernel memory is one of the main targets of cyber-attacks. By launching such attacks, hackers are succeeding in process privilege escalation and tampering with users data by accessing kernel mode memory. This paper considers a new example of such an attack, which results in access to the files opened in an exclusive mode. Windows built-in security features prevent such legal access, but attackers can circumvent them by patching dynamically allocated objects. The research shows that the Windows 10, version 1809 x64 is vulnerable to this attack. The paper provides an example of using MemoryRanger, a hypervisor-based solution to prevent such attack by running kernel-mode drivers in isolated kernel memory enclaves.
2108.08596
Seogkyu Jeon
Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, and Hyeran Byun
Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization
Accepted to ACM MM 2021 (oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain generalization aims to enhance the model robustness against domain shift without accessing the target domain. Since the available source domains for training are limited, recent approaches focus on generating samples of novel domains. Nevertheless, they either struggle with the optimization problem when synthesizing abundant domains or cause the distortion of class semantics. To these ends, we propose a novel domain generalization framework where feature statistics are utilized for stylizing original features to ones with novel domain properties. To preserve class information during stylization, we first decompose features into high and low frequency components. Afterward, we stylize the low frequency components with the novel domain styles sampled from the manipulated statistics, while preserving the shape cues in high frequency ones. As the final step, we re-merge both components to synthesize novel domain features. To enhance domain robustness, we utilize the stylized features to maintain the model consistency in terms of features as well as outputs. We achieve the feature consistency with the proposed domain-aware supervised contrastive loss, which ensures domain invariance while increasing class discriminability. Experimental results demonstrate the effectiveness of the proposed feature stylization and the domain-aware contrastive loss. Through quantitative comparisons, we verify the lead of our method upon existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
[ { "created": "Thu, 19 Aug 2021 10:04:01 GMT", "version": "v1" } ]
2021-08-20
[ [ "Jeon", "Seogkyu", "" ], [ "Hong", "Kibeom", "" ], [ "Lee", "Pilhyeon", "" ], [ "Lee", "Jewook", "" ], [ "Byun", "Hyeran", "" ] ]
Domain generalization aims to enhance the model robustness against domain shift without accessing the target domain. Since the available source domains for training are limited, recent approaches focus on generating samples of novel domains. Nevertheless, they either struggle with the optimization problem when synthesizing abundant domains or cause the distortion of class semantics. To these ends, we propose a novel domain generalization framework where feature statistics are utilized for stylizing original features to ones with novel domain properties. To preserve class information during stylization, we first decompose features into high and low frequency components. Afterward, we stylize the low frequency components with the novel domain styles sampled from the manipulated statistics, while preserving the shape cues in high frequency ones. As the final step, we re-merge both components to synthesize novel domain features. To enhance domain robustness, we utilize the stylized features to maintain the model consistency in terms of features as well as outputs. We achieve the feature consistency with the proposed domain-aware supervised contrastive loss, which ensures domain invariance while increasing class discriminability. Experimental results demonstrate the effectiveness of the proposed feature stylization and the domain-aware contrastive loss. Through quantitative comparisons, we verify the lead of our method upon existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
1710.03090
Noson S. Yanofsky
Noson S. Yanofsky
Theoretical Computer Science for the Working Category Theorist
47 pages
null
null
null
cs.LO cs.CC math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretical computer science discusses foundational issues about computations. It asks and answers questions such as "What is a computation?", "What is computable?", "What is efficiently computable?","What is information?", "What is random?", "What is an algorithm?", etc. We will present many of the major themes and theorems with the basic language of category theory. Surprisingly, many interesting theorems and concepts of theoretical computer science are easy consequences of functoriality and composition when you look at the right categories and functors connecting them.
[ { "created": "Wed, 4 Oct 2017 19:19:00 GMT", "version": "v1" } ]
2017-10-10
[ [ "Yanofsky", "Noson S.", "" ] ]
Theoretical computer science discusses foundational issues about computations. It asks and answers questions such as "What is a computation?", "What is computable?", "What is efficiently computable?","What is information?", "What is random?", "What is an algorithm?", etc. We will present many of the major themes and theorems with the basic language of category theory. Surprisingly, many interesting theorems and concepts of theoretical computer science are easy consequences of functoriality and composition when you look at the right categories and functors connecting them.
1502.04204
Amelia Carolina Sparavigna
Amelia Carolina Sparavigna
Gray-Level Image Transitions Driven by Tsallis Entropic Index
Tsallis Entropy, Image Processing, Image Segmentation, Image Thresholding, Texture Transitions, Medical Image Processing, Typos emended
International Journal of Sciences 4(2), 16-25, 2015
10.18483/ijSci.621
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The maximum entropy principle is largely used in thresholding and segmentation of images. Among the several formulations of this principle, the most effectively applied is that based on Tsallis non-extensive entropy. Here, we discuss the role of its entropic index in determining the thresholds. When this index is spanning the interval (0,1), for some images, the values of thresholds can have large leaps. In this manner, we observe abrupt transitions in the appearance of corresponding bi-level or multi-level images. These gray-level image transitions are analogous to order or texture transitions observed in physical systems, transitions which are driven by the temperature or by other physical quantities.
[ { "created": "Sat, 14 Feb 2015 12:35:12 GMT", "version": "v1" }, { "created": "Thu, 19 Feb 2015 18:12:30 GMT", "version": "v2" } ]
2015-08-06
[ [ "Sparavigna", "Amelia Carolina", "" ] ]
The maximum entropy principle is largely used in thresholding and segmentation of images. Among the several formulations of this principle, the most effectively applied is that based on Tsallis non-extensive entropy. Here, we discuss the role of its entropic index in determining the thresholds. When this index is spanning the interval (0,1), for some images, the values of thresholds can have large leaps. In this manner, we observe abrupt transitions in the appearance of corresponding bi-level or multi-level images. These gray-level image transitions are analogous to order or texture transitions observed in physical systems, transitions which are driven by the temperature or by other physical quantities.
1804.03904
Nils Gessert
Nils Gessert and Markus Heyder and Sarah Latus and Matthias Lutz and Alexander Schlaefer
Plaque Classification in Coronary Arteries from IVOCT Images Using Convolutional Neural Networks and Transfer Learning
Submitted to CARS 2018, accepted for publication
null
10.1007/s11548-018-1766-y
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advanced atherosclerosis in the coronary arteries is one of the leading causes of deaths worldwide while being preventable and treatable. In order to image atherosclerotic lesions (plaque), intravascular optical coherence tomography (IVOCT) can be used. The technique provides high-resolution images of arterial walls which allows for early plaque detection by experts. Due to the vast amount of IVOCT images acquired in clinical routines, automatic plaque detection has been addressed. For example, attenuation profiles in single A-Scans of IVOCT images are examined to detect plaque. We address automatic plaque classification from entire IVOCT images, the cross-sectional view of the artery, using deep feature learning. In this way, we take context between A-Scans into account and we directly learn relevant features from the image source without the need for handcrafting features.
[ { "created": "Wed, 11 Apr 2018 09:50:58 GMT", "version": "v1" } ]
2018-06-11
[ [ "Gessert", "Nils", "" ], [ "Heyder", "Markus", "" ], [ "Latus", "Sarah", "" ], [ "Lutz", "Matthias", "" ], [ "Schlaefer", "Alexander", "" ] ]
Advanced atherosclerosis in the coronary arteries is one of the leading causes of deaths worldwide while being preventable and treatable. In order to image atherosclerotic lesions (plaque), intravascular optical coherence tomography (IVOCT) can be used. The technique provides high-resolution images of arterial walls which allows for early plaque detection by experts. Due to the vast amount of IVOCT images acquired in clinical routines, automatic plaque detection has been addressed. For example, attenuation profiles in single A-Scans of IVOCT images are examined to detect plaque. We address automatic plaque classification from entire IVOCT images, the cross-sectional view of the artery, using deep feature learning. In this way, we take context between A-Scans into account and we directly learn relevant features from the image source without the need for handcrafting features.
2106.03593
Xiangyu Liu
Xiangyu Liu, Chuan Yu, Zhilin Zhang, Zhenzhe Zheng, Yu Rong, Hongtao Lv, Da Huo, Yiqing Wang, Dagui Chen, Jian Xu, Fan Wu, Guihai Chen and Xiaoqiang Zhu
Neural Auction: End-to-End Learning of Auction Mechanisms for E-Commerce Advertising
To appear in the Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021
null
null
null
cs.GT cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In e-commerce advertising, it is crucial to jointly consider various performance metrics, e.g., user experience, advertiser utility, and platform revenue. Traditional auction mechanisms, such as GSP and VCG auctions, can be suboptimal due to their fixed allocation rules to optimize a single performance metric (e.g., revenue or social welfare). Recently, data-driven auctions, learned directly from auction outcomes to optimize multiple performance metrics, have attracted increasing research interests. However, the procedure of auction mechanisms involves various discrete calculation operations, making it challenging to be compatible with continuous optimization pipelines in machine learning. In this paper, we design \underline{D}eep \underline{N}eural \underline{A}uctions (DNAs) to enable end-to-end auction learning by proposing a differentiable model to relax the discrete sorting operation, a key component in auctions. We optimize the performance metrics by developing deep models to efficiently extract contexts from auctions, providing rich features for auction design. We further integrate the game theoretical conditions within the model design, to guarantee the stability of the auctions. DNAs have been successfully deployed in the e-commerce advertising system at Taobao. Experimental evaluation results on both large-scale data set as well as online A/B test demonstrated that DNAs significantly outperformed other mechanisms widely adopted in industry.
[ { "created": "Mon, 7 Jun 2021 13:20:40 GMT", "version": "v1" }, { "created": "Wed, 14 Jul 2021 03:16:56 GMT", "version": "v2" } ]
2021-07-15
[ [ "Liu", "Xiangyu", "" ], [ "Yu", "Chuan", "" ], [ "Zhang", "Zhilin", "" ], [ "Zheng", "Zhenzhe", "" ], [ "Rong", "Yu", "" ], [ "Lv", "Hongtao", "" ], [ "Huo", "Da", "" ], [ "Wang", "Yiqing", "" ], [ "Chen", "Dagui", "" ], [ "Xu", "Jian", "" ], [ "Wu", "Fan", "" ], [ "Chen", "Guihai", "" ], [ "Zhu", "Xiaoqiang", "" ] ]
In e-commerce advertising, it is crucial to jointly consider various performance metrics, e.g., user experience, advertiser utility, and platform revenue. Traditional auction mechanisms, such as GSP and VCG auctions, can be suboptimal due to their fixed allocation rules to optimize a single performance metric (e.g., revenue or social welfare). Recently, data-driven auctions, learned directly from auction outcomes to optimize multiple performance metrics, have attracted increasing research interests. However, the procedure of auction mechanisms involves various discrete calculation operations, making it challenging to be compatible with continuous optimization pipelines in machine learning. In this paper, we design \underline{D}eep \underline{N}eural \underline{A}uctions (DNAs) to enable end-to-end auction learning by proposing a differentiable model to relax the discrete sorting operation, a key component in auctions. We optimize the performance metrics by developing deep models to efficiently extract contexts from auctions, providing rich features for auction design. We further integrate the game theoretical conditions within the model design, to guarantee the stability of the auctions. DNAs have been successfully deployed in the e-commerce advertising system at Taobao. Experimental evaluation results on both large-scale data set as well as online A/B test demonstrated that DNAs significantly outperformed other mechanisms widely adopted in industry.
2202.06273
Gang Chen
Gang Chen, Wei Dong, Peng Peng, Javier Alonso-Mora, Xiangyang Zhu
Continuous Occupancy Mapping in Dynamic Environments Using Particles
This paper has been accepted by IEEE Transactions on Robotics
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Particle-based dynamic occupancy maps were proposed in recent years to model the obstacles in dynamic environments. Current particle-based maps describe the occupancy status in discrete grid form and suffer from the grid size problem, wherein a large grid size is unfavorable for motion planning, while a small grid size lowers efficiency and causes gaps and inconsistencies. To tackle this problem, this paper generalizes the particle-based map into continuous space and builds an efficient 3D egocentric local map. A dual-structure subspace division paradigm, composed of a voxel subspace division and a novel pyramid-like subspace division, is proposed to propagate particles and update the map efficiently with the consideration of occlusions. The occupancy status of an arbitrary point in the map space can then be estimated with the particles' weights. To further enhance the performance of simultaneously modeling static and dynamic obstacles and minimize noise, an initial velocity estimation approach and a mixture model are utilized. Experimental results show that our map can effectively and efficiently model both dynamic obstacles and static obstacles. Compared to the state-of-the-art grid-form particle-based map, our map enables continuous occupancy estimation and substantially improves the performance in different resolutions.
[ { "created": "Sun, 13 Feb 2022 09:55:48 GMT", "version": "v1" }, { "created": "Thu, 19 Oct 2023 14:51:34 GMT", "version": "v2" } ]
2023-10-20
[ [ "Chen", "Gang", "" ], [ "Dong", "Wei", "" ], [ "Peng", "Peng", "" ], [ "Alonso-Mora", "Javier", "" ], [ "Zhu", "Xiangyang", "" ] ]
Particle-based dynamic occupancy maps were proposed in recent years to model the obstacles in dynamic environments. Current particle-based maps describe the occupancy status in discrete grid form and suffer from the grid size problem, wherein a large grid size is unfavorable for motion planning, while a small grid size lowers efficiency and causes gaps and inconsistencies. To tackle this problem, this paper generalizes the particle-based map into continuous space and builds an efficient 3D egocentric local map. A dual-structure subspace division paradigm, composed of a voxel subspace division and a novel pyramid-like subspace division, is proposed to propagate particles and update the map efficiently with the consideration of occlusions. The occupancy status of an arbitrary point in the map space can then be estimated with the particles' weights. To further enhance the performance of simultaneously modeling static and dynamic obstacles and minimize noise, an initial velocity estimation approach and a mixture model are utilized. Experimental results show that our map can effectively and efficiently model both dynamic obstacles and static obstacles. Compared to the state-of-the-art grid-form particle-based map, our map enables continuous occupancy estimation and substantially improves the performance in different resolutions.
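The central idea, estimating occupancy at an arbitrary continuous point from the weights of nearby particles, can be sketched in a few lines. This toy version simply sums the weights of particles within a fixed radius and clamps to [0, 1]; the radius, the clamping, and the particle tuple format are illustrative choices, not the paper's update rules.

```python
import math

def occupancy(query, particles, radius=0.3):
    """Estimate occupancy at a continuous 3D point as the total weight
    of particles within `radius` of it, clamped to [0, 1].
    Toy sketch of continuous occupancy estimation from particle weights."""
    w = sum(wt for (x, y, z, wt) in particles
            if math.dist(query, (x, y, z)) <= radius)
    return min(w, 1.0)

# Each particle: (x, y, z, weight)
particles = [(0.0, 0.0, 0.0, 0.4), (0.1, 0.0, 0.0, 0.5), (5.0, 5.0, 5.0, 0.9)]
p_near = occupancy((0.0, 0.0, 0.0), particles)   # 0.4 + 0.5 = 0.9
p_free = occupancy((2.0, 2.0, 2.0), particles)   # no particles nearby -> 0.0
```

Because the estimate is a function of a continuous query point rather than a grid cell index, it sidesteps the grid-size trade-off the abstract describes.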
1201.6530
Purushottam Kar
Purushottam Kar and Harish Karnick
Random Feature Maps for Dot Product Kernels
To appear in the proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS 2012). This version corrects a minor error with Lemma 10. Acknowledgements : Devanshu Bhimwal
Journal of Machine Learning Research, W&CP 22 (2012) 583-591
null
null
cs.LG cs.CG math.FA stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.
[ { "created": "Tue, 31 Jan 2012 12:59:50 GMT", "version": "v1" }, { "created": "Fri, 2 Mar 2012 13:57:55 GMT", "version": "v2" }, { "created": "Mon, 26 Mar 2012 10:56:00 GMT", "version": "v3" } ]
2015-03-20
[ [ "Kar", "Purushottam", "" ], [ "Karnick", "Harish", "" ] ]
Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.
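The core estimator behind such randomized feature maps can be sketched for a single monomial of the Maclaurin series: drawing `degree` Rademacher vectors per output coordinate yields features whose native dot product is an unbiased estimate of the polynomial kernel (x·y)^degree. The code below is an illustrative simplification, not the paper's full construction (which also randomizes over the Maclaurin degree); all names are illustrative.

```python
import numpy as np

def rademacher_features(X, degree, D, rng):
    """Random feature map whose dot products estimate (x . y)**degree.

    For each of the D output coordinates, draw `degree` Rademacher (+/-1)
    vectors w_1..w_p and set z(x) = prod_t (w_t . x) / sqrt(D), so that
    E[z(x) . z(y)] = (x . y)**degree."""
    n, d = X.shape
    W = rng.choice([-1.0, 1.0], size=(D, degree, d))
    # (D, degree, d) @ (d, n) -> (D, degree, n); product over the degree axis
    Z = np.prod(W @ X.T, axis=1).T / np.sqrt(D)   # shape (n, D)
    return Z

rng = np.random.default_rng(0)
X = np.array([[2.0], [3.0]])
Z = rademacher_features(X, degree=2, D=64, rng=rng)
K_hat = Z @ Z.T   # estimates the kernel matrix with entries (x_i . x_j)**2
```

A convenient sanity check: when the input dimension is 1, the random sign factors cancel exactly, so the estimate equals (x·y)^degree with no variance at all; in higher dimensions the variance shrinks as D grows.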
2006.13673
Przemys{\l}aw Uzna\'nski
Shay Golan, Tomasz Kociumaka, Tsvi Kopelowitz, Ely Porat, Przemys{\l}aw Uzna\'nski
Improved Circular $k$-Mismatch Sketches
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The shift distance $\mathsf{sh}(S_1,S_2)$ between two strings $S_1$ and $S_2$ of the same length is defined as the minimum Hamming distance between $S_1$ and any rotation (cyclic shift) of $S_2$. We study the problem of sketching the shift distance, which is the following communication complexity problem: Strings $S_1$ and $S_2$ of length $n$ are given to two identical players (encoders), who independently compute sketches (summaries) $\mathtt{sk}(S_1)$ and $\mathtt{sk}(S_2)$, respectively, so that upon receiving the two sketches, a third player (decoder) is able to compute (or approximate) $\mathsf{sh}(S_1,S_2)$ with high probability. This paper primarily focuses on the more general $k$-mismatch version of the problem, where the decoder is allowed to declare a failure if $\mathsf{sh}(S_1,S_2)>k$, where $k$ is a parameter known to all parties. Andoni et al. (STOC'13) introduced exact circular $k$-mismatch sketches of size $\widetilde{O}(k+D(n))$, where $D(n)$ is the number of divisors of $n$. Andoni et al. also showed that their sketch size is optimal in the class of linear homomorphic sketches. We circumvent this lower bound by designing a (non-linear) exact circular $k$-mismatch sketch of size $\widetilde{O}(k)$; this size matches communication-complexity lower bounds. We also design $(1\pm \varepsilon)$-approximate circular $k$-mismatch sketch of size $\widetilde{O}(\min(\varepsilon^{-2}\sqrt{k}, \varepsilon^{-1.5}\sqrt{n}))$, which improves upon an $\widetilde{O}(\varepsilon^{-2}\sqrt{n})$-size sketch of Crouch and McGregor (APPROX'11).
[ { "created": "Wed, 24 Jun 2020 12:44:22 GMT", "version": "v1" } ]
2020-06-25
[ [ "Golan", "Shay", "" ], [ "Kociumaka", "Tomasz", "" ], [ "Kopelowitz", "Tsvi", "" ], [ "Porat", "Ely", "" ], [ "Uznański", "Przemysław", "" ] ]
The shift distance $\mathsf{sh}(S_1,S_2)$ between two strings $S_1$ and $S_2$ of the same length is defined as the minimum Hamming distance between $S_1$ and any rotation (cyclic shift) of $S_2$. We study the problem of sketching the shift distance, which is the following communication complexity problem: Strings $S_1$ and $S_2$ of length $n$ are given to two identical players (encoders), who independently compute sketches (summaries) $\mathtt{sk}(S_1)$ and $\mathtt{sk}(S_2)$, respectively, so that upon receiving the two sketches, a third player (decoder) is able to compute (or approximate) $\mathsf{sh}(S_1,S_2)$ with high probability. This paper primarily focuses on the more general $k$-mismatch version of the problem, where the decoder is allowed to declare a failure if $\mathsf{sh}(S_1,S_2)>k$, where $k$ is a parameter known to all parties. Andoni et al. (STOC'13) introduced exact circular $k$-mismatch sketches of size $\widetilde{O}(k+D(n))$, where $D(n)$ is the number of divisors of $n$. Andoni et al. also showed that their sketch size is optimal in the class of linear homomorphic sketches. We circumvent this lower bound by designing a (non-linear) exact circular $k$-mismatch sketch of size $\widetilde{O}(k)$; this size matches communication-complexity lower bounds. We also design $(1\pm \varepsilon)$-approximate circular $k$-mismatch sketch of size $\widetilde{O}(\min(\varepsilon^{-2}\sqrt{k}, \varepsilon^{-1.5}\sqrt{n}))$, which improves upon an $\widetilde{O}(\varepsilon^{-2}\sqrt{n})$-size sketch of Crouch and McGregor (APPROX'11).
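For reference, the quantity being sketched can be computed exactly by brute force in O(n^2) time; the point of the paper is to approximate it from small sketches instead of the full strings. A minimal reference implementation:

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def shift_distance(s1, s2):
    """sh(S1, S2): minimum Hamming distance between S1 and any rotation
    (cyclic shift) of S2.  Brute-force O(n^2) reference; the paper's
    sketches approximate this value from O~(k)-size summaries."""
    assert len(s1) == len(s2)
    n = len(s2)
    return min(hamming(s1, s2[r:] + s2[:r]) for r in range(n))

d = shift_distance("abcd", "cdab")   # "cdab" is a rotation of "abcd" -> 0
```

In the k-mismatch version, a decoder would additionally be allowed to report failure whenever this value exceeds the agreed threshold k.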
1408.5979
EPTCS
Rumyana Neykova (Imperial College London), Laura Bocchi (Imperial College London), Nobuko Yoshida (Imperial College London)
Timed Runtime Monitoring for Multiparty Conversations
In Proceedings BEAT 2014, arXiv:1408.5564
EPTCS 162, 2014, pp. 19-26
10.4204/EPTCS.162.3
null
cs.DC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a dynamic verification framework for protocols in real-time distributed systems. The framework is based on Scribble, a tool-chain for design and verification of choreographies based on multiparty session types, developed with our industrial partners. Drawing from recent work on multiparty session types for real-time interactions, we extend Scribble with clocks, resets, and clock predicates constraining the times in which interactions should occur. We present a timed API for Python to program distributed implementations of Scribble specifications. A dynamic verification framework ensures the safe execution of applications written with our timed API: we have implemented dedicated runtime monitors that check that each interaction occurs at a correct timing with respect to the corresponding Scribble specification. The performance of our implementation and its practicability are analysed via benchmarking.
[ { "created": "Tue, 26 Aug 2014 02:15:40 GMT", "version": "v1" } ]
2014-08-27
[ [ "Neykova", "Rumyana", "", "Imperial College London" ], [ "Bocchi", "Laura", "", "Imperial\n College London" ], [ "Yoshida", "Nobuko", "", "Imperial College London" ] ]
We propose a dynamic verification framework for protocols in real-time distributed systems. The framework is based on Scribble, a tool-chain for design and verification of choreographies based on multiparty session types, developed with our industrial partners. Drawing from recent work on multiparty session types for real-time interactions, we extend Scribble with clocks, resets, and clock predicates constraining the times in which interactions should occur. We present a timed API for Python to program distributed implementations of Scribble specifications. A dynamic verification framework ensures the safe execution of applications written with our timed API: we have implemented dedicated runtime monitors that check that each interaction occurs at a correct timing with respect to the corresponding Scribble specification. The performance of our implementation and its practicability are analysed via benchmarking.
2207.13684
Rowan Border
Rowan Border and Jonathan D. Gammell
The Surface Edge Explorer (SEE): A measurement-direct approach to next best view planning
The International Journal of Robotics Research (IJRR) 2024, Vol. 0(0) 1-27. 25 pages, 17 figures, 6 tables. Videos available at https://www.youtube.com/watch?v=dqppqRlaGEA and https://www.youtube.com/playlist?list=PLbaQBz4TuPcyNh4COoaCtC1ZGhpbEkFEo
The International Journal of Robotics Research (IJRR) 2024, Vol. 0(0) 1-27
10.1177/02783649241230098
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
High-quality observations of the real world are crucial for a variety of applications, including producing 3D printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the NBV planning problem. Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). This simplifies next best view selection but can be computationally expensive, reduces real-world fidelity, and couples the selection of a next best view with the final data processing. This paper presents the Surface Edge Explorer, a NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain similar or better surface coverage with less observation time and travel distance than evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm.
[ { "created": "Wed, 27 Jul 2022 17:54:54 GMT", "version": "v1" }, { "created": "Fri, 8 Sep 2023 08:53:49 GMT", "version": "v2" }, { "created": "Fri, 17 Nov 2023 21:41:56 GMT", "version": "v3" }, { "created": "Tue, 6 Feb 2024 14:39:02 GMT", "version": "v4" } ]
2024-02-07
[ [ "Border", "Rowan", "" ], [ "Gammell", "Jonathan D.", "" ] ]
High-quality observations of the real world are crucial for a variety of applications, including producing 3D printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the NBV planning problem. Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). This simplifies next best view selection but can be computationally expensive, reduces real-world fidelity, and couples the selection of a next best view with the final data processing. This paper presents the Surface Edge Explorer, a NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain similar or better surface coverage with less observation time and travel distance than evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm.
2305.13092
Sam Spilsbury
Sam Spilsbury, Alexander Ilin
Improved Compositional Generalization by Generating Demonstrations for Meta-Learning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.
[ { "created": "Mon, 22 May 2023 14:58:54 GMT", "version": "v1" } ]
2023-05-23
[ [ "Spilsbury", "Sam", "" ], [ "Ilin", "Alexander", "" ] ]
Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.
2307.07790
Zhizhong Huang
Zhizhong Huang, Siteng Ma, Junping Zhang, Hongming Shan
Adaptive Nonlinear Latent Transformation for Conditional Face Editing
ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works for face editing usually manipulate the latent space of StyleGAN via linear semantic directions. However, they usually suffer from the entanglement of facial attributes, need to tune the optimal editing strength, and are limited to binary attributes with strong supervision signals. This paper proposes a novel adaptive nonlinear latent transformation for disentangled and conditional face editing, termed AdaTrans. Specifically, our AdaTrans divides the manipulation process into several finer steps; i.e., the direction and size at each step are conditioned on both the facial attributes and the latent codes. In this way, AdaTrans describes an adaptive nonlinear transformation trajectory to manipulate the faces into target attributes while keeping other attributes unchanged. Then, AdaTrans leverages a predefined density model to constrain the learned trajectory in the distribution of latent codes by maximizing the likelihood of the transformed latent code. Moreover, we also propose a disentangled learning strategy under a mutual information framework to eliminate the entanglement among attributes, which can further relax the need for labeled data. Consequently, AdaTrans enables controllable face editing with the advantages of disentanglement, flexibility with non-binary attributes, and high fidelity. Extensive experimental results on various facial attributes demonstrate the qualitative and quantitative effectiveness of the proposed AdaTrans over existing state-of-the-art methods, especially in the most challenging scenarios with a large age gap and few labeled examples. The source code is available at https://github.com/Hzzone/AdaTrans.
[ { "created": "Sat, 15 Jul 2023 12:36:50 GMT", "version": "v1" } ]
2023-07-18
[ [ "Huang", "Zhizhong", "" ], [ "Ma", "Siteng", "" ], [ "Zhang", "Junping", "" ], [ "Shan", "Hongming", "" ] ]
Recent works for face editing usually manipulate the latent space of StyleGAN via linear semantic directions. However, they usually suffer from the entanglement of facial attributes, need to tune the optimal editing strength, and are limited to binary attributes with strong supervision signals. This paper proposes a novel adaptive nonlinear latent transformation for disentangled and conditional face editing, termed AdaTrans. Specifically, our AdaTrans divides the manipulation process into several finer steps; i.e., the direction and size at each step are conditioned on both the facial attributes and the latent codes. In this way, AdaTrans describes an adaptive nonlinear transformation trajectory to manipulate the faces into target attributes while keeping other attributes unchanged. Then, AdaTrans leverages a predefined density model to constrain the learned trajectory in the distribution of latent codes by maximizing the likelihood of the transformed latent code. Moreover, we also propose a disentangled learning strategy under a mutual information framework to eliminate the entanglement among attributes, which can further relax the need for labeled data. Consequently, AdaTrans enables controllable face editing with the advantages of disentanglement, flexibility with non-binary attributes, and high fidelity. Extensive experimental results on various facial attributes demonstrate the qualitative and quantitative effectiveness of the proposed AdaTrans over existing state-of-the-art methods, especially in the most challenging scenarios with a large age gap and few labeled examples. The source code is available at https://github.com/Hzzone/AdaTrans.
2211.09603
Tanmay Inamdar
Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Saket Saurabh, Meirav Zehavi
(Re)packing Equal Disks into Rectangle
Full version of ICALP 2022 paper
null
null
null
cs.CG cs.DS
http://creativecommons.org/licenses/by/4.0/
The problem of packing of equal disks (or circles) into a rectangle is a fundamental geometric problem. (By a packing here we mean an arrangement of disks in a rectangle without overlapping.) We consider the following algorithmic generalization of the equal disk packing problem. In this problem, for a given packing of equal disks into a rectangle, the question is whether by changing positions of a small number of disks, we can allocate space for packing more disks. More formally, in the repacking problem, for a given set of $n$ equal disks packed into a rectangle and integers $k$ and $h$, we ask whether it is possible by changing positions of at most $h$ disks to pack $n+k$ disks. Thus the problem of packing equal disks is the special case of our problem with $n=h=0$. While the computational complexity of packing equal disks into a rectangle remains open, we prove that the repacking problem is NP-hard already for $h=0$. Our main algorithmic contribution is an algorithm that solves the repacking problem in time $(h+k)^{O(h+k)}\cdot |I|^{O(1)}$, where $I$ is the input size. That is, the problem is fixed-parameter tractable parameterized by $k$ and $h$.
[ { "created": "Thu, 17 Nov 2022 15:48:12 GMT", "version": "v1" } ]
2022-11-18
[ [ "Fomin", "Fedor V.", "" ], [ "Golovach", "Petr A.", "" ], [ "Inamdar", "Tanmay", "" ], [ "Saurabh", "Saket", "" ], [ "Zehavi", "Meirav", "" ] ]
The problem of packing of equal disks (or circles) into a rectangle is a fundamental geometric problem. (By a packing here we mean an arrangement of disks in a rectangle without overlapping.) We consider the following algorithmic generalization of the equal disk packing problem. In this problem, for a given packing of equal disks into a rectangle, the question is whether by changing positions of a small number of disks, we can allocate space for packing more disks. More formally, in the repacking problem, for a given set of $n$ equal disks packed into a rectangle and integers $k$ and $h$, we ask whether it is possible by changing positions of at most $h$ disks to pack $n+k$ disks. Thus the problem of packing equal disks is the special case of our problem with $n=h=0$. While the computational complexity of packing equal disks into a rectangle remains open, we prove that the repacking problem is NP-hard already for $h=0$. Our main algorithmic contribution is an algorithm that solves the repacking problem in time $(h+k)^{O(h+k)}\cdot |I|^{O(1)}$, where $I$ is the input size. That is, the problem is fixed-parameter tractable parameterized by $k$ and $h$.
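The feasibility condition underlying any (re)packing instance, that equal disks lie inside the rectangle and are pairwise non-overlapping, is straightforward to check. The sketch below is an O(n^2) validity test with an illustrative floating-point tolerance; it is not the paper's FPT algorithm, only the predicate its solutions must satisfy.

```python
import math

def is_valid_packing(centers, r, width, height):
    """Check that equal disks of radius r with the given centers form a
    packing of the width x height rectangle: every disk lies inside the
    rectangle and no two disks overlap (touching is allowed)."""
    # Containment: each center at least r from every rectangle side.
    for (x, y) in centers:
        if not (r <= x <= width - r and r <= y <= height - r):
            return False
    # Non-overlap: pairwise center distance at least 2r (touching ok).
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if math.hypot(x1 - x2, y1 - y2) < 2 * r - 1e-9:
                return False
    return True
```

In the repacking problem, a candidate solution must pass this check for n + k disks while moving at most h of the original centers.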
1302.0533
Rodrigo de Lamare
L. Wang and R. C. de Lamare
Low-Complexity Reduced-Rank Beamforming Algorithms
7 figures
IEEE Transactions on Aerospace and Electronic Systems, 2012
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A reduced-rank framework with set-membership filtering (SMF) techniques is presented for adaptive beamforming problems encountered in radar systems. We develop and analyze stochastic gradient (SG) and recursive least squares (RLS)-type adaptive algorithms, which achieve an enhanced convergence and tracking performance with low computational cost as compared to existing techniques. Simulations show that the proposed algorithms have a superior performance to prior methods, while the complexity is lower.
[ { "created": "Sun, 3 Feb 2013 21:09:29 GMT", "version": "v1" } ]
2013-02-05
[ [ "Wang", "L.", "" ], [ "de Lamare", "R. C.", "" ] ]
A reduced-rank framework with set-membership filtering (SMF) techniques is presented for adaptive beamforming problems encountered in radar systems. We develop and analyze stochastic gradient (SG) and recursive least squares (RLS)-type adaptive algorithms, which achieve an enhanced convergence and tracking performance with low computational cost as compared to existing techniques. Simulations show that the proposed algorithms have a superior performance to prior methods, while the complexity is lower.
2212.03016
Maximilian V\"otsch
Ashish Chiplunkar, Monika Henzinger, Sagar Sudhir Kale, Maximilian V\"otsch
Online Min-Max Paging
25 pages, 1 figure, to appear in SODA 2023
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Motivated by fairness requirements in communication networks, we introduce a natural variant of the online paging problem, called \textit{min-max} paging, where the objective is to minimize the maximum number of faults on any page. While the classical paging problem, whose objective is to minimize the total number of faults, admits $k$-competitive deterministic and $O(\log k)$-competitive randomized algorithms, we show that min-max paging does not admit a $c(k)$-competitive algorithm for any function $c$. Specifically, we prove that the randomized competitive ratio of min-max paging is $\Omega(\log(n))$ and its deterministic competitive ratio is $\Omega(k\log(n)/\log(k))$, where $n$ is the total number of pages ever requested. We design a fractional algorithm for paging with a more general objective -- minimize the value of an $n$-variate differentiable convex function applied to the vector of the number of faults on each page. This gives an $O(\log(n)\log(k))$-competitive fractional algorithm for min-max paging. We show how to round such a fractional algorithm with at most a $k$ factor loss in the competitive ratio, resulting in a deterministic $O(k\log(n)\log(k))$-competitive algorithm for min-max paging. This matches our lower bound modulo a $\mathrm{poly}(\log(k))$ factor. We also give a randomized rounding algorithm that results in a $O(\log^2 n \log k)$-competitive algorithm.
[ { "created": "Tue, 6 Dec 2022 14:43:17 GMT", "version": "v1" } ]
2022-12-07
[ [ "Chiplunkar", "Ashish", "" ], [ "Henzinger", "Monika", "" ], [ "Kale", "Sagar Sudhir", "" ], [ "Vötsch", "Maximilian", "" ] ]
Motivated by fairness requirements in communication networks, we introduce a natural variant of the online paging problem, called \textit{min-max} paging, where the objective is to minimize the maximum number of faults on any page. While the classical paging problem, whose objective is to minimize the total number of faults, admits $k$-competitive deterministic and $O(\log k)$-competitive randomized algorithms, we show that min-max paging does not admit a $c(k)$-competitive algorithm for any function $c$. Specifically, we prove that the randomized competitive ratio of min-max paging is $\Omega(\log(n))$ and its deterministic competitive ratio is $\Omega(k\log(n)/\log(k))$, where $n$ is the total number of pages ever requested. We design a fractional algorithm for paging with a more general objective -- minimize the value of an $n$-variate differentiable convex function applied to the vector of the number of faults on each page. This gives an $O(\log(n)\log(k))$-competitive fractional algorithm for min-max paging. We show how to round such a fractional algorithm with at most a $k$ factor loss in the competitive ratio, resulting in a deterministic $O(k\log(n)\log(k))$-competitive algorithm for min-max paging. This matches our lower bound modulo a $\mathrm{poly}(\log(k))$ factor. We also give a randomized rounding algorithm that results in a $O(\log^2 n \log k)$-competitive algorithm.
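The min-max objective can be made concrete by scoring an eviction policy per page rather than in total. The sketch below simulates LRU (a stand-in policy for illustration, not the paper's algorithm) with cache size k and reports both the classical total-faults objective and the min-max objective, the maximum number of faults on any single page.

```python
from collections import Counter, OrderedDict

def lru_per_page_faults(requests, k):
    """Simulate LRU with cache size k; return a Counter of faults per page.
    Classical paging minimizes sum(faults.values()); min-max paging
    minimizes max(faults.values()) instead."""
    cache = OrderedDict()   # insertion/access order tracks recency
    faults = Counter()
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # mark p most recently used
        else:
            faults[p] += 1                 # page fault on p
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = None
    return faults

reqs = [1, 2, 3, 1, 2, 3, 1, 2, 3]   # round-robin over k+1 pages
f = lru_per_page_faults(reqs, k=2)
total, worst = sum(f.values()), max(f.values())
```

On this round-robin sequence over k+1 pages, LRU faults on every request, so both objectives are as bad as possible; min-max paging asks for schedules that keep the per-page maximum small even on such adversarial inputs.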
1410.4950
Shota Nakagawa
Shota Nakagawa and Ichiro Hasuo
Near-Optimal Scheduling for LTL with Future Discounting
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the search problem for optimal schedulers for the linear temporal logic (LTL) with future discounting. The logic, introduced by Almagor, Boker and Kupferman, is a quantitative variant of LTL in which an event in the far future has only discounted contribution to a truth value (that is a real number in the unit interval [0, 1]). The precise problem we study---it naturally arises e.g. in search for a scheduler that recovers from an internal error state as soon as possible---is the following: given a Kripke frame, a formula and a number in [0, 1] called a margin, find a path of the Kripke frame that is optimal with respect to the formula up to the prescribed margin (a truly optimal path may not exist). We present an algorithm for the problem; it works even in the extended setting with propositional quality operators, a setting where (threshold) model-checking is known to be undecidable.
[ { "created": "Sat, 18 Oct 2014 12:12:05 GMT", "version": "v1" }, { "created": "Thu, 20 Nov 2014 04:39:27 GMT", "version": "v2" }, { "created": "Wed, 4 Nov 2015 04:44:20 GMT", "version": "v3" }, { "created": "Sun, 8 Nov 2015 04:28:53 GMT", "version": "v4" } ]
2015-11-10
[ [ "Nakagawa", "Shota", "" ], [ "Hasuo", "Ichiro", "" ] ]
We study the search problem for optimal schedulers for the linear temporal logic (LTL) with future discounting. The logic, introduced by Almagor, Boker and Kupferman, is a quantitative variant of LTL in which an event in the far future has only discounted contribution to a truth value (that is a real number in the unit interval [0, 1]). The precise problem we study---it naturally arises e.g. in search for a scheduler that recovers from an internal error state as soon as possible---is the following: given a Kripke frame, a formula and a number in [0, 1] called a margin, find a path of the Kripke frame that is optimal with respect to the formula up to the prescribed margin (a truly optimal path may not exist). We present an algorithm for the problem; it works even in the extended setting with propositional quality operators, a setting where (threshold) model-checking is known to be undecidable.
2308.12748
Xiaolin Chang
Lina Liu, Jing Bai, Xiaolin Chang, Fumio Machida, Kishor S. Trivedi, Haoran Zhu
Towards Semi-Markov Model-based Dependability Evaluation of VM-based Multi-Domain Service Function Chain
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In NFV networks, service functions (SFs) can be deployed on virtual machines (VMs) across multiple domains and then form a service function chain (MSFC) for end-to-end network service provision. However, any software component in a VM-based MSFC will experience software aging issues after a long period of operation. This paper quantitatively investigates the capability of proactive rejuvenation techniques in reducing the damage of software aging on a VM-based MSFC. We develop a semi-Markov model to capture the behaviors of SFs, VMs and virtual machine monitors (VMMs) from software aging to recovery under the condition that failure times and recovery times follow general distributions. We derive the formulas for calculating the steady-state availability and reliability of the VM-based MSFC composed of multiple SFs running on VMs hosted by VMMs. Sensitivity analysis is also conducted to identify potential dependability bottlenecks.
[ { "created": "Thu, 24 Aug 2023 12:51:45 GMT", "version": "v1" } ]
2023-08-25
[ [ "Liu", "Lina", "" ], [ "Bai", "Jing", "" ], [ "Chang", "Xiaolin", "" ], [ "Machida", "Fumio", "" ], [ "Trivedi", "Kishor S.", "" ], [ "Zhu", "Haoran", "" ] ]
In NFV networks, service functions (SFs) can be deployed on virtual machines (VMs) across multiple domains and then form a service function chain (MSFC) for end-to-end network service provision. However, any software component in a VM-based MSFC will experience software aging issues after a long period of operation. This paper quantitatively investigates the capability of proactive rejuvenation techniques in reducing the damage of software aging on a VM-based MSFC. We develop a semi-Markov model to capture the behaviors of SFs, VMs and virtual machine monitors (VMMs) from software aging to recovery under the condition that failure times and recovery times follow general distributions. We derive the formulas for calculating the steady-state availability and reliability of the VM-based MSFC composed of multiple SFs running on VMs hosted by VMMs. Sensitivity analysis is also conducted to identify potential dependability bottlenecks.
2404.11433
Dirk Sudholt
Andre Opris, Duc-Cuong Dang, Frank Neumann, Dirk Sudholt
Runtime Analyses of NSGA-III on Many-Objective Problems
To appear at GECCO 2024
null
null
null
cs.NE
http://creativecommons.org/licenses/by-sa/4.0/
NSGA-II and NSGA-III are two of the most popular evolutionary multi-objective algorithms used in practice. While NSGA-II is used for few objectives such as 2 and 3, NSGA-III is designed to deal with a larger number of objectives. In a recent breakthrough, Wietheger and Doerr (IJCAI 2023) gave the first runtime analysis for NSGA-III on the 3-objective OneMinMax problem, showing that this state-of-the-art algorithm can be analyzed rigorously. We advance this new line of research by presenting the first runtime analyses of NSGA-III on the popular many-objective benchmark problems mLOTZ, mOMM, and mCOCZ, for an arbitrary constant number $m$ of objectives. Our analysis provides ways to set the important parameters of the algorithm: the number of reference points and the population size, so that a good performance can be guaranteed. We show how these parameters should be scaled with the problem dimension, the number of objectives and the fitness range. To our knowledge, these are the first runtime analyses for NSGA-III for more than 3 objectives.
[ { "created": "Wed, 17 Apr 2024 14:39:14 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2024 08:09:35 GMT", "version": "v2" } ]
2024-04-19
[ [ "Opris", "Andre", "" ], [ "Dang", "Duc-Cuong", "" ], [ "Neumann", "Frank", "" ], [ "Sudholt", "Dirk", "" ] ]
NSGA-II and NSGA-III are two of the most popular evolutionary multi-objective algorithms used in practice. While NSGA-II is used for few objectives such as 2 and 3, NSGA-III is designed to deal with a larger number of objectives. In a recent breakthrough, Wietheger and Doerr (IJCAI 2023) gave the first runtime analysis for NSGA-III on the 3-objective OneMinMax problem, showing that this state-of-the-art algorithm can be analyzed rigorously. We advance this new line of research by presenting the first runtime analyses of NSGA-III on the popular many-objective benchmark problems mLOTZ, mOMM, and mCOCZ, for an arbitrary constant number $m$ of objectives. Our analysis provides ways to set the important parameters of the algorithm: the number of reference points and the population size, so that a good performance can be guaranteed. We show how these parameters should be scaled with the problem dimension, the number of objectives and the fitness range. To our knowledge, these are the first runtime analyses for NSGA-III for more than 3 objectives.
2405.17127
Jiangpeng Hu
Jiangpeng Hu, Fan Yang, Fang Nan, and Marco Hutter
Motion Primitives Planning For Center-Articulated Vehicles
8 pages, 9 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous navigation across unstructured terrains, including forests and construction areas, faces unique challenges due to intricate obstacles and the element of the unknown. Lacking pre-existing maps, these scenarios necessitate a motion planning approach that combines agility with efficiency. Critically, it must also incorporate the robot's kinematic constraints to navigate more effectively through complex environments. This work introduces a novel planning method for center-articulated vehicles (CAVs), leveraging motion primitives within a receding horizon planning framework using onboard sensing. The approach commences with the offline creation of motion primitives, generated through forward simulations that reflect the distinct kinematic model of center-articulated vehicles. These primitives undergo evaluation through a heuristic-based scoring function, facilitating the selection of the most suitable path for real-time navigation. To augment this planning process, we develop a pose-stabilizing controller, tailored to the kinematic specifications of center-articulated vehicles. During experiments, our method demonstrates a $67\%$ improvement in SPL (Success Rate weighted by Path Length) performance over existing strategies. Furthermore, its efficacy was validated through real-world experiments conducted with a tree harvester vehicle, SAHA.
[ { "created": "Mon, 27 May 2024 12:45:37 GMT", "version": "v1" } ]
2024-05-28
[ [ "Hu", "Jiangpeng", "" ], [ "Yang", "Fan", "" ], [ "Nan", "Fang", "" ], [ "Hutter", "Marco", "" ] ]
Autonomous navigation across unstructured terrains, including forests and construction areas, faces unique challenges due to intricate obstacles and the element of the unknown. Lacking pre-existing maps, these scenarios necessitate a motion planning approach that combines agility with efficiency. Critically, it must also incorporate the robot's kinematic constraints to navigate more effectively through complex environments. This work introduces a novel planning method for center-articulated vehicles (CAVs), leveraging motion primitives within a receding horizon planning framework using onboard sensing. The approach commences with the offline creation of motion primitives, generated through forward simulations that reflect the distinct kinematic model of center-articulated vehicles. These primitives undergo evaluation through a heuristic-based scoring function, facilitating the selection of the most suitable path for real-time navigation. To augment this planning process, we develop a pose-stabilizing controller, tailored to the kinematic specifications of center-articulated vehicles. During experiments, our method demonstrates a $67\%$ improvement in SPL (Success Rate weighted by Path Length) performance over existing strategies. Furthermore, its efficacy was validated through real-world experiments conducted with a tree harvester vehicle, SAHA.
2307.06046
Jincheng Zhou
Jincheng Zhou, Beatrice Bevilacqua, Bruno Ribeiro
A Multi-Task Perspective for Link Prediction with New Relation Types and Nodes
Accepted to NeurIPS GLFrontiers 2023. 24 pages, 3 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs. Traditional relational learning methods face the challenge of limited generalization to test multigraphs containing both novel nodes and novel relation types not seen in training. Recently, under the only assumption that all relation types share the same structural predictive patterns (single task), Gao et al. (2023) proposed a link prediction method using the theoretical concept of double equivariance (equivariance for nodes & relation types), in contrast to the (single) equivariance (only for nodes) used to design Graph Neural Networks (GNNs). In this work we further extend the double equivariance concept to multi-task double equivariance, where we define link prediction in attributed multigraphs that can have distinct and potentially conflicting predictive patterns for different sets of relation types (multiple tasks). Our empirical results on real-world datasets demonstrate that our approach can effectively generalize to test graphs with multi-task structures without access to additional information.
[ { "created": "Wed, 12 Jul 2023 09:49:15 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2023 22:16:14 GMT", "version": "v2" } ]
2023-12-06
[ [ "Zhou", "Jincheng", "" ], [ "Bevilacqua", "Beatrice", "" ], [ "Ribeiro", "Bruno", "" ] ]
The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs. Traditional relational learning methods face the challenge of limited generalization to test multigraphs containing both novel nodes and novel relation types not seen in training. Recently, under the only assumption that all relation types share the same structural predictive patterns (single task), Gao et al. (2023) proposed a link prediction method using the theoretical concept of double equivariance (equivariance for nodes & relation types), in contrast to the (single) equivariance (only for nodes) used to design Graph Neural Networks (GNNs). In this work we further extend the double equivariance concept to multi-task double equivariance, where we define link prediction in attributed multigraphs that can have distinct and potentially conflicting predictive patterns for different sets of relation types (multiple tasks). Our empirical results on real-world datasets demonstrate that our approach can effectively generalize to test graphs with multi-task structures without access to additional information.
2406.14120
Fares Bougourzi
Mohamed Fadhlallah Guerri, Cosimo Distante, Paolo Spagnolo, Fares Bougourzi, Abdelmalik Taleb-Ahmed
Boosting Hyperspectral Image Classification with Gate-Shift-Fuse Mechanisms in a Novel CNN-Transformer Approach
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
During the process of classifying Hyperspectral Image (HSI), every pixel sample is categorized under a land-cover type. CNN-based techniques for HSI classification have notably advanced the field by their adept feature representation capabilities. However, acquiring deep features remains a challenge for these CNN-based methods. In contrast, transformer models are adept at extracting high-level semantic features, offering a complementary strength. This paper's main contribution is the introduction of an HSI classification model that includes two convolutional blocks, a Gate-Shift-Fuse (GSF) block and a transformer block. This model leverages the strengths of CNNs in local feature extraction and transformers in long-range context modelling. The GSF block is designed to strengthen the extraction of local and global spatial-spectral features. An effective attention mechanism module is also proposed to enhance the extraction of information from HSI cubes. The proposed method is evaluated on four well-known datasets (the Indian Pines, Pavia University, WHU-Hi-LongKou and WHU-Hi-HanChuan), demonstrating that the proposed framework achieves superior results compared to other models.
[ { "created": "Thu, 20 Jun 2024 09:05:50 GMT", "version": "v1" } ]
2024-06-21
[ [ "Guerri", "Mohamed Fadhlallah", "" ], [ "Distante", "Cosimo", "" ], [ "Spagnolo", "Paolo", "" ], [ "Bougourzi", "Fares", "" ], [ "Taleb-Ahmed", "Abdelmalik", "" ] ]
During the process of classifying Hyperspectral Image (HSI), every pixel sample is categorized under a land-cover type. CNN-based techniques for HSI classification have notably advanced the field by their adept feature representation capabilities. However, acquiring deep features remains a challenge for these CNN-based methods. In contrast, transformer models are adept at extracting high-level semantic features, offering a complementary strength. This paper's main contribution is the introduction of an HSI classification model that includes two convolutional blocks, a Gate-Shift-Fuse (GSF) block and a transformer block. This model leverages the strengths of CNNs in local feature extraction and transformers in long-range context modelling. The GSF block is designed to strengthen the extraction of local and global spatial-spectral features. An effective attention mechanism module is also proposed to enhance the extraction of information from HSI cubes. The proposed method is evaluated on four well-known datasets (the Indian Pines, Pavia University, WHU-Hi-LongKou and WHU-Hi-HanChuan), demonstrating that the proposed framework achieves superior results compared to other models.
2311.11177
Amjed Tahir
Vahid Majdinasab and Michael Joshua Bishop and Shawn Rasheed and Arghavan Moradidakhel and Amjed Tahir and Foutse Khomh
Assessing the Security of GitHub Copilot Generated Code -- A Targeted Replication Study
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
AI-powered code generation models have been developing rapidly, allowing developers to expedite code generation and thus improve their productivity. These models are trained on large corpora of code (primarily sourced from public repositories), which may contain bugs and vulnerabilities. Several concerns have been raised about the security of the code generated by these models. Recent studies have investigated security issues in AI-powered code generation tools such as GitHub Copilot and Amazon CodeWhisperer, revealing several security weaknesses in the code generated by these tools. As these tools evolve, it is expected that they will improve their security protocols to prevent the suggestion of insecure code to developers. This paper replicates the study of Pearce et al., which investigated security weaknesses in Copilot and uncovered several weaknesses in the code suggested by Copilot across diverse scenarios and languages (Python, C and Verilog). Our replication examines Copilot security weaknesses using newer versions of Copilot and CodeQL (the security analysis framework). The replication focused on the presence of security vulnerabilities in Python code. Our results indicate that, even with the improvements in newer versions of Copilot, the percentage of vulnerable code suggestions has decreased from 36.54% to 27.25%. Nonetheless, it remains evident that the model still suggests insecure code.
[ { "created": "Sat, 18 Nov 2023 22:12:59 GMT", "version": "v1" } ]
2023-11-21
[ [ "Majdinasab", "Vahid", "" ], [ "Bishop", "Michael Joshua", "" ], [ "Rasheed", "Shawn", "" ], [ "Moradidakhel", "Arghavan", "" ], [ "Tahir", "Amjed", "" ], [ "Khomh", "Foutse", "" ] ]
AI-powered code generation models have been developing rapidly, allowing developers to expedite code generation and thus improve their productivity. These models are trained on large corpora of code (primarily sourced from public repositories), which may contain bugs and vulnerabilities. Several concerns have been raised about the security of the code generated by these models. Recent studies have investigated security issues in AI-powered code generation tools such as GitHub Copilot and Amazon CodeWhisperer, revealing several security weaknesses in the code generated by these tools. As these tools evolve, it is expected that they will improve their security protocols to prevent the suggestion of insecure code to developers. This paper replicates the study of Pearce et al., which investigated security weaknesses in Copilot and uncovered several weaknesses in the code suggested by Copilot across diverse scenarios and languages (Python, C and Verilog). Our replication examines Copilot security weaknesses using newer versions of Copilot and CodeQL (the security analysis framework). The replication focused on the presence of security vulnerabilities in Python code. Our results indicate that, even with the improvements in newer versions of Copilot, the percentage of vulnerable code suggestions has decreased from 36.54% to 27.25%. Nonetheless, it remains evident that the model still suggests insecure code.
2305.19203
Eytan Singher
Eytan Singher and Shachar Itzhaky
Colored E-Graph: Equality Reasoning with Conditions
null
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
E-graphs are a prominent data structure that has been increasing in popularity in recent years due to their expanding range of applications in various formal reasoning tasks. Often, they are used for equality saturation, a process of deriving consequences through repeatedly applying universally quantified equality formulas via term rewriting. They handle equality reasoning over large spaces of terms, but are severely limited in their handling of case splitting and other types of logical cuts, especially when compared to other reasoning techniques such as sequent calculi and resolution. The main difficulty is when equality reasoning requires multiple inconsistent assumptions to reach a single conclusion. Ad-hoc solutions, such as duplicating the e-graph for each assumption, are available, but they are notably resource-intensive. A key observation is that each duplicate e-graph (with an added assumption) corresponds to a coarsened congruence relation. Based on that, we present an extension to e-graphs, called Colored E-Graphs, as a way to represent all of the coarsened congruence relations in a single structure. A colored e-graph is a memory-efficient equivalent of multiple copies of an e-graph, with a much lower overhead. This is attained by sharing as much as possible between different cases, while carefully tracking which conclusion is true under which assumption. Support for multiple relations can be thought of as adding multiple "color-coded" layers on top of the original e-graph structure, leading to a large degree of sharing. In our implementation, we introduce optimizations to rebuilding and e-matching. We run experiments and demonstrate that our colored e-graphs can support hundreds of assumptions and millions of terms with space requirements that are an order of magnitude lower, and with similar time requirements.
[ { "created": "Tue, 30 May 2023 16:49:10 GMT", "version": "v1" } ]
2023-05-31
[ [ "Singher", "Eytan", "" ], [ "Itzhaky", "Shachar", "" ] ]
E-graphs are a prominent data structure that has been increasing in popularity in recent years due to their expanding range of applications in various formal reasoning tasks. Often, they are used for equality saturation, a process of deriving consequences through repeatedly applying universally quantified equality formulas via term rewriting. They handle equality reasoning over large spaces of terms, but are severely limited in their handling of case splitting and other types of logical cuts, especially when compared to other reasoning techniques such as sequent calculi and resolution. The main difficulty is when equality reasoning requires multiple inconsistent assumptions to reach a single conclusion. Ad-hoc solutions, such as duplicating the e-graph for each assumption, are available, but they are notably resource-intensive. A key observation is that each duplicate e-graph (with an added assumption) corresponds to a coarsened congruence relation. Based on that, we present an extension to e-graphs, called Colored E-Graphs, as a way to represent all of the coarsened congruence relations in a single structure. A colored e-graph is a memory-efficient equivalent of multiple copies of an e-graph, with a much lower overhead. This is attained by sharing as much as possible between different cases, while carefully tracking which conclusion is true under which assumption. Support for multiple relations can be thought of as adding multiple "color-coded" layers on top of the original e-graph structure, leading to a large degree of sharing. In our implementation, we introduce optimizations to rebuilding and e-matching. We run experiments and demonstrate that our colored e-graphs can support hundreds of assumptions and millions of terms with space requirements that are an order of magnitude lower, and with similar time requirements.
1902.10223
Zhu Wang
Zhu Wang, Anat Lubetzky, Marta Gospodarek, Makan TaghaviDilamani, Ken Perlin
Virtual Environments for Rehabilitation of Postural Control Dysfunction
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed a novel virtual reality (VR) platform with 3-dimensional sounds to help improve sensory integration and visuomotor processing for postural control and fall prevention in individuals with balance problems related to sensory deficits, such as vestibular dysfunction (disease of the inner ear). The system has scenes that simulate scenario-based environments. We can adjust the intensity of the visual and audio stimuli in the virtual scenes by controlling the user interface (UI) settings. A VR headset (HTC Vive or Oculus Rift) delivers stereo display while providing real-time position and orientation of the participants' head. The 3D game-like scenes make participants feel immersed and gradually expose them to situations that may induce dizziness, anxiety or imbalance in their daily living.
[ { "created": "Fri, 8 Feb 2019 20:16:42 GMT", "version": "v1" } ]
2019-02-28
[ [ "Wang", "Zhu", "" ], [ "Lubetzky", "Anat", "" ], [ "Gospodarek", "Marta", "" ], [ "TaghaviDilamani", "Makan", "" ], [ "Perlin", "Ken", "" ] ]
We developed a novel virtual reality (VR) platform with 3-dimensional sounds to help improve sensory integration and visuomotor processing for postural control and fall prevention in individuals with balance problems related to sensory deficits, such as vestibular dysfunction (disease of the inner ear). The system has scenes that simulate scenario-based environments. We can adjust the intensity of the visual and audio stimuli in the virtual scenes by controlling the user interface (UI) settings. A VR headset (HTC Vive or Oculus Rift) delivers stereo display while providing real-time position and orientation of the participants' head. The 3D game-like scenes make participants feel immersed and gradually expose them to situations that may induce dizziness, anxiety or imbalance in their daily living.
2307.12442
Amirhossein Aminimehr
Amirhossein Aminimehr, Amirali Molaei, Erik Cambria
EnTri: Ensemble Learning with Tri-level Representations for Explainable Scene Recognition
null
null
10.2139/ssrn.4482110
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene recognition based on deep learning has made significant progress, but there are still limitations in its performance due to challenges posed by inter-class similarities and intra-class dissimilarities. Furthermore, prior research has primarily focused on improving classification accuracy, yet it has given less attention to achieving interpretable, precise scene classification. Therefore, we are motivated to propose EnTri, an ensemble scene recognition framework that employs ensemble learning using a hierarchy of visual features. EnTri represents features at three distinct levels of detail: pixel level, semantic segmentation level, and object class and frequency level. By incorporating distinct feature encoding schemes of differing complexity and leveraging ensemble strategies, our approach aims to improve classification accuracy while enhancing transparency and interpretability via visual and textual explanations. To achieve interpretability, we devised an extension algorithm that generates both visual and textual explanations highlighting various properties of a given scene that contribute to the final prediction of its category. This includes information about objects, statistics, spatial layout, and textural details. Through experiments on benchmark scene classification datasets, EnTri has demonstrated superiority in terms of recognition accuracy, achieving competitive performance compared to state-of-the-art approaches, with an accuracy of 87.69%, 75.56%, and 99.17% on the MIT67, SUN397, and UIUC8 datasets, respectively.
[ { "created": "Sun, 23 Jul 2023 22:11:23 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2024 12:06:20 GMT", "version": "v2" } ]
2024-07-16
[ [ "Aminimehr", "Amirhossein", "" ], [ "Molaei", "Amirali", "" ], [ "Cambria", "Erik", "" ] ]
Scene recognition based on deep learning has made significant progress, but there are still limitations in its performance due to challenges posed by inter-class similarities and intra-class dissimilarities. Furthermore, prior research has primarily focused on improving classification accuracy, yet it has given less attention to achieving interpretable, precise scene classification. Therefore, we are motivated to propose EnTri, an ensemble scene recognition framework that employs ensemble learning using a hierarchy of visual features. EnTri represents features at three distinct levels of detail: pixel level, semantic segmentation level, and object class and frequency level. By incorporating distinct feature encoding schemes of differing complexity and leveraging ensemble strategies, our approach aims to improve classification accuracy while enhancing transparency and interpretability via visual and textual explanations. To achieve interpretability, we devised an extension algorithm that generates both visual and textual explanations highlighting various properties of a given scene that contribute to the final prediction of its category. This includes information about objects, statistics, spatial layout, and textural details. Through experiments on benchmark scene classification datasets, EnTri has demonstrated superiority in terms of recognition accuracy, achieving competitive performance compared to state-of-the-art approaches, with an accuracy of 87.69%, 75.56%, and 99.17% on the MIT67, SUN397, and UIUC8 datasets, respectively.
2001.05497
Max Hopkins
Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
Noise-tolerant, Reliable Active Classification with Comparison Queries
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the explosion of massive, widely available unlabeled data in the past years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice. We study the paradigm of active learning, in which algorithms with access to large pools of data may adaptively choose what samples to label in the hope of exponentially increasing efficiency. By introducing comparisons, an additional type of query comparing two points, we provide the first time- and query-efficient algorithms for learning non-homogeneous linear separators robust to bounded (Massart) noise. We further provide algorithms for a generalization of the popular Tsybakov low noise condition, and show how comparisons provide a strong reliability guarantee that is often impractical or impossible with only labels: returning a classifier that makes no errors with high probability.
[ { "created": "Wed, 15 Jan 2020 19:00:00 GMT", "version": "v1" } ]
2020-01-17
[ [ "Hopkins", "Max", "" ], [ "Kane", "Daniel", "" ], [ "Lovett", "Shachar", "" ], [ "Mahajan", "Gaurav", "" ] ]
With the explosion of massive, widely available unlabeled data in the past years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice. We study the paradigm of active learning, in which algorithms with access to large pools of data may adaptively choose what samples to label in the hope of exponentially increasing efficiency. By introducing comparisons, an additional type of query comparing two points, we provide the first time- and query-efficient algorithms for learning non-homogeneous linear separators robust to bounded (Massart) noise. We further provide algorithms for a generalization of the popular Tsybakov low noise condition, and show how comparisons provide a strong reliability guarantee that is often impractical or impossible with only labels: returning a classifier that makes no errors with high probability.
2012.01141
Mustafa Hajij
Mustafa Hajij, Ghada Zamzmi, Matthew Dawson, Greg Muller
Algebraically-Informed Deep Networks (AIDN): A Deep Learning Approach to Represent Algebraic Structures
null
null
null
null
cs.LG math.AT math.GR math.GT math.RT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the central problems at the interface of deep learning and mathematics is that of building learning systems that can automatically uncover underlying mathematical laws from observed data. In this work, we make one step towards building a bridge between algebraic structures and deep learning, and introduce \textbf{AIDN}, \textit{Algebraically-Informed Deep Networks}. \textbf{AIDN} is a deep learning algorithm to represent any finitely-presented algebraic object with a set of deep neural networks. The deep networks obtained via \textbf{AIDN} are \textit{algebraically-informed} in the sense that they satisfy the algebraic relations of the presentation of the algebraic structure that serves as the input to the algorithm. Our proposed network can robustly compute linear and non-linear representations of most finitely-presented algebraic structures such as groups, associative algebras, and Lie algebras. We evaluate our proposed approach and demonstrate its applicability to algebraic and geometric objects that are significant in low-dimensional topology. In particular, we study solutions for the Yang-Baxter equations and their applications on braid groups. Further, we study the representations of the Temperley-Lieb algebra. Finally, we show, using the Reshetikhin-Turaev construction, how our proposed deep learning approach can be utilized to construct new link invariants. We believe the proposed approach would tread a path toward promising future research in deep learning applied to algebraic and geometric structures.
[ { "created": "Wed, 2 Dec 2020 12:43:39 GMT", "version": "v1" }, { "created": "Sat, 5 Dec 2020 02:44:07 GMT", "version": "v2" }, { "created": "Fri, 12 Feb 2021 07:06:52 GMT", "version": "v3" } ]
2021-02-15
[ [ "Hajij", "Mustafa", "" ], [ "Zamzmi", "Ghada", "" ], [ "Dawson", "Matthew", "" ], [ "Muller", "Greg", "" ] ]
One of the central problems at the interface of deep learning and mathematics is that of building learning systems that can automatically uncover underlying mathematical laws from observed data. In this work, we make one step towards building a bridge between algebraic structures and deep learning, and introduce \textbf{AIDN}, \textit{Algebraically-Informed Deep Networks}. \textbf{AIDN} is a deep learning algorithm to represent any finitely-presented algebraic object with a set of deep neural networks. The deep networks obtained via \textbf{AIDN} are \textit{algebraically-informed} in the sense that they satisfy the algebraic relations of the presentation of the algebraic structure that serves as the input to the algorithm. Our proposed network can robustly compute linear and non-linear representations of most finitely-presented algebraic structures such as groups, associative algebras, and Lie algebras. We evaluate our proposed approach and demonstrate its applicability to algebraic and geometric objects that are significant in low-dimensional topology. In particular, we study solutions for the Yang-Baxter equations and their applications on braid groups. Further, we study the representations of the Temperley-Lieb algebra. Finally, we show, using the Reshetikhin-Turaev construction, how our proposed deep learning approach can be utilized to construct new link invariants. We believe the proposed approach would tread a path toward promising future research in deep learning applied to algebraic and geometric structures.
1902.01026
Hossein K. Mousavi
Hossein K. Mousavi, Nader Motee
Estimation with Fast Landmark Selection in Robot Visual Navigation
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We consider visual feature selection to improve the estimation quality required for the accurate navigation of a robot. We build upon a key property that asserts: contributions of trackable features (landmarks) appear linearly in the information matrix of the corresponding estimation problem. We utilize standard models for the motion and vision system using a camera to formulate the feature selection problem over moving finite time horizons. A scalable randomized sampling algorithm is proposed to select more informative features (and ignore the rest) to achieve superior position estimation quality. We provide probabilistic performance guarantees for our method. The time complexity of our feature selection algorithm is linear in the number of candidate features, which is practically plausible and outperforms existing greedy methods that scale quadratically with the number of candidate features. Our numerical simulations confirm that not only is the execution time of our proposed method less than that of the greedy method, but the resulting estimation quality is also very close to that of the greedy method.
[ { "created": "Mon, 4 Feb 2019 04:07:24 GMT", "version": "v1" } ]
2019-02-05
[ [ "Mousavi", "Hossein K.", "" ], [ "Motee", "Nader", "" ] ]
We consider visual feature selection to improve the estimation quality required for the accurate navigation of a robot. We build upon a key property that asserts: contributions of trackable features (landmarks) appear linearly in the information matrix of the corresponding estimation problem. We utilize standard models for the motion and vision system using a camera to formulate the feature selection problem over moving finite time horizons. A scalable randomized sampling algorithm is proposed to select more informative features (and ignore the rest) to achieve superior position estimation quality. We provide probabilistic performance guarantees for our method. The time complexity of our feature selection algorithm is linear in the number of candidate features, which is practically plausible and outperforms existing greedy methods that scale quadratically with the number of candidate features. Our numerical simulations confirm that not only is the execution time of our proposed method less than that of the greedy method, but the resulting estimation quality is also very close to that of the greedy method.
1607.01827
Albert Fannjiang
Albert Fannjiang
Compressive Spectral Estimation with Single-Snapshot ESPRIT: Stability and Resolution
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) is developed for spectral estimation with single-snapshot measurement. Stability and resolution analysis with performance guarantees for Single-Snapshot ESPRIT (SS-ESPRIT) is the main focus. In the noise-free case, exact reconstruction is guaranteed for an arbitrary set of frequencies as long as the number of measurement data is at least twice the number of distinct frequencies to be recovered. In the presence of noise, and under the assumption that the true frequencies are separated by at least two times Rayleigh's Resolution Length, an explicit error bound for frequency reconstruction is given in terms of the dynamic range and the separation of the frequencies. The separation and sparsity constraints compare favorably with those of the leading approaches to compressed sensing in the continuum.
[ { "created": "Wed, 6 Jul 2016 22:17:31 GMT", "version": "v1" } ]
2016-07-08
[ [ "Fannjiang", "Albert", "" ] ]
In this paper, Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) is developed for spectral estimation with single-snapshot measurement. Stability and resolution analysis with performance guarantees for Single-Snapshot ESPRIT (SS-ESPRIT) is the main focus. In the noise-free case, exact reconstruction is guaranteed for an arbitrary set of frequencies as long as the number of measurement data is at least twice the number of distinct frequencies to be recovered. In the presence of noise, and under the assumption that the true frequencies are separated by at least two times Rayleigh's Resolution Length, an explicit error bound for frequency reconstruction is given in terms of the dynamic range and the separation of the frequencies. The separation and sparsity constraints compare favorably with those of the leading approaches to compressed sensing in the continuum.
1811.08565
Adam Kortylewski
Adam Kortylewski, Bernhard Egger, Andreas Morel-Forster, Andreas Schneider, Thomas Gerig, Clemens Blumer, Corius Reyneke, Thomas Vetter
Can Synthetic Faces Undo the Damage of Dataset Bias to Face Recognition and Facial Landmark Detection?
Technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known that deep learning approaches to face recognition and facial landmark detection suffer from biases in modern training datasets. In this work, we propose to use synthetic face images to reduce the negative effects of dataset biases on these tasks. Using a 3D morphable face model, we generate large amounts of synthetic face images with full control over facial shape and color, pose, illumination, and background. With a series of experiments, we extensively test the effects of priming deep nets by pre-training them with synthetic faces. We observe the following positive effects for face recognition and facial landmark detection tasks: 1) Priming with synthetic face images improves the performance consistently across all benchmarks because it reduces the negative effects of biases in the training data. 2) Traditional approaches for reducing the damage of dataset bias, such as data augmentation and transfer learning, are less effective than training with synthetic faces. 3) Using synthetic data, we can reduce the size of real-world datasets by 75% for face recognition and by 50% for facial landmark detection while maintaining performance. Thus, our approach offers a means to focus the data collection process on less but higher-quality data.
[ { "created": "Mon, 19 Nov 2018 21:17:21 GMT", "version": "v1" }, { "created": "Sun, 23 Jun 2019 00:26:34 GMT", "version": "v2" } ]
2019-06-25
[ [ "Kortylewski", "Adam", "" ], [ "Egger", "Bernhard", "" ], [ "Morel-Forster", "Andreas", "" ], [ "Schneider", "Andreas", "" ], [ "Gerig", "Thomas", "" ], [ "Blumer", "Clemens", "" ], [ "Reyneke", "Corius", "" ], [ "Vetter", "Thomas", "" ] ]
It is well known that deep learning approaches to face recognition and facial landmark detection suffer from biases in modern training datasets. In this work, we propose to use synthetic face images to reduce the negative effects of dataset biases on these tasks. Using a 3D morphable face model, we generate large amounts of synthetic face images with full control over facial shape and color, pose, illumination, and background. With a series of experiments, we extensively test the effects of priming deep nets by pre-training them with synthetic faces. We observe the following positive effects for face recognition and facial landmark detection tasks: 1) Priming with synthetic face images improves the performance consistently across all benchmarks because it reduces the negative effects of biases in the training data. 2) Traditional approaches for reducing the damage of dataset bias, such as data augmentation and transfer learning, are less effective than training with synthetic faces. 3) Using synthetic data, we can reduce the size of real-world datasets by 75% for face recognition and by 50% for facial landmark detection while maintaining performance. Thus, our approach offers a means to focus the data collection process on less but higher-quality data.
2110.13220
Tim Dockhorn
Tim Dockhorn, Yaoliang Yu, Eyy\"ub Sari, Mahdi Zolnouri, Vahid Partovi Nia
Demystifying and Generalizing BinaryConnect
NeurIPS 2021
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization. However, our understanding of the inner workings of BC is still quite limited. We attempt to close this gap in four different aspects: (a) we show that existing quantization algorithms, including post-training quantization, are surprisingly similar to each other; (b) we argue for proximal maps as a natural family of quantizers that is both easy to design and analyze; (c) we refine the observation that BC is a special case of dual averaging, which itself is a special case of the generalized conditional gradient algorithm; (d) consequently, we propose ProxConnect (PC) as a generalization of BC and we prove its convergence properties by exploiting the established connections. We conduct experiments on CIFAR-10 and ImageNet, and verify that PC achieves competitive performance.
[ { "created": "Mon, 25 Oct 2021 19:07:38 GMT", "version": "v1" } ]
2021-10-27
[ [ "Dockhorn", "Tim", "" ], [ "Yu", "Yaoliang", "" ], [ "Sari", "Eyyüb", "" ], [ "Zolnouri", "Mahdi", "" ], [ "Nia", "Vahid Partovi", "" ] ]
BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization. However, our understanding of the inner workings of BC is still quite limited. We attempt to close this gap in four different aspects: (a) we show that existing quantization algorithms, including post-training quantization, are surprisingly similar to each other; (b) we argue for proximal maps as a natural family of quantizers that is both easy to design and analyze; (c) we refine the observation that BC is a special case of dual averaging, which itself is a special case of the generalized conditional gradient algorithm; (d) consequently, we propose ProxConnect (PC) as a generalization of BC and we prove its convergence properties by exploiting the established connections. We conduct experiments on CIFAR-10 and ImageNet, and verify that PC achieves competitive performance.
1808.00449
Wei-Sheng Lai
Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, Ming-Hsuan Yang
Learning Blind Video Temporal Consistency
This work is accepted in ECCV 2018. Project website: http://vllab.ucmerced.edu/wlai24/video_consistency/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient end-to-end approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original unprocessed and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to the specific image processing algorithms applied on the original video. We train the proposed network by minimizing both short-term and long-term temporal losses as well as a perceptual loss to strike a balance between temporal stability and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition. Extensive objective evaluation and a subjective study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos.
[ { "created": "Wed, 1 Aug 2018 17:59:15 GMT", "version": "v1" } ]
2018-08-02
[ [ "Lai", "Wei-Sheng", "" ], [ "Huang", "Jia-Bin", "" ], [ "Wang", "Oliver", "" ], [ "Shechtman", "Eli", "" ], [ "Yumer", "Ersin", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient end-to-end approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original unprocessed and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to the specific image processing algorithms applied on the original video. We train the proposed network by minimizing both short-term and long-term temporal losses as well as a perceptual loss to strike a balance between temporal stability and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition. Extensive objective evaluation and a subjective study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos.
1702.08530
Richard Nock
Amir Dezfouli, Edwin V. Bonilla, Richard Nock
Semi-parametric Network Structure Discovery Models
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process (GP) prior on a network-independent component, and random sparsity and weight matrices as the network-dependent parameters. This approach provides flexible modeling of network-independent trends in the observations as well as uncertainty quantification around the discovered network structure. We establish a connection between our model and multi-task GPs and develop an efficient stochastic variational inference algorithm for it. Furthermore, we formally show that our approach is numerically stable and in fact numerically easy to carry out almost everywhere on the support of the random variables involved. Finally, we evaluate our model on three applications, showing that it outperforms previous approaches. We provide a qualitative and quantitative analysis of the structures discovered for domains such as the study of the full genome regulation of the yeast Saccharomyces cerevisiae.
[ { "created": "Mon, 27 Feb 2017 21:04:05 GMT", "version": "v1" } ]
2017-03-01
[ [ "Dezfouli", "Amir", "" ], [ "Bonilla", "Edwin V.", "" ], [ "Nock", "Richard", "" ] ]
We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process (GP) prior on a network-independent component, and random sparsity and weight matrices as the network-dependent parameters. This approach provides flexible modeling of network-independent trends in the observations as well as uncertainty quantification around the discovered network structure. We establish a connection between our model and multi-task GPs and develop an efficient stochastic variational inference algorithm for it. Furthermore, we formally show that our approach is numerically stable and in fact numerically easy to carry out almost everywhere on the support of the random variables involved. Finally, we evaluate our model on three applications, showing that it outperforms previous approaches. We provide a qualitative and quantitative analysis of the structures discovered for domains such as the study of the full genome regulation of the yeast Saccharomyces cerevisiae.