Dataset schema (column name: type, observed value range):

id: string, 9-10 chars
submitter: string, 1-64 chars
authors: string, 4-20.7k chars
title: string, 4-246 chars
comments: string, 1-523 chars
journal-ref: string, 4-404 chars
doi: string, 11-153 chars
report-no: string, 2-254 chars
categories: string, 5-98 chars
license: string, 9 distinct values
orig_abstract: string, 14-3.35k chars
versions: list, 1-60 items
update_date: string, 10 chars
authors_parsed: list, 1-1.35k items
abstract: string, 11-3.34k chars
id: 2401.02584
submitter: Xuenan Xu
authors: Xuenan Xu, Ziyang Ma, Mengyue Wu, Kai Yu
title: Towards Weakly Supervised Text-to-Audio Grounding
categories: cs.SD eess.AS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The text-to-audio grounding (TAG) task aims to predict the onsets and offsets of sound events described by natural language, which can facilitate applications such as multimodal information retrieval. This paper focuses on weakly-supervised text-to-audio grounding (WSTAG), where frame-level annotations of sound events are unavailable and only the caption of a whole audio clip can be used for training. WSTAG is superior to strongly-supervised approaches in its scalability to large audio-text datasets. Two WSTAG frameworks are studied in this paper: sentence-level and phrase-level. First, we analyze the limitations of the mean pooling used in the previous WSTAG approach and investigate the effects of different pooling strategies. We then propose phrase-level WSTAG, which uses matching labels between audio clips and phrases for training. Advanced negative sampling strategies and self-supervision are proposed to enhance the accuracy of the weak labels and to provide pseudo strong labels. Experimental results show that our system significantly outperforms the previous WSTAG state of the art. Finally, we conduct extensive experiments to analyze the effects of several factors on phrase-level WSTAG. The code and model are available at https://github.com/wsntxxn/TextToAudioGrounding.
versions: v1 (Fri, 5 Jan 2024 00:27:32 GMT), v2 (Wed, 17 Jul 2024 06:11:43 GMT)
update_date: 2024-07-18
authors_parsed: [ [ "Xu", "Xuenan", "" ], [ "Ma", "Ziyang", "" ], [ "Wu", "Mengyue", "" ], [ "Yu", "Kai", "" ] ]
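The pooling comparison in the abstract above can be illustrated with a small sketch. This is not code from the paper's repository; it shows the standard clip-level aggregation functions (mean, max, and linear-softmax pooling) that weakly supervised audio models commonly choose between when only clip-level labels are available:

```python
import numpy as np

# Clip-level aggregation of frame-level event probabilities, as used when only
# weak (clip-level) labels are available for training.

def mean_pool(p):
    # every frame contributes equally: short events get diluted
    return p.mean()

def max_pool(p):
    # only the single most confident frame matters: duration is ignored
    return p.max()

def linear_softmax_pool(p):
    # frames are weighted by their own probability: a compromise between the two
    return (p * p).sum() / p.sum()

# a short, confident event inside a 5-frame clip
frame_probs = np.array([0.05, 0.9, 0.85, 0.1, 0.05])
```

For this clip, mean pooling yields 0.39 while max pooling yields 0.9, with linear-softmax in between: the kind of gap that makes the choice of pooling strategy matter under weak supervision.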
id: 2206.02757
submitter: Yaodong Yu
authors: Yaodong Yu and Stephen Bates and Yi Ma and Michael I. Jordan
title: Robust Calibration with Multi-domain Temperature Scaling
categories: cs.LG cs.AI stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Uncertainty quantification is essential for the reliable deployment of machine learning models in high-stakes application domains. It is all the more challenging when the training and test distributions differ, even when the distribution shifts are mild. Despite the ubiquity of distribution shifts in real-world applications, existing uncertainty quantification approaches mainly study the in-distribution setting, where the training and test distributions are the same. In this paper, we develop a systematic calibration model that handles distribution shifts by leveraging data from multiple domains. Our proposed method, multi-domain temperature scaling, uses the heterogeneity of the domains to improve calibration robustness under distribution shift. Through experiments on three benchmark datasets, we find that our proposed method outperforms existing methods on both in-distribution and out-of-distribution test sets.
versions: v1 (Mon, 6 Jun 2022 17:32:12 GMT)
update_date: 2022-06-07
authors_parsed: [ [ "Yu", "Yaodong", "" ], [ "Bates", "Stephen", "" ], [ "Ma", "Yi", "" ], [ "Jordan", "Michael I.", "" ] ]
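Temperature scaling itself is simple to sketch. The following is a minimal single-domain illustration, not the paper's multi-domain method (which learns one temperature per domain and shares information across them); it fits a scalar temperature on held-out logits by grid search over the negative log-likelihood:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # negative log-likelihood of the labels under temperature-scaled logits
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.25, 4.0, 151)):
    # grid search stands in for the usual gradient-based fit
    return min(grid, key=lambda T: nll(logits, labels, T))

# synthetic "overconfident" validation logits standing in for one domain
rng = np.random.default_rng(0)
logits = 3.0 * rng.normal(size=(100, 5))
labels = rng.integers(0, 5, size=100)
T = fit_temperature(logits, labels)
```

A temperature above 1 flattens the predicted probabilities; fitting it never increases the held-out NLL relative to the uncalibrated model (T = 1 is in the grid).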
id: 1305.5786
submitter: Marius Buliga
authors: Marius Buliga
title: Graphic lambda calculus
comments: v2: Minor typos and figure corrections in section 3. v1: Massive revision of all previous descriptions of graphic lambda calculus, based on arXiv:1207.0332 and arXiv:1302.0778, with much material added and a harmonized exposition.
journal-ref: Complex Systems 22, 4 (2013), 311-360
categories: cs.LO math.GT math.LO
license: http://creativecommons.org/licenses/by/3.0/
abstract: We introduce and study graphic lambda calculus, a visual language that can be used to represent untyped lambda calculus, but also for computations in emergent algebras and for representing Reidemeister moves of locally planar tangle diagrams.
versions: v1 (Fri, 24 May 2013 16:38:25 GMT), v2 (Tue, 24 Sep 2013 10:36:59 GMT)
update_date: 2019-02-18
authors_parsed: [ [ "Buliga", "Marius", "" ] ]
id: 2009.04327
submitter: Iain Barclay
authors: Iain Barclay, Maria Freytsis, Sherri Bucher, Swapna Radha, Alun Preece and Ian Taylor
title: Towards a Modelling Framework for Self-Sovereign Identity Systems
categories: cs.SE cs.MA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Self-sovereign identity promises to give users control of their own data and has the potential to foster advances in personal data privacy. Self-sovereign concepts can also be applied to other entities, such as datasets and devices. Systems adopting this paradigm will be decentralised, with messages passing between multiple actors, both human and those representing other entities, in order to issue and request the credentials necessary to meet individual and collective goals. Such systems are complex and build upon social and technical interactions and behaviours. Modelling self-sovereign identity systems seeks to provide stakeholders and software architects with tools that enable them to communicate effectively and that lead to effective and well-regarded system designs and implementations. This paper draws upon research from actor-based modelling to guide a way forward in modelling self-sovereign systems, and reports early success in using the iStar 2.0 framework to represent a birth registration case study.
versions: v1 (Wed, 9 Sep 2020 14:32:28 GMT), v2 (Thu, 10 Sep 2020 09:12:29 GMT)
update_date: 2020-09-11
authors_parsed: [ [ "Barclay", "Iain", "" ], [ "Freytsis", "Maria", "" ], [ "Bucher", "Sherri", "" ], [ "Radha", "Swapna", "" ], [ "Preece", "Alun", "" ], [ "Taylor", "Ian", "" ] ]
id: 1809.03010
submitter: Zhongliang Yang
authors: Zhongliang Yang, Xueshun Peng, Yongfeng Huang, Chinchen Chang
title: A novel method of speech information hiding based on 3D-Magic Matrix
comments: Accepted by Journal of Internet Technology
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The redundant information in low-bit-rate speech is extremely small, so it is very difficult to implement large-capacity steganography on low-bit-rate speech. Based on the multiple vector quantization characteristics of the Line Spectrum Pair (LSP) in the speech codec, this paper proposes a steganography scheme that uses a 3D-Magic matrix to enlarge capacity and improve speech quality. A cyclic-movement algorithm for constructing a 3D-Magic matrix for steganography is proposed, together with embedding and extraction algorithms based on the 3D-Magic matrix in a low-bit-rate speech codec. Theoretical analysis demonstrates that concealment and hidden capacity are greatly improved with the proposed scheme. Experimental results show that the hidden capacity is raised to 200 bps in the ITU-T G.723.1 codec. Moreover, the Perceptual Evaluation of Speech Quality (PESQ) score of the stego speech decreases by no more than 4%, indicating little impact on speech quality. In addition, the proposed hiding scheme effectively resists detection by steganalysis tools.
versions: v1 (Sun, 9 Sep 2018 17:24:13 GMT)
update_date: 2018-09-11
authors_parsed: [ [ "Yang", "Zhongliang", "" ], [ "Peng", "Xueshun", "" ], [ "Huang", "Yongfeng", "" ], [ "Chang", "Chinchen", "" ] ]
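The paper's 3D-Magic matrix construction is specific to the LSP codebooks of the codec, but the underlying reference-matrix idea can be shown with a 2D toy: a matrix maps a pair of cover values to a base-5 digit, and embedding nudges the pair minimally until the matrix yields the secret digit. The matrix formula and all values below are illustrative, not taken from the paper:

```python
# Toy 2D reference ("magic") matrix: M[x, y] = (x + 2*y) mod 5.
# A pair of cover values encodes one base-5 digit; embedding a digit d means
# minimally perturbing (x, y) so that M[x, y] == d.

MOD = 5

def matrix_value(x, y):
    return (x + 2 * y) % MOD

def embed_digit(x, y, d):
    # search a small neighbourhood for the cheapest pair encoding d
    best = None
    for dx in range(-2, 3):
        for dy in range(-2, 3):
            nx, ny = x + dx, y + dy
            if matrix_value(nx, ny) == d % MOD:
                cost = dx * dx + dy * dy
                if best is None or cost < best[0]:
                    best = (cost, nx, ny)
    return best[1], best[2]

def extract_digit(x, y):
    # extraction needs no key material beyond the shared matrix
    return matrix_value(x, y)
```

Varying x alone already covers all five residues, so a digit can always be embedded with a distortion of at most 2 per coordinate; the paper's contribution is doing the analogous thing in three dimensions inside a codec's quantization indices.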
id: 2401.01764
submitter: Polina Kirichenko
authors: Polina Kirichenko, Mark Ibrahim, Randall Balestriero, Diane Bouchacourt, Ramakrishna Vedantam, Hamed Firooz, Andrew Gordon Wilson
title: Understanding the Detrimental Class-level Effects of Data Augmentation
journal-ref: Neural Information Processing Systems (NeurIPS), 2023
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Data augmentation (DA) encodes invariance and provides implicit regularization critical to a model's performance in image classification tasks. However, while DA improves average accuracy, recent studies have shown that its impact can be highly class dependent: achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as 20% on ImageNet. There has been little progress in resolving class-level accuracy drops due to a limited understanding of these effects. In this work, we present a framework for understanding how DA interacts with class-level learning dynamics. Using higher-quality multi-label annotations on ImageNet, we systematically categorize the affected classes and find that the majority are inherently ambiguous, co-occur, or involve fine-grained distinctions, while DA controls the model's bias towards one of the closely related classes. While many of the previously reported performance drops are explained by multi-label annotations, our analysis of class confusions reveals other sources of accuracy degradation. We show that simple class-conditional augmentation strategies informed by our framework improve performance on the negatively affected classes.
versions: v1 (Thu, 7 Dec 2023 18:37:43 GMT)
update_date: 2024-01-04
authors_parsed: [ [ "Kirichenko", "Polina", "" ], [ "Ibrahim", "Mark", "" ], [ "Balestriero", "Randall", "" ], [ "Bouchacourt", "Diane", "" ], [ "Vedantam", "Ramakrishna", "" ], [ "Firooz", "Hamed", "" ], [ "Wilson", "Andrew Gordon", "" ] ]
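A class-conditional augmentation policy of the kind the abstract describes can be sketched in a few lines. Everything here is hypothetical (the class IDs, probabilities, and transform are placeholders), but it shows the basic mechanism: augment less aggressively for classes diagnosed as negatively affected:

```python
import random

# Placeholder set of "negatively affected" classes and a stand-in transform.
# In practice the affected set would come from a diagnosis such as the
# multi-label analysis described above.
AFFECTED_CLASSES = {207, 208}

def augment(image):
    return "augmented:" + image  # stand-in for a real image transform

def class_conditional_augment(image, label, p_affected=0.1, p_default=0.9):
    # apply the transform with a lower probability for affected classes
    p = p_affected if label in AFFECTED_CLASSES else p_default
    return augment(image) if random.random() < p else image
```

The same shape of policy also covers softer variants, e.g. swapping in a weaker transform for affected classes rather than skipping augmentation entirely.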
id: 1703.09145
submitter: Yuguang Liu
authors: Yuguang Liu, Martin D. Levine
title: Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces"
comments: 11 pages, 7 figures, to be presented at CRV 2017
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
versions: v1 (Mon, 27 Mar 2017 15:31:00 GMT)
update_date: 2017-03-28
authors_parsed: [ [ "Liu", "Yuguang", "" ], [ "Levine", "Martin D.", "" ] ]
id: 2001.11093
submitter: Yehia Elkhatib PhD
authors: Abdessalam Elhabbash, Assylbek Jumagaliyev, Gordon S. Blair, Yehia Elkhatib
title: SLO-ML: A Language for Service Level Objective Modelling in Multi-cloud Applications
journal-ref: In International Conference on Utility and Cloud Computing (UCC), ACM, pages 241-250, December 2019
doi: 10.1145/3344341.3368805
categories: cs.DC cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Cloud modelling languages (CMLs) are designed to assist customers in tackling the diversity of services in the cloud market. While many CMLs have been proposed in the literature, they lack practical support for automating the selection of services based on the specific service level objectives of a customer's application. We put forward SLO-ML, a novel and generative CML that captures service level requirements and, subsequently, selects the services that honour customer requirements and generates the deployment code appropriate to those services. We present the architectural design of SLO-ML and the associated broker that realises the deployment operations. We rigorously evaluate SLO-ML using a mixed-methods approach. First, we conduct an experimental case study with a group of researchers and developers using a real-world cloud application. We also assess overheads through an exhaustive set of empirical scalability tests. By reporting the productivity gains and usability experienced by participants, we highlight SLO-ML's potential to enable user-centric cloud brokers. We also discuss its limitations as application requirements grow.
versions: v1 (Wed, 29 Jan 2020 21:05:36 GMT)
update_date: 2020-01-31
authors_parsed: [ [ "Elhabbash", "Abdessalam", "" ], [ "Jumagaliyev", "Assylbek", "" ], [ "Blair", "Gordon S.", "" ], [ "Elkhatib", "Yehia", "" ] ]
id: 2403.07828
submitter: Lucas Vogel
authors: Lucas Vogel, Thomas Springer and Matthias Wählisch
title: From Files to Streams: Revisiting Web History and Exploring Potentials for Future Prospects
doi: 10.1145/3589335.3652001
categories: cs.NI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Over the last 30 years, the World Wide Web has changed significantly. In this paper, we argue that common practices for preparing web pages for delivery conflict with many efforts to present content with minimal latency, one fundamental goal that has driven changes in the WWW. To bolster our argument, we revisit the reasons that led to changes in HTTP and compare them systematically with techniques for preparing web pages. We find that the structure of many web pages leverages features of HTTP/1.1 but hinders the use of recent HTTP features to present content quickly. To improve the situation, we propose fine-grained content segmentation. This would make it possible to exploit the streaming capabilities of recent HTTP versions and to render content as quickly as possible without changing the underlying protocols or web browsers.
versions: v1 (Tue, 12 Mar 2024 17:09:03 GMT), v2 (Sat, 23 Mar 2024 18:27:45 GMT)
update_date: 2024-03-26
authors_parsed: [ [ "Vogel", "Lucas", "" ], [ "Springer", "Thomas", "" ], [ "Wählisch", "Matthias", "" ] ]
id: 2011.06545
submitter: Zihan Tan
authors: Julia Chuzhoy, Sepideh Mahabadi, Zihan Tan
title: Towards Better Approximation of Graph Crossing Number
categories: cs.DS cs.CG
license: http://creativecommons.org/licenses/by/4.0/
abstract: Graph Crossing Number is a fundamental problem with various applications. In this problem, the goal is to draw an input graph $G$ in the plane so as to minimize the number of crossings between the images of its edges. Despite extensive work, non-trivial approximation algorithms are only known for bounded-degree graphs. Even for this special case, the best current algorithm achieves a $\tilde O(\sqrt n)$-approximation, while the best current negative result is APX-hardness. All current approximation algorithms for the problem build on the same paradigm: compute a set $E'$ of edges (called a \emph{planarizing set}) such that $G\setminus E'$ is planar; compute a planar drawing of $G\setminus E'$; then add the drawings of the edges of $E'$ to the resulting drawing. Unfortunately, there are examples of graphs, in which any implementation of this method must incur $\Omega (\text{OPT}^2)$ crossings, where $\text{OPT}$ is the value of the optimal solution. This barrier seems to doom the only known approach to designing approximation algorithms for the problem, and to prevent it from yielding a better than $O(\sqrt n)$-approximation. In this paper we propose a new paradigm that allows us to overcome this barrier. We show an algorithm that, given a bounded-degree graph $G$ and a planarizing set $E'$ of its edges, computes another set $E''$ with $E'\subseteq E''$, such that $|E''|$ is relatively small, and there exists a near-optimal drawing of $G$ in which only edges of $E''$ participate in crossings. This allows us to reduce the Crossing Number problem to \emph{Crossing Number with Rotation System} -- a variant in which the ordering of the edges incident to every vertex is fixed as part of input. We show a randomized algorithm for this new problem, that allows us to obtain an $O(n^{1/2-\epsilon})$-approximation for Crossing Number on bounded-degree graphs, for some constant $\epsilon>0$.
versions: v1 (Thu, 12 Nov 2020 18:04:55 GMT), v2 (Mon, 11 Jan 2021 01:50:37 GMT)
update_date: 2021-01-12
authors_parsed: [ [ "Chuzhoy", "Julia", "" ], [ "Mahabadi", "Sepideh", "" ], [ "Tan", "Zihan", "" ] ]
id: 2004.09674
submitter: Jiahui Liu
authors: Scott Aaronson, Jiahui Liu, Qipeng Liu, Mark Zhandry, Ruizhe Zhang
title: New Approaches for Quantum Copy-Protection
comments: major revisions in definitions and security proofs
categories: cs.CR quant-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal by giving the following results:
- We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC'09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection.
- We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public-key quantum money. Our construction is general, applicable to many recent watermarking schemes.
versions: v1 (Mon, 20 Apr 2020 23:30:17 GMT), v2 (Wed, 22 Apr 2020 04:50:11 GMT), v3 (Thu, 23 Apr 2020 05:37:22 GMT), v4 (Thu, 30 Apr 2020 02:16:28 GMT), v5 (Fri, 11 Sep 2020 17:26:39 GMT), v6 (Wed, 14 Oct 2020 15:41:55 GMT), v7 (Fri, 16 Oct 2020 06:05:36 GMT)
update_date: 2020-10-19
authors_parsed: [ [ "Aaronson", "Scott", "" ], [ "Liu", "Jiahui", "" ], [ "Liu", "Qipeng", "" ], [ "Zhandry", "Mark", "" ], [ "Zhang", "Ruizhe", "" ] ]
2107.05761
Ian Briggs
Ian Briggs and Pavel Panchekha
Faster Math Functions, Soundly
null
null
null
null
cs.MS cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
Standard library implementations of functions like sin and exp optimize for accuracy, not speed, because they are intended for general-purpose use. But applications tolerate inaccuracy from cancellation, rounding error, and singularities-sometimes even very high error-and many application could tolerate error in function implementations as well. This raises an intriguing possibility: speeding up numerical code by tuning standard function implementations. This paper thus introduces OpTuner, an automatic method for selecting the best implementation of mathematical functions at each use site. OpTuner assembles dozens of implementations for the standard mathematical functions from across the speed-accuracy spectrum. OpTuner then uses error Taylor series and integer linear programming to compute optimal assignments of function implementation to use site and presents the user with a speed-accuracy Pareto curve they can use to speed up their code. In a case study on the POV-Ray ray tracer, OpTuner speeds up a critical computation, leading to a whole program speedup of 9% with no change in the program output (whereas human efforts result in slower code and lower-quality output). On a broader study of 37 standard benchmarks, OpTuner matches 216 implementations to 89 use sites and demonstrates speed-ups of 107% for negligible decreases in accuracy and of up to 438% for error-tolerant applications.
[ { "created": "Mon, 12 Jul 2021 22:12:33 GMT", "version": "v1" } ]
2021-07-14
[ [ "Briggs", "Ian", "" ], [ "Panchekha", "Pavel", "" ] ]
Standard library implementations of functions like sin and exp optimize for accuracy, not speed, because they are intended for general-purpose use. But applications tolerate inaccuracy from cancellation, rounding error, and singularities, sometimes even very high error, and many applications could tolerate error in function implementations as well. This raises an intriguing possibility: speeding up numerical code by tuning standard function implementations. This paper thus introduces OpTuner, an automatic method for selecting the best implementation of mathematical functions at each use site. OpTuner assembles dozens of implementations for the standard mathematical functions from across the speed-accuracy spectrum. OpTuner then uses error Taylor series and integer linear programming to compute optimal assignments of function implementation to use site and presents the user with a speed-accuracy Pareto curve they can use to speed up their code. In a case study on the POV-Ray ray tracer, OpTuner speeds up a critical computation, leading to a whole program speedup of 9% with no change in the program output (whereas human efforts result in slower code and lower-quality output). On a broader study of 37 standard benchmarks, OpTuner matches 216 implementations to 89 use sites and demonstrates speed-ups of 107% for negligible decreases in accuracy and of up to 438% for error-tolerant applications.
2401.12586
Yihan Hou
Yihan Hou, Manling Yang, Hao Cui, Lei Wang, Jie Xu, Wei Zeng
C2Ideas: Supporting Creative Interior Color Design Ideation with Large Language Model
26 pages, 11 figures
null
10.1145/3613904.3642224
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Interior color design is a creative process that endeavors to allocate colors to furniture and other elements within an interior space. While much research focuses on generating realistic interior designs, these automated approaches often misalign with user intentions and disregard design rationales. Informed by a need-finding preliminary study, we develop C2Ideas, an innovative system that enables designers to creatively ideate color schemes with an intent-aligned and domain-oriented large language model. C2Ideas integrates a three-stage process: the Idea Prompting stage distills user intentions into color linguistic prompts; the Word-Color Association stage transforms the prompts into semantically and stylistically coherent color schemes; and the Interior Coloring stage assigns colors to interior elements in compliance with design principles. We also develop an interactive interface that enables flexible user refinement and interpretable reasoning. C2Ideas has been evaluated through a series of indoor cases and user studies, demonstrating its effectiveness and designers' high recognition of its interactive functionality.
[ { "created": "Tue, 23 Jan 2024 09:33:48 GMT", "version": "v1" }, { "created": "Sat, 27 Jan 2024 14:38:06 GMT", "version": "v2" } ]
2024-01-30
[ [ "Hou", "Yihan", "" ], [ "Yang", "Manling", "" ], [ "Cui", "Hao", "" ], [ "Wang", "Lei", "" ], [ "Xu", "Jie", "" ], [ "Zeng", "Wei", "" ] ]
Interior color design is a creative process that endeavors to allocate colors to furniture and other elements within an interior space. While much research focuses on generating realistic interior designs, these automated approaches often misalign with user intentions and disregard design rationales. Informed by a need-finding preliminary study, we develop C2Ideas, an innovative system that enables designers to creatively ideate color schemes with an intent-aligned and domain-oriented large language model. C2Ideas integrates a three-stage process: the Idea Prompting stage distills user intentions into color linguistic prompts; the Word-Color Association stage transforms the prompts into semantically and stylistically coherent color schemes; and the Interior Coloring stage assigns colors to interior elements in compliance with design principles. We also develop an interactive interface that enables flexible user refinement and interpretable reasoning. C2Ideas has been evaluated through a series of indoor cases and user studies, demonstrating its effectiveness and designers' high recognition of its interactive functionality.
2312.13975
Zhouxiang Zhao
Zhouxiang Zhao, Zhaohui Yang, Xu Gan, Quoc-Viet Pham, Chongwen Huang, Wei Xu, Zhaoyang Zhang
A Joint Communication and Computation Design for Semantic Wireless Communication with Probability Graph
arXiv admin note: substantial text overlap with arXiv:2310.00015
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we delve into the challenge of optimizing joint communication and computation for semantic communication over wireless networks using a probability graph framework. In the considered model, the base station (BS) extracts compact, compressed semantic information by removing redundant messages based on the stored knowledge base. Specifically, the knowledge base is represented by a probability graph that captures statistical relations. At the user side, the compressed information is accurately deduced using the same probability graph employed by the BS. While this approach introduces an additional computational overhead for semantic information extraction, it significantly curtails communication resource consumption by transmitting concise data. We derive both communication and computation cost models based on the inference process of the probability graph. Building upon these models, we introduce a joint communication and computation resource allocation problem aimed at minimizing the overall energy consumption of the network, while accounting for latency, power, and semantic constraints. To address this problem, we obtain a closed-form solution for transmission power under a fixed semantic compression ratio. Subsequently, we propose an efficient linear search-based algorithm to attain the optimal solution for the considered problem with low computational complexity. Simulation results underscore the effectiveness of our proposed system, showcasing notable improvements compared to conventional non-semantic schemes.
[ { "created": "Thu, 21 Dec 2023 16:03:07 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2023 08:47:11 GMT", "version": "v2" } ]
2023-12-25
[ [ "Zhao", "Zhouxiang", "" ], [ "Yang", "Zhaohui", "" ], [ "Gan", "Xu", "" ], [ "Pham", "Quoc-Viet", "" ], [ "Huang", "Chongwen", "" ], [ "Xu", "Wei", "" ], [ "Zhang", "Zhaoyang", "" ] ]
In this paper, we delve into the challenge of optimizing joint communication and computation for semantic communication over wireless networks using a probability graph framework. In the considered model, the base station (BS) extracts compact, compressed semantic information by removing redundant messages based on the stored knowledge base. Specifically, the knowledge base is represented by a probability graph that captures statistical relations. At the user side, the compressed information is accurately deduced using the same probability graph employed by the BS. While this approach introduces an additional computational overhead for semantic information extraction, it significantly curtails communication resource consumption by transmitting concise data. We derive both communication and computation cost models based on the inference process of the probability graph. Building upon these models, we introduce a joint communication and computation resource allocation problem aimed at minimizing the overall energy consumption of the network, while accounting for latency, power, and semantic constraints. To address this problem, we obtain a closed-form solution for transmission power under a fixed semantic compression ratio. Subsequently, we propose an efficient linear search-based algorithm to attain the optimal solution for the considered problem with low computational complexity. Simulation results underscore the effectiveness of our proposed system, showcasing notable improvements compared to conventional non-semantic schemes.
1801.09859
Pratik Brahma
Pratik Prabhanjan Brahma, Qiuyuan Huang, Dapeng Wu
Structured Memory based Deep Model to Detect as well as Characterize Novel Inputs
7 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While deep learning has pushed the boundaries in various machine learning tasks, current models are still far from replicating many functions that a normal human brain can perform. Explicit memorization-based deep architectures have recently been proposed with the objective of better understanding and prediction. In this work, we design a system that involves a primary learner and an adjacent representational memory bank organized by a comparative learner. This spatially forked deep architecture with a structured memory can simultaneously predict and reason about the nature of an input, which may even belong to a category never seen in the training data, by relating it to the memorized past representations at the higher layers. Characterizing images of unseen object classes in both synthetic and real-world datasets is used as an example to showcase the operational success of the proposed framework.
[ { "created": "Tue, 30 Jan 2018 06:04:11 GMT", "version": "v1" } ]
2018-01-31
[ [ "Brahma", "Pratik Prabhanjan", "" ], [ "Huang", "Qiuyuan", "" ], [ "Wu", "Dapeng", "" ] ]
While deep learning has pushed the boundaries in various machine learning tasks, current models are still far from replicating many functions that a normal human brain can perform. Explicit memorization-based deep architectures have recently been proposed with the objective of better understanding and prediction. In this work, we design a system that involves a primary learner and an adjacent representational memory bank organized by a comparative learner. This spatially forked deep architecture with a structured memory can simultaneously predict and reason about the nature of an input, which may even belong to a category never seen in the training data, by relating it to the memorized past representations at the higher layers. Characterizing images of unseen object classes in both synthetic and real-world datasets is used as an example to showcase the operational success of the proposed framework.
2107.13643
Ahmed Elhagry
Ahmed Elhagry, Mohamed Saeed, Musie Araia
Lighter Stacked Hourglass Human Pose Estimation
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Human pose estimation (HPE) is one of the most challenging tasks in computer vision, as humans are deformable by nature and thus their pose exhibits large variance. HPE aims to correctly identify the main joint locations of a single person or multiple people in a given image or video. Locating the joints of a person in images or videos is an important task that can be applied in action recognition and object tracking. As with many computer vision tasks, HPE has advanced massively with the introduction of deep learning to the field. In this paper, we focus on one of the deep learning-based approaches to HPE proposed by Newell et al., which they named the stacked hourglass network. Their approach is widely used in many applications and is regarded as one of the best works in this area. The main focus of their approach is to capture as much information as possible at all scales so that a coherent understanding of the local features and full-body location is achieved. Their findings demonstrate that important cues such as the orientation of a person, the arrangement of limbs, and the relative location of adjacent joints can be identified from multiple scales at different resolutions. To do so, they make use of a single pipeline that processes images at multiple resolutions and includes skip layers to avoid losing spatial information at each resolution. The image resolution goes as low as 4x4 to ensure that smaller spatial features are included. In this study, we examine the effect of architectural modifications on the computational speed and accuracy of the network.
[ { "created": "Wed, 28 Jul 2021 21:05:34 GMT", "version": "v1" } ]
2021-07-30
[ [ "Elhagry", "Ahmed", "" ], [ "Saeed", "Mohamed", "" ], [ "Araia", "Musie", "" ] ]
Human pose estimation (HPE) is one of the most challenging tasks in computer vision, as humans are deformable by nature and thus their pose exhibits large variance. HPE aims to correctly identify the main joint locations of a single person or multiple people in a given image or video. Locating the joints of a person in images or videos is an important task that can be applied in action recognition and object tracking. As with many computer vision tasks, HPE has advanced massively with the introduction of deep learning to the field. In this paper, we focus on one of the deep learning-based approaches to HPE proposed by Newell et al., which they named the stacked hourglass network. Their approach is widely used in many applications and is regarded as one of the best works in this area. The main focus of their approach is to capture as much information as possible at all scales so that a coherent understanding of the local features and full-body location is achieved. Their findings demonstrate that important cues such as the orientation of a person, the arrangement of limbs, and the relative location of adjacent joints can be identified from multiple scales at different resolutions. To do so, they make use of a single pipeline that processes images at multiple resolutions and includes skip layers to avoid losing spatial information at each resolution. The image resolution goes as low as 4x4 to ensure that smaller spatial features are included. In this study, we examine the effect of architectural modifications on the computational speed and accuracy of the network.
2105.04066
Bin Zhao
Bin Zhao, Haopeng Li, Xiaoqiang Lu, Xuelong Li
Reconstructive Sequence-Graph Network for Video Summarization
Accepted by IEEE TPAMI 2021
null
10.1109/TPAMI.2021.3072117
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploiting the inner-shot and inter-shot dependencies is essential for key-shot based video summarization. Current approaches are mainly devoted to modeling the video as a frame sequence by recurrent neural networks. However, one potential limitation of the sequence models is that they focus on capturing local neighborhood dependencies while the high-order long-distance dependencies are not fully exploited. In general, the frames in each shot record a certain activity and vary smoothly over time, but multi-hop relationships occur frequently among shots. In this case, both the local and global dependencies are important for understanding the video content. Motivated by this, we propose a Reconstructive Sequence-Graph Network (RSGN) to encode the frames and shots as a sequence and a graph hierarchically, where the frame-level dependencies are encoded by a Long Short-Term Memory (LSTM) network, and the shot-level dependencies are captured by a Graph Convolutional Network (GCN). Then, the videos are summarized by exploiting both the local and global dependencies among shots. Besides, a reconstructor is developed to reward the summary generator, so that the generator can be optimized in an unsupervised manner, which can avert the lack of annotated data in video summarization. Furthermore, under the guidance of the reconstruction loss, the predicted summary can better preserve the main video content and shot-level dependencies. Practically, the experimental results on three popular datasets (i.e., SumMe, TVsum and VTW) have demonstrated the superiority of our proposed approach on the summarization task.
[ { "created": "Mon, 10 May 2021 01:47:55 GMT", "version": "v1" } ]
2021-05-11
[ [ "Zhao", "Bin", "" ], [ "Li", "Haopeng", "" ], [ "Lu", "Xiaoqiang", "" ], [ "Li", "Xuelong", "" ] ]
Exploiting the inner-shot and inter-shot dependencies is essential for key-shot based video summarization. Current approaches are mainly devoted to modeling the video as a frame sequence by recurrent neural networks. However, one potential limitation of the sequence models is that they focus on capturing local neighborhood dependencies while the high-order long-distance dependencies are not fully exploited. In general, the frames in each shot record a certain activity and vary smoothly over time, but multi-hop relationships occur frequently among shots. In this case, both the local and global dependencies are important for understanding the video content. Motivated by this, we propose a Reconstructive Sequence-Graph Network (RSGN) to encode the frames and shots as a sequence and a graph hierarchically, where the frame-level dependencies are encoded by a Long Short-Term Memory (LSTM) network, and the shot-level dependencies are captured by a Graph Convolutional Network (GCN). Then, the videos are summarized by exploiting both the local and global dependencies among shots. Besides, a reconstructor is developed to reward the summary generator, so that the generator can be optimized in an unsupervised manner, which can avert the lack of annotated data in video summarization. Furthermore, under the guidance of the reconstruction loss, the predicted summary can better preserve the main video content and shot-level dependencies. Practically, the experimental results on three popular datasets (i.e., SumMe, TVsum and VTW) have demonstrated the superiority of our proposed approach on the summarization task.
1602.06169
Moti Medina
Guy Even, Moti Medina, Boaz Patt-Shamir
Competitive Path Computation and Function Placement in SDNs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a task of serving requests that arrive in an online fashion in Software-Defined Networks (SDNs) with network function virtualization (NFV). Each request specifies an abstract routing and processing "plan" for a flow. Each processing function can be performed by a specified subset of servers in the system. The algorithm needs to either reject the request or admit it and return detailed routing (a.k.a. "path computation") and processing assignment ("function placement"). Each request also specifies the communication bandwidth and the processing load it requires. Components in the system (links and processors) have bounded capacity; a feasible solution may not violate the capacity constraints. Requests have benefits and the goal is to maximize the total benefit of accepted requests. In this paper we first formalize the problem, and propose a new service model that allows us to cope with requests with unknown duration. The new service model augments the traditional accept/reject schemes with a new possible response of "stand by." Our main result is an online algorithm for path computation and function placement that guarantees, in each time step, throughput of at least $\Omega\left(\frac{\text{OPT}^*}{\log n}\right)$, where $n$ is the system size and $\text{OPT}^*$ is an upper bound on the maximal possible throughput. The guarantee holds assuming that requests ask for at most an $O\left(1/{\log n}\right)$-fraction of the capacity of any component in the system. Furthermore, the guarantee holds even though our algorithm serves requests in an all-or-nothing fashion using a single path and never preempts accepted flows, while $\text{OPT}^*$ may serve fractional requests, may split the allocation over multiple paths, and may arbitrarily preempt and resume service of requests.
[ { "created": "Fri, 19 Feb 2016 14:46:13 GMT", "version": "v1" } ]
2016-02-22
[ [ "Even", "Guy", "" ], [ "Medina", "Moti", "" ], [ "Patt-Shamir", "Boaz", "" ] ]
We consider a task of serving requests that arrive in an online fashion in Software-Defined Networks (SDNs) with network function virtualization (NFV). Each request specifies an abstract routing and processing "plan" for a flow. Each processing function can be performed by a specified subset of servers in the system. The algorithm needs to either reject the request or admit it and return detailed routing (a.k.a. "path computation") and processing assignment ("function placement"). Each request also specifies the communication bandwidth and the processing load it requires. Components in the system (links and processors) have bounded capacity; a feasible solution may not violate the capacity constraints. Requests have benefits and the goal is to maximize the total benefit of accepted requests. In this paper we first formalize the problem, and propose a new service model that allows us to cope with requests with unknown duration. The new service model augments the traditional accept/reject schemes with a new possible response of "stand by." Our main result is an online algorithm for path computation and function placement that guarantees, in each time step, throughput of at least $\Omega\left(\frac{\text{OPT}^*}{\log n}\right)$, where $n$ is the system size and $\text{OPT}^*$ is an upper bound on the maximal possible throughput. The guarantee holds assuming that requests ask for at most an $O\left(1/{\log n}\right)$-fraction of the capacity of any component in the system. Furthermore, the guarantee holds even though our algorithm serves requests in an all-or-nothing fashion using a single path and never preempts accepted flows, while $\text{OPT}^*$ may serve fractional requests, may split the allocation over multiple paths, and may arbitrarily preempt and resume service of requests.
2203.04229
Jia Zheng
Kehan Wang and Jia Zheng and Zihan Zhou
Neural Face Identification in a 2D Wireframe Projection of a Manifold Object
To Appear in CVPR 2022. The project page is at https://manycore-research.github.io/faceformer
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In computer-aided design (CAD) systems, 2D line drawings are commonly used to illustrate 3D object designs. To reconstruct the 3D models depicted by a single 2D line drawing, an important key is finding the edge loops in the line drawing which correspond to the actual faces of the 3D object. In this paper, we approach the classical problem of face identification from a novel data-driven point of view. We cast it as a sequence generation problem: starting from an arbitrary edge, we adopt a variant of the popular Transformer model to predict the edges associated with the same face in a natural order. This allows us to avoid searching the space of all possible edge loops with various hand-crafted rules and heuristics as most existing methods do, deal with challenging cases such as curved surfaces and nested edge loops, and leverage additional cues such as face types. We further discuss how possibly imperfect predictions can be used for 3D object reconstruction.
[ { "created": "Tue, 8 Mar 2022 17:47:51 GMT", "version": "v1" } ]
2022-03-09
[ [ "Wang", "Kehan", "" ], [ "Zheng", "Jia", "" ], [ "Zhou", "Zihan", "" ] ]
In computer-aided design (CAD) systems, 2D line drawings are commonly used to illustrate 3D object designs. To reconstruct the 3D models depicted by a single 2D line drawing, an important key is finding the edge loops in the line drawing which correspond to the actual faces of the 3D object. In this paper, we approach the classical problem of face identification from a novel data-driven point of view. We cast it as a sequence generation problem: starting from an arbitrary edge, we adopt a variant of the popular Transformer model to predict the edges associated with the same face in a natural order. This allows us to avoid searching the space of all possible edge loops with various hand-crafted rules and heuristics as most existing methods do, deal with challenging cases such as curved surfaces and nested edge loops, and leverage additional cues such as face types. We further discuss how possibly imperfect predictions can be used for 3D object reconstruction.
2010.00051
Kishor Kunal
Kishor Kunal, Jitesh Poojary, Tonmoy Dhar, Meghna Madhusudan, Ramesh Harjani, Sachin S. Sapatnekar
A general approach for identifying hierarchical symmetry constraints for analog circuit layout
ICCAD 2020
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analog layout synthesis requires some elements in the circuit netlist to be matched and placed symmetrically. However, the set of symmetries is very circuit-specific and a versatile algorithm, applicable to a broad variety of circuits, has been elusive. This paper presents a general methodology for the automated generation of symmetry constraints, and applies these constraints to guide automated layout synthesis. While prior approaches were restricted to identifying simple symmetries, the proposed method operates hierarchically and uses graph-based algorithms to extract multiple axes of symmetry within a circuit. An important ingredient of the algorithm is its ability to identify arrays of repeated structures. In some circuits, the repeated structures are not perfect replicas and can only be found through approximate graph matching. A fast graph neural network based methodology is developed for this purpose, based on evaluating the graph edit distance. The utility of this algorithm is demonstrated on a variety of circuits, including operational amplifiers, data converters, equalizers, and low-noise amplifiers.
[ { "created": "Wed, 30 Sep 2020 18:34:58 GMT", "version": "v1" } ]
2020-10-02
[ [ "Kunal", "Kishor", "" ], [ "Poojary", "Jitesh", "" ], [ "Dhar", "Tonmoy", "" ], [ "Madhusudan", "Meghna", "" ], [ "Harjani", "Ramesh", "" ], [ "Sapatnekar", "Sachin S.", "" ] ]
Analog layout synthesis requires some elements in the circuit netlist to be matched and placed symmetrically. However, the set of symmetries is very circuit-specific and a versatile algorithm, applicable to a broad variety of circuits, has been elusive. This paper presents a general methodology for the automated generation of symmetry constraints, and applies these constraints to guide automated layout synthesis. While prior approaches were restricted to identifying simple symmetries, the proposed method operates hierarchically and uses graph-based algorithms to extract multiple axes of symmetry within a circuit. An important ingredient of the algorithm is its ability to identify arrays of repeated structures. In some circuits, the repeated structures are not perfect replicas and can only be found through approximate graph matching. A fast graph neural network based methodology is developed for this purpose, based on evaluating the graph edit distance. The utility of this algorithm is demonstrated on a variety of circuits, including operational amplifiers, data converters, equalizers, and low-noise amplifiers.
2305.05523
Yini Fang
Yini Fang, Didan Deng, Liang Wu, Frederic Jumelle, Bertram Shi
RMES: Real-Time Micro-Expression Spotting Using Phase From Riesz Pyramid
This paper will be published in ICME 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Micro-expressions (MEs) are involuntary and subtle facial expressions that are thought to reveal feelings people are trying to hide. ME spotting detects the temporal intervals containing MEs in videos. Detecting such quick and subtle motions from long videos is difficult. Recent works leverage detailed facial motion representations, such as optical flow, and deep learning models, leading to high computational complexity. To reduce computational complexity and achieve real-time operation, we propose RMES, a real-time ME spotting framework. We represent motion using phase computed by the Riesz Pyramid, and feed this motion representation into a three-stream shallow CNN, which predicts the likelihood of each frame belonging to an ME. In comparison to optical flow, phase provides more localized motion estimates, which are essential for ME spotting, resulting in higher performance. Using phase also reduces the required computation of the ME spotting pipeline by 77.8%. Despite its relative simplicity and low computational complexity, our framework achieves state-of-the-art performance on two public datasets: CAS(ME)2 and SAMM Long Videos.
[ { "created": "Tue, 9 May 2023 15:22:18 GMT", "version": "v1" } ]
2023-05-10
[ [ "Fang", "Yini", "" ], [ "Deng", "Didan", "" ], [ "Wu", "Liang", "" ], [ "Jumelle", "Frederic", "" ], [ "Shi", "Bertram", "" ] ]
Micro-expressions (MEs) are involuntary and subtle facial expressions that are thought to reveal feelings people are trying to hide. ME spotting detects the temporal intervals containing MEs in videos. Detecting such quick and subtle motions from long videos is difficult. Recent works leverage detailed facial motion representations, such as optical flow, and deep learning models, leading to high computational complexity. To reduce computational complexity and achieve real-time operation, we propose RMES, a real-time ME spotting framework. We represent motion using phase computed by the Riesz Pyramid, and feed this motion representation into a three-stream shallow CNN, which predicts the likelihood of each frame belonging to an ME. In comparison to optical flow, phase provides more localized motion estimates, which are essential for ME spotting, resulting in higher performance. Using phase also reduces the required computation of the ME spotting pipeline by 77.8%. Despite its relative simplicity and low computational complexity, our framework achieves state-of-the-art performance on two public datasets: CAS(ME)2 and SAMM Long Videos.
1609.02036
Zhirong Wu
Zhirong Wu, Dahua Lin, Xiaoou Tang
Deep Markov Random Field for Image Modeling
Accepted at ECCV 2016
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
[ { "created": "Wed, 7 Sep 2016 15:56:36 GMT", "version": "v1" } ]
2016-09-08
[ [ "Wu", "Zhirong", "" ], [ "Lin", "Dahua", "" ], [ "Tang", "Xiaoou", "" ] ]
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
2112.07148
Hyungju Ahn
Hyung-Ju Ahn and Dae-Hyeok Lee
Decoding 3D Representation of Visual Imagery EEG using Attention-based Dual-Stream Convolutional Neural Network
Submitted to 2022 10th IEEE International Winter Conference on Brain-Computer Interface
null
null
null
cs.HC
http://creativecommons.org/publicdomain/zero/1.0/
Deep neural networks have been successfully applied to electroencephalogram (EEG)-based brain-computer interfaces. However, in most studies, the correlations between EEG channels and inter-region relationships are not well utilized, resulting in suboptimal spatial feature extraction. In this study, we propose an attention-based dual-stream 3D-convolutional neural network that enhances spatial feature extraction by emphasizing the relationships between channels with dot-product-based channel attention and 3D convolution. The proposed method outperformed the comparative models, achieving an accuracy of 0.58 for 4-class visual imagery (VI) EEG classification. In statistical and neurophysiological analyses, visual motion imagery showed higher alpha-power spectral density (PSD) over the visual cortex than static VI. In addition, the VI of swarm dispersion showed higher beta-PSD over the prefrontal cortex than the VI of swarm aggregation.
[ { "created": "Tue, 14 Dec 2021 04:05:04 GMT", "version": "v1" } ]
2021-12-15
[ [ "Ahn", "Hyung-Ju", "" ], [ "Lee", "Dae-Hyeok", "" ] ]
Deep neural networks have been successfully applied to electroencephalogram (EEG)-based brain-computer interfaces. However, in most studies, the correlations between EEG channels and inter-region relationships are not well utilized, resulting in suboptimal spatial feature extraction. In this study, we propose an attention-based dual-stream 3D-convolutional neural network that enhances spatial feature extraction by emphasizing the relationships between channels with dot-product-based channel attention and 3D convolution. The proposed method outperformed the comparative models, achieving an accuracy of 0.58 for 4-class visual imagery (VI) EEG classification. In statistical and neurophysiological analyses, visual motion imagery showed higher alpha-power spectral density (PSD) over the visual cortex than static VI. In addition, the VI of swarm dispersion showed higher beta-PSD over the prefrontal cortex than the VI of swarm aggregation.
1912.03185
Celine Swennenhuis
Jesper Nederlof, C\'eline Swennenhuis
Parameterized Complexity of Partial Scheduling
22 pages, 3 figures. Updated version
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a natural variant of scheduling that we call \emph{partial scheduling}: In this variant an instance of a scheduling problem along with an integer $k$ is given and one seeks an optimal schedule where not all, but only $k$ jobs, have to be processed. Specifically, we aim to determine the fine-grained parameterized complexity of partial scheduling problems parameterized by $k$ for all variants of scheduling problems that minimize the makespan and involve unit/arbitrary processing times, identical/unrelated parallel machines, release/due dates, and precedence constraints. That is, we investigate whether algorithms with runtimes of the type $f(k)n^{\mathcal{O}(1)}$ or $n^{\mathcal{O}(f(k))}$ exist for a function $f$ that is as small as possible. Our contribution is two-fold: First, we categorize each variant to be either in $\mathsf{P}$, $\mathsf{NP}$-complete and fixed-parameter tractable by $k$, or $\mathsf{W}[1]$-hard parameterized by $k$. Second, for many interesting cases we further investigate the run time on a finer scale and obtain run times that are (almost) optimal assuming the Exponential Time Hypothesis. As one of our main technical contributions, we give an $\mathcal{O}(8^kk(|V|+|E|))$ time algorithm to solve instances of partial scheduling problems minimizing the makespan with unit length jobs, precedence constraints and release dates, where $G=(V,E)$ is the graph with precedence constraints.
[ { "created": "Fri, 6 Dec 2019 15:32:29 GMT", "version": "v1" }, { "created": "Thu, 1 Oct 2020 14:13:21 GMT", "version": "v2" } ]
2020-10-02
[ [ "Nederlof", "Jesper", "" ], [ "Swennenhuis", "Céline", "" ] ]
We study a natural variant of scheduling that we call \emph{partial scheduling}: In this variant an instance of a scheduling problem along with an integer $k$ is given and one seeks an optimal schedule where not all, but only $k$ jobs, have to be processed. Specifically, we aim to determine the fine-grained parameterized complexity of partial scheduling problems parameterized by $k$ for all variants of scheduling problems that minimize the makespan and involve unit/arbitrary processing times, identical/unrelated parallel machines, release/due dates, and precedence constraints. That is, we investigate whether algorithms with runtimes of the type $f(k)n^{\mathcal{O}(1)}$ or $n^{\mathcal{O}(f(k))}$ exist for a function $f$ that is as small as possible. Our contribution is two-fold: First, we categorize each variant to be either in $\mathsf{P}$, $\mathsf{NP}$-complete and fixed-parameter tractable by $k$, or $\mathsf{W}[1]$-hard parameterized by $k$. Second, for many interesting cases we further investigate the run time on a finer scale and obtain run times that are (almost) optimal assuming the Exponential Time Hypothesis. As one of our main technical contributions, we give an $\mathcal{O}(8^kk(|V|+|E|))$ time algorithm to solve instances of partial scheduling problems minimizing the makespan with unit length jobs, precedence constraints and release dates, where $G=(V,E)$ is the graph with precedence constraints.
1601.05677
Yonglong Li
Yonglong Li, Aleksandar Kavcic, Guangyue Han
On the Capacity of Multilevel NAND Flash Memory Channels
Submitted to IEEE Transactions on Information Theory
null
10.1109/ISIT.2016.7541623
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we initiate the first information-theoretic study of multilevel NAND flash memory channels with intercell interference. More specifically, for a multilevel NAND flash memory channel under mild assumptions, we first prove that such a channel is indecomposable and satisfies the asymptotic equipartition property; we then prove that stationary processes achieve its information capacity and, consequently, that as its order tends to infinity, its Markov capacity converges to its information capacity; finally, we establish that its operational capacity is equal to its information capacity. Our results suggest that it is highly plausible to apply the ideas and techniques used in computing the capacity of finite-state channels, which are relatively better explored, to computing the capacity of multilevel NAND flash memory channels.
[ { "created": "Thu, 21 Jan 2016 15:34:29 GMT", "version": "v1" }, { "created": "Sat, 7 May 2016 11:16:05 GMT", "version": "v2" } ]
2016-11-15
[ [ "Li", "Yonglong", "" ], [ "Kavcic", "Aleksandar", "" ], [ "Han", "Guangyue", "" ] ]
In this paper, we initiate the first information-theoretic study of multilevel NAND flash memory channels with intercell interference. More specifically, for a multilevel NAND flash memory channel under mild assumptions, we first prove that such a channel is indecomposable and satisfies the asymptotic equipartition property; we then prove that stationary processes achieve its information capacity and, consequently, that as its order tends to infinity, its Markov capacity converges to its information capacity; finally, we establish that its operational capacity is equal to its information capacity. Our results suggest that it is highly plausible to apply the ideas and techniques used in computing the capacity of finite-state channels, which are relatively better explored, to computing the capacity of multilevel NAND flash memory channels.
1309.4323
Maria Saumell
Ferran Hurtado, Maarten L\"offler, In\^es Matos, Vera Sacrist\'an, Maria Saumell, Rodrigo I. Silveira, Frank Staals
Terrain visibility with multiple viewpoints
Manuscript accompanying shorter version in ISAAC 2013; some algorithms and bounds have improved with respect to the ISAAC version. The journal version will appear in IJCGA (without Section 4)
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of visibility in polyhedral terrains in the presence of multiple viewpoints. We consider a triangulated terrain with $m>1$ viewpoints (or guards) located on the terrain surface. A point on the terrain is considered \emph{visible} if it has an unobstructed line of sight to at least one viewpoint. We study several natural and fundamental visibility structures: (1) the visibility map, which is a partition of the terrain into visible and invisible regions; (2) the \emph{colored} visibility map, which is a partition of the terrain into regions whose points have exactly the same visible viewpoints; and (3) the Voronoi visibility map, which is a partition of the terrain into regions whose points have the same closest visible viewpoint. We study the complexity of each structure for both 1.5D and 2.5D terrains, and provide efficient algorithms to construct them. Our algorithm for the visibility map in 2.5D terrains improves on the only existing algorithm in this setting. To the best of our knowledge, the other structures have not been studied before.
[ { "created": "Tue, 17 Sep 2013 14:26:26 GMT", "version": "v1" }, { "created": "Sun, 11 May 2014 21:29:56 GMT", "version": "v2" }, { "created": "Mon, 1 Dec 2014 09:49:36 GMT", "version": "v3" } ]
2014-12-02
[ [ "Hurtado", "Ferran", "" ], [ "Löffler", "Maarten", "" ], [ "Matos", "Inês", "" ], [ "Sacristán", "Vera", "" ], [ "Saumell", "Maria", "" ], [ "Silveira", "Rodrigo I.", "" ], [ "Staals", "Frank", "" ] ]
We study the problem of visibility in polyhedral terrains in the presence of multiple viewpoints. We consider a triangulated terrain with $m>1$ viewpoints (or guards) located on the terrain surface. A point on the terrain is considered \emph{visible} if it has an unobstructed line of sight to at least one viewpoint. We study several natural and fundamental visibility structures: (1) the visibility map, which is a partition of the terrain into visible and invisible regions; (2) the \emph{colored} visibility map, which is a partition of the terrain into regions whose points have exactly the same visible viewpoints; and (3) the Voronoi visibility map, which is a partition of the terrain into regions whose points have the same closest visible viewpoint. We study the complexity of each structure for both 1.5D and 2.5D terrains, and provide efficient algorithms to construct them. Our algorithm for the visibility map in 2.5D terrains improves on the only existing algorithm in this setting. To the best of our knowledge, the other structures have not been studied before.
2005.09826
Zhaoji Zhang
Zhaoji Zhang, Ying Li, Chongwen Huang, Qinghua Guo, Lei Liu, Chau Yuen, and Yong Liang Guan
User Activity Detection and Channel Estimation for Grant-Free Random Access in LEO Satellite-Enabled Internet-of-Things
14 pages, 9 figures, accepted by Internet of Things Journal
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent advances in dense low-earth orbit (LEO) constellations, LEO satellite networks have become a promising solution for providing global coverage for Internet-of-Things (IoT) services. Confronted with the sporadic transmissions of randomly activated IoT devices, we consider the random access (RA) mechanism and propose a grant-free RA (GF-RA) scheme to reduce the access delay to the mobile LEO satellites. A Bernoulli-Rician message passing with expectation maximization (BR-MP-EM) algorithm is proposed for this terrestrial-satellite GF-RA system to address the user activity detection (UAD) and channel estimation (CE) problem. The BR-MP-EM algorithm is divided into two stages. In the inner iterations, the Bernoulli messages and Rician messages are updated for the joint UAD and CE problem. Based on the output of the inner iterations, the expectation maximization (EM) method is employed in the outer iterations to update the hyper-parameters related to the channel impairments. Finally, simulation results show the UAD and CE accuracy of the proposed BR-MP-EM algorithm, as well as its robustness against channel impairments.
[ { "created": "Wed, 20 May 2020 02:19:01 GMT", "version": "v1" } ]
2020-05-21
[ [ "Zhang", "Zhaoji", "" ], [ "Li", "Ying", "" ], [ "Huang", "Chongwen", "" ], [ "Guo", "Qinghua", "" ], [ "Liu", "Lei", "" ], [ "Yuen", "Chau", "" ], [ "Guan", "Yong Liang", "" ] ]
With recent advances in dense low-earth orbit (LEO) constellations, LEO satellite networks have become a promising solution for providing global coverage for Internet-of-Things (IoT) services. Confronted with the sporadic transmissions of randomly activated IoT devices, we consider the random access (RA) mechanism and propose a grant-free RA (GF-RA) scheme to reduce the access delay to the mobile LEO satellites. A Bernoulli-Rician message passing with expectation maximization (BR-MP-EM) algorithm is proposed for this terrestrial-satellite GF-RA system to address the user activity detection (UAD) and channel estimation (CE) problem. The BR-MP-EM algorithm is divided into two stages. In the inner iterations, the Bernoulli messages and Rician messages are updated for the joint UAD and CE problem. Based on the output of the inner iterations, the expectation maximization (EM) method is employed in the outer iterations to update the hyper-parameters related to the channel impairments. Finally, simulation results show the UAD and CE accuracy of the proposed BR-MP-EM algorithm, as well as its robustness against channel impairments.
1111.7169
Georgios J. Fakas
Georgios J. Fakas, Zhi Cai, Nikos Mamoulis
Size-l Object Summaries for Relational Keyword Search
VLDB2012
Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 3, pp. 229-240 (2011)
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A previously proposed keyword search paradigm produces, as a query result, a ranked list of Object Summaries (OSs). An OS is a tree structure of related tuples that summarizes all data held in a relational database about a particular Data Subject (DS). However, some of these OSs are very large in size and therefore unfriendly to users that initially prefer synoptic information before proceeding to more comprehensive information about a particular DS. In this paper, we investigate the effective and efficient retrieval of concise and informative OSs. We argue that a good size-l OS should be a stand-alone and meaningful synopsis of the most important information about the particular DS. More precisely, we define a size-l OS as a partial OS composed of l important tuples. We propose three algorithms for the efficient generation of size-l OSs (in addition to the optimal approach which requires exponential time). Experimental evaluation on DBLP and TPC-H databases verifies the effectiveness and efficiency of our approach.
[ { "created": "Wed, 30 Nov 2011 14:11:28 GMT", "version": "v1" } ]
2011-12-01
[ [ "Fakas", "Georgios J.", "" ], [ "Cai", "Zhi", "" ], [ "Mamoulis", "Nikos", "" ] ]
A previously proposed keyword search paradigm produces, as a query result, a ranked list of Object Summaries (OSs). An OS is a tree structure of related tuples that summarizes all data held in a relational database about a particular Data Subject (DS). However, some of these OSs are very large in size and therefore unfriendly to users that initially prefer synoptic information before proceeding to more comprehensive information about a particular DS. In this paper, we investigate the effective and efficient retrieval of concise and informative OSs. We argue that a good size-l OS should be a stand-alone and meaningful synopsis of the most important information about the particular DS. More precisely, we define a size-l OS as a partial OS composed of l important tuples. We propose three algorithms for the efficient generation of size-l OSs (in addition to the optimal approach which requires exponential time). Experimental evaluation on DBLP and TPC-H databases verifies the effectiveness and efficiency of our approach.
1704.02824
Kunal Suri
Manoj Kannan Soundarapandian, Kunal Suri, Juan Cadavid, Ion Barosan, Mark Van Den Brand, Mauricio Alferez, Sebastien Gerard
Towards Industry 4.0: Gap Analysis between Current Automotive MES and Industry Standards using Model-Based Requirement Engineering
7 Pages, Accepted Paper (Preprint) at Third International Workshop on Automotive Software Architectures (WASA 2017), 03 April 2017 to 07 April 2017, Gothenburg, Sweden
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dawn of the fourth industrial revolution, Industry 4.0, has created great enthusiasm among companies and researchers by giving them an opportunity to pave the path towards the vision of a connected smart factory ecosystem. However, in the context of the automotive industry, there is an evident gap between the requirements supported by current automotive manufacturing execution systems (MES) and the requirements proposed by industrial standards from the International Society of Automation (ISA), such as ISA-95 and ISA-88, on which Industry 4.0 is built. In this paper, we bridge this gap by following a model-based requirements engineering approach along with a gap analysis process. Our work is divided into three phases: (i) an automotive MES tool selection phase, (ii) a requirements modeling phase, and (iii) a gap analysis phase based on the modeled requirements. During the MES tool selection phase, we used known reliable sources such as MES product survey reports and white papers that provide in-depth and comprehensive information about various comparison criteria and the tool vendor list for the current MES landscape. During the requirements modeling phase, we specified requirements derived from the needs of the ISA-95 and ISA-88 industrial standards using the general-purpose Systems Modeling Language (SysML). During the gap analysis phase, we identify the misalignment between the standard requirements and the existing software tools' compliance with those standards.
[ { "created": "Mon, 10 Apr 2017 12:15:43 GMT", "version": "v1" } ]
2017-04-11
[ [ "Soundarapandian", "Manoj Kannan", "" ], [ "Suri", "Kunal", "" ], [ "Cadavid", "Juan", "" ], [ "Barosan", "Ion", "" ], [ "Brand", "Mark Van Den", "" ], [ "Alferez", "Mauricio", "" ], [ "Gerard", "Sebastien", "" ] ]
The dawn of the fourth industrial revolution, Industry 4.0, has created great enthusiasm among companies and researchers by giving them an opportunity to pave the path towards the vision of a connected smart factory ecosystem. However, in the context of the automotive industry, there is an evident gap between the requirements supported by current automotive manufacturing execution systems (MES) and the requirements proposed by industrial standards from the International Society of Automation (ISA), such as ISA-95 and ISA-88, on which Industry 4.0 is built. In this paper, we bridge this gap by following a model-based requirements engineering approach along with a gap analysis process. Our work is divided into three phases: (i) an automotive MES tool selection phase, (ii) a requirements modeling phase, and (iii) a gap analysis phase based on the modeled requirements. During the MES tool selection phase, we used known reliable sources such as MES product survey reports and white papers that provide in-depth and comprehensive information about various comparison criteria and the tool vendor list for the current MES landscape. During the requirements modeling phase, we specified requirements derived from the needs of the ISA-95 and ISA-88 industrial standards using the general-purpose Systems Modeling Language (SysML). During the gap analysis phase, we identify the misalignment between the standard requirements and the existing software tools' compliance with those standards.
1808.04730
Lynton Ardizzone
Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen, Lena Maier-Hein, Carsten Rother, Ullrich K\"othe
Analyzing Inverse Problems with Invertible Neural Networks
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many tasks, in particular in natural science, the goal is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters. In this setting, the posterior parameter distribution, conditioned on an input measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task -- so-called Invertible Neural Networks (INNs). Although INNs are not new, they have, so far, received little attention in the literature. While classical neural networks attempt to solve the ambiguous inverse problem directly, INNs are able to learn it jointly with the well-defined forward process, using additional latent output variables to capture the information otherwise lost. Given a specific measurement and sampled latent variables, the inverse pass of the INN provides a full distribution over parameter space. We verify experimentally, on artificial data and real-world problems from astrophysics and medicine, that INNs are a powerful analysis tool to find multi-modalities in parameter space, to uncover parameter correlations, and to identify unrecoverable parameters.
[ { "created": "Tue, 14 Aug 2018 14:58:59 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2018 13:06:05 GMT", "version": "v2" }, { "created": "Wed, 6 Feb 2019 15:45:02 GMT", "version": "v3" } ]
2019-02-07
[ [ "Ardizzone", "Lynton", "" ], [ "Kruse", "Jakob", "" ], [ "Wirkert", "Sebastian", "" ], [ "Rahner", "Daniel", "" ], [ "Pellegrini", "Eric W.", "" ], [ "Klessen", "Ralf S.", "" ], [ "Maier-Hein", "Lena", "" ], [ "Rother", "Carsten", "" ], [ "Köthe", "Ullrich", "" ] ]
In many tasks, in particular in natural science, the goal is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters. In this setting, the posterior parameter distribution, conditioned on an input measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task -- so-called Invertible Neural Networks (INNs). Although INNs are not new, they have, so far, received little attention in the literature. While classical neural networks attempt to solve the ambiguous inverse problem directly, INNs are able to learn it jointly with the well-defined forward process, using additional latent output variables to capture the information otherwise lost. Given a specific measurement and sampled latent variables, the inverse pass of the INN provides a full distribution over parameter space. We verify experimentally, on artificial data and real-world problems from astrophysics and medicine, that INNs are a powerful analysis tool to find multi-modalities in parameter space, to uncover parameter correlations, and to identify unrecoverable parameters.
2210.06444
Janvijay Singh
Janvijay Singh, Fan Bai, Zhen Wang
Entity Tracking via Effective Use of Multi-Task Learning Model and Mention-guided Decoding
9 pages, 1 figure, EACL 2023 Main Conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Cross-task knowledge transfer via multi-task learning has recently made remarkable progress in general NLP tasks. However, entity tracking on procedural text has not benefited from such knowledge transfer because of its distinct formulation, i.e., tracking the event flow while following structural constraints. State-of-the-art entity tracking approaches either design complicated model architectures or rely on task-specific pre-training to achieve good results. To this end, we propose MeeT, a Multi-task learning-enabled entity Tracking approach, which utilizes knowledge gained from general-domain tasks to improve entity tracking. Specifically, MeeT first fine-tunes T5, a pre-trained multi-task learning model, with entity tracking-specialized QA formats, and then employs our customized decoding strategy to satisfy the structural constraints. MeeT achieves state-of-the-art performance on two popular entity tracking datasets, even though it does not require any task-specific architecture design or pre-training.
[ { "created": "Wed, 12 Oct 2022 17:46:16 GMT", "version": "v1" }, { "created": "Sun, 12 Feb 2023 01:25:24 GMT", "version": "v2" } ]
2023-02-14
[ [ "Singh", "Janvijay", "" ], [ "Bai", "Fan", "" ], [ "Wang", "Zhen", "" ] ]
Cross-task knowledge transfer via multi-task learning has recently made remarkable progress in general NLP tasks. However, entity tracking on procedural text has not benefited from such knowledge transfer because of its distinct formulation, i.e., tracking the event flow while following structural constraints. State-of-the-art entity tracking approaches either design complicated model architectures or rely on task-specific pre-training to achieve good results. To this end, we propose MeeT, a Multi-task learning-enabled entity Tracking approach, which utilizes knowledge gained from general-domain tasks to improve entity tracking. Specifically, MeeT first fine-tunes T5, a pre-trained multi-task learning model, with entity tracking-specialized QA formats, and then employs our customized decoding strategy to satisfy the structural constraints. MeeT achieves state-of-the-art performance on two popular entity tracking datasets, even though it does not require any task-specific architecture design or pre-training.
1810.08704
Bo Fu
Bo Fu, Kumar Shaurya Shankar, Nathan Michael
RaD-VIO: Rangefinder-aided Downward Visual-Inertial Odometry
Accepted by ICRA 2019
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art forward-facing monocular visual-inertial odometry algorithms are often brittle in practice, especially when dealing with initialisation and motion in directions that render the state unobservable. In such cases, having a reliable complementary odometry algorithm enables robust and resilient flight. Using the common local planarity assumption, we present a fast, dense, and direct frame-to-frame visual-inertial odometry algorithm for downward-facing cameras that minimises a joint cost function involving a homography-based photometric cost and an IMU regularisation term. Via extensive evaluation in a variety of scenarios, we demonstrate superior performance to existing state-of-the-art downward-facing odometry algorithms for Micro Aerial Vehicles (MAVs).
[ { "created": "Fri, 19 Oct 2018 22:27:53 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 03:08:14 GMT", "version": "v2" } ]
2019-05-15
[ [ "Fu", "Bo", "" ], [ "Shankar", "Kumar Shaurya", "" ], [ "Michael", "Nathan", "" ] ]
State-of-the-art forward-facing monocular visual-inertial odometry algorithms are often brittle in practice, especially when dealing with initialisation and motion in directions that render the state unobservable. In such cases, having a reliable complementary odometry algorithm enables robust and resilient flight. Using the common local planarity assumption, we present a fast, dense, and direct frame-to-frame visual-inertial odometry algorithm for downward-facing cameras that minimises a joint cost function involving a homography-based photometric cost and an IMU regularisation term. Via extensive evaluation in a variety of scenarios, we demonstrate superior performance to existing state-of-the-art downward-facing odometry algorithms for Micro Aerial Vehicles (MAVs).
2103.12095
Fabricio Murai
Davi Pedrosa de Aguiar and Fabricio Murai
Am I fit for this physical activity? Neural embedding of physical conditioning from inertial sensors
To be published in 10th Brazilian Conference on Intelligent Systems, BRACIS 2021
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Inertial Measurement Unit (IMU) sensors are present in everyday devices such as smartphones and fitness watches. As a result, the array of health-related research and applications that tap into this data has been growing, but little attention has been devoted to the prediction of an individual's heart rate (HR) from IMU data while undergoing a physical activity. Would that even be possible? If so, this could be used to design personalized sets of aerobic exercises, for instance. In this work, we show that it is viable to obtain accurate HR predictions from IMU data using Recurrent Neural Networks, provided only access to HR and IMU data from a short-lived, previously executed activity. We propose a novel method for initializing an RNN's hidden state vectors, using a specialized network that attempts to extract an embedding of the physical conditioning (PCE) of a subject. We show that using a discriminator in the training phase to help the model learn whether two PCEs belong to the same individual further reduces the prediction error. We evaluate the proposed model when predicting the HR of 23 subjects performing a variety of physical activities from IMU data available in public datasets (PAMAP2, PPG-DaLiA). For comparison, we use as baselines the only model specifically proposed for this task and an adapted state-of-the-art model for Human Activity Recognition (HAR), a closely related task. Our method, PCE-LSTM, yields over 10% lower mean absolute error. We demonstrate empirically that this error reduction is in part due to the use of the PCE. Lastly, we use the two datasets (PPG-DaLiA, WESAD) to show that PCE-LSTM can also be successfully applied when photoplethysmography (PPG) sensors are available, outperforming the state-of-the-art deep learning baselines by more than 30%.
[ { "created": "Mon, 22 Mar 2021 18:00:27 GMT", "version": "v1" }, { "created": "Fri, 20 Aug 2021 00:23:38 GMT", "version": "v2" } ]
2021-08-23
[ [ "de Aguiar", "Davi Pedrosa", "" ], [ "Murai", "Fabricio", "" ] ]
Inertial Measurement Unit (IMU) sensors are present in everyday devices such as smartphones and fitness watches. As a result, the array of health-related research and applications that tap into this data has been growing, but little attention has been devoted to the prediction of an individual's heart rate (HR) from IMU data while undergoing a physical activity. Would that even be possible? If so, this could be used to design personalized sets of aerobic exercises, for instance. In this work, we show that it is viable to obtain accurate HR predictions from IMU data using Recurrent Neural Networks, provided only access to HR and IMU data from a short-lived, previously executed activity. We propose a novel method for initializing an RNN's hidden state vectors, using a specialized network that attempts to extract an embedding of the physical conditioning (PCE) of a subject. We show that using a discriminator in the training phase to help the model learn whether two PCEs belong to the same individual further reduces the prediction error. We evaluate the proposed model when predicting the HR of 23 subjects performing a variety of physical activities from IMU data available in public datasets (PAMAP2, PPG-DaLiA). For comparison, we use as baselines the only model specifically proposed for this task and an adapted state-of-the-art model for Human Activity Recognition (HAR), a closely related task. Our method, PCE-LSTM, yields over 10% lower mean absolute error. We demonstrate empirically that this error reduction is in part due to the use of the PCE. Lastly, we use the two datasets (PPG-DaLiA, WESAD) to show that PCE-LSTM can also be successfully applied when photoplethysmography (PPG) sensors are available, outperforming the state-of-the-art deep learning baselines by more than 30%.
1909.11185
Hayoung Chung
Hayoung Chung, Oded Amir, H. Alicia Kim
Level-set topology optimization considering nonlinear thermoelasticity
null
null
10.1016/j.cma.2019.112735
null
cs.CE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In elevated-temperature environments, elastic structures experience a change of the stress-free state of the body that can strongly influence the optimal topology of the structure. This work presents level-set based topology optimization of structures undergoing large deformations due to thermal and mechanical loads. The nonlinear analysis model is constructed by multiplicatively decomposing thermal and mechanical effects and introducing an intermediate stress-free state between the undeformed and deformed coordinates. By incorporating the thermoelastic nonlinearity into the level-set topology optimization scheme, wider design spaces can be explored with the consideration of both mechanical and thermal loads. Four numerical examples are presented that demonstrate how temperature changes affect the optimal design of large-deforming structures. In particular, we show how optimization can manipulate the material layout in order to create a counteracting effect between thermal and mechanical loads, even up to a degree that buckling and snap-through are suppressed. Hence the consideration of large deformations in conjunction with thermoelasticity opens many new possibilities for controlling and manipulating the thermo-mechanical response via topology optimization.
[ { "created": "Sat, 21 Sep 2019 07:11:09 GMT", "version": "v1" } ]
2020-02-19
[ [ "Chung", "Hayoung", "" ], [ "Amir", "Oded", "" ], [ "Kim", "H. Alicia", "" ] ]
In elevated-temperature environments, elastic structures experience a change of the stress-free state of the body that can strongly influence the optimal topology of the structure. This work presents level-set based topology optimization of structures undergoing large deformations due to thermal and mechanical loads. The nonlinear analysis model is constructed by multiplicatively decomposing thermal and mechanical effects and introducing an intermediate stress-free state between the undeformed and deformed coordinates. By incorporating the thermoelastic nonlinearity into the level-set topology optimization scheme, wider design spaces can be explored with the consideration of both mechanical and thermal loads. Four numerical examples are presented that demonstrate how temperature changes affect the optimal design of large-deforming structures. In particular, we show how optimization can manipulate the material layout in order to create a counteracting effect between thermal and mechanical loads, even up to a degree that buckling and snap-through are suppressed. Hence the consideration of large deformations in conjunction with thermoelasticity opens many new possibilities for controlling and manipulating the thermo-mechanical response via topology optimization.
2307.02147
Yimeng Bai
Yang Zhang, Zhiyu Hu, Yimeng Bai, Jiancan Wu, Qifan Wang, Fuli Feng
Recommendation Unlearning via Influence Function
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommendation unlearning is an emerging task to serve users for erasing unusable data (e.g., some historical behaviors) from a well-trained recommender model. Existing methods process unlearning requests by fully or partially retraining the model after removing the unusable data. However, these methods are impractical due to the high computation cost of full retraining and the highly possible performance damage of partial training. In this light, a desired recommendation unlearning method should obtain a similar model as full retraining in a more efficient manner, i.e., achieving complete, efficient and harmless unlearning. In this work, we propose a new Influence Function-based Recommendation Unlearning (IFRU) framework, which efficiently updates the model without retraining by estimating the influence of the unusable data on the model via the influence function. Given that recent recommender models use historical data for both the construction of the optimization loss and the computational graph (e.g., neighborhood aggregation), IFRU jointly estimates the direct influence of unusable data on the optimization loss and the spillover influence on the computational graph to pursue complete unlearning. Furthermore, we propose an importance-based pruning algorithm to reduce the cost of the influence function. IFRU is harmless and applicable to mainstream differentiable models. Extensive experiments demonstrate that IFRU achieves more than 250 times acceleration compared to retraining-based methods with recommendation performance comparable to full retraining. Codes are available at https://github.com/baiyimeng/IFRU.
[ { "created": "Wed, 5 Jul 2023 09:42:51 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 05:14:56 GMT", "version": "v2" } ]
2024-07-18
[ [ "Zhang", "Yang", "" ], [ "Hu", "Zhiyu", "" ], [ "Bai", "Yimeng", "" ], [ "Wu", "Jiancan", "" ], [ "Wang", "Qifan", "" ], [ "Feng", "Fuli", "" ] ]
Recommendation unlearning is an emerging task to serve users for erasing unusable data (e.g., some historical behaviors) from a well-trained recommender model. Existing methods process unlearning requests by fully or partially retraining the model after removing the unusable data. However, these methods are impractical due to the high computation cost of full retraining and the highly possible performance damage of partial training. In this light, a desired recommendation unlearning method should obtain a similar model as full retraining in a more efficient manner, i.e., achieving complete, efficient and harmless unlearning. In this work, we propose a new Influence Function-based Recommendation Unlearning (IFRU) framework, which efficiently updates the model without retraining by estimating the influence of the unusable data on the model via the influence function. Given that recent recommender models use historical data for both the construction of the optimization loss and the computational graph (e.g., neighborhood aggregation), IFRU jointly estimates the direct influence of unusable data on the optimization loss and the spillover influence on the computational graph to pursue complete unlearning. Furthermore, we propose an importance-based pruning algorithm to reduce the cost of the influence function. IFRU is harmless and applicable to mainstream differentiable models. Extensive experiments demonstrate that IFRU achieves more than 250 times acceleration compared to retraining-based methods with recommendation performance comparable to full retraining. Codes are available at https://github.com/baiyimeng/IFRU.
1811.00539
Colin Graber
Colin Graber, Ofer Meshi, Alexander Schwing
Deep Structured Prediction with Nonlinear Output Transformations
Appearing in NIPS 2018
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep structured models are widely used for tasks like semantic segmentation, where explicit correlations between variables provide important prior information which generally helps to reduce the data needs of deep nets. However, current deep structured models are restricted by oftentimes very local neighborhood structure, which cannot be increased for computational complexity reasons, and by the fact that the output configuration, or a representation thereof, cannot be transformed further. Very recent approaches which address those issues include graphical model inference inside deep nets so as to permit subsequent non-linear output space transformations. However, optimization of those formulations is challenging and not well understood. Here, we develop a novel model which generalizes existing approaches, such as structured prediction energy networks, and discuss a formulation which maintains applicability of existing inference techniques.
[ { "created": "Thu, 1 Nov 2018 17:59:58 GMT", "version": "v1" } ]
2018-11-02
[ [ "Graber", "Colin", "" ], [ "Meshi", "Ofer", "" ], [ "Schwing", "Alexander", "" ] ]
Deep structured models are widely used for tasks like semantic segmentation, where explicit correlations between variables provide important prior information which generally helps to reduce the data needs of deep nets. However, current deep structured models are restricted by oftentimes very local neighborhood structure, which cannot be increased for computational complexity reasons, and by the fact that the output configuration, or a representation thereof, cannot be transformed further. Very recent approaches which address those issues include graphical model inference inside deep nets so as to permit subsequent non-linear output space transformations. However, optimization of those formulations is challenging and not well understood. Here, we develop a novel model which generalizes existing approaches, such as structured prediction energy networks, and discuss a formulation which maintains applicability of existing inference techniques.
cs/0603039
Wei Dai
Wei Dai, Youjian Liu, Brian Rider
Quantization Bounds on Grassmann Manifolds and Applications to MIMO Communications
26 pages, 7 figures, submitted to IEEE Transactions on Information Theory in Aug, 2005
null
null
null
cs.IT math.IT
null
This paper considers the quantization problem on the Grassmann manifold \mathcal{G}_{n,p}, the set of all p-dimensional planes (through the origin) in the n-dimensional Euclidean space. The chief result is a closed-form formula for the volume of a metric ball in the Grassmann manifold when the radius is sufficiently small. This volume formula holds for Grassmann manifolds with arbitrary dimension n and p, while previous results pertained only to p=1, or a fixed p with asymptotically large n. Based on this result, several quantization bounds are derived for sphere packing and rate distortion tradeoff. We establish asymptotically equivalent lower and upper bounds for the rate distortion tradeoff. Since the upper bound is derived by constructing random codes, this result implies that the random codes are asymptotically optimal. The above results are also extended to the more general case, in which \mathcal{G}_{n,q} is quantized through a code in \mathcal{G}_{n,p}, where p and q are not necessarily the same. Finally, we discuss some applications of the derived results to multi-antenna communication systems.
[ { "created": "Thu, 9 Mar 2006 16:58:27 GMT", "version": "v1" }, { "created": "Tue, 16 May 2006 02:53:17 GMT", "version": "v2" } ]
2007-07-13
[ [ "Dai", "Wei", "" ], [ "Liu", "Youjian", "" ], [ "Rider", "Brian", "" ] ]
This paper considers the quantization problem on the Grassmann manifold \mathcal{G}_{n,p}, the set of all p-dimensional planes (through the origin) in the n-dimensional Euclidean space. The chief result is a closed-form formula for the volume of a metric ball in the Grassmann manifold when the radius is sufficiently small. This volume formula holds for Grassmann manifolds with arbitrary dimension n and p, while previous results pertained only to p=1, or a fixed p with asymptotically large n. Based on this result, several quantization bounds are derived for sphere packing and rate distortion tradeoff. We establish asymptotically equivalent lower and upper bounds for the rate distortion tradeoff. Since the upper bound is derived by constructing random codes, this result implies that the random codes are asymptotically optimal. The above results are also extended to the more general case, in which \mathcal{G}_{n,q} is quantized through a code in \mathcal{G}_{n,p}, where p and q are not necessarily the same. Finally, we discuss some applications of the derived results to multi-antenna communication systems.
2306.06462
Samyak Jain
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
CVPR Workshops 2021. First three authors contributed equally
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks. However, the robust accuracy of present state-of-the-art defenses is far from the requirements in critical applications such as robotics and autonomous navigation systems. Further, in practical use cases, network prediction alone might not suffice, and assignment of a confidence value for the prediction can prove crucial. In this work, we propose a generic method for introducing stochasticity in the network predictions, and utilize this for smoothing decision boundaries and rejecting low confidence predictions, thereby boosting the robustness on accepted samples. The proposed Feature Level Stochastic Smoothing based classification also results in a boost in robustness without rejection over existing adversarial training methods. Finally, we combine the proposed method with adversarial detection methods, to achieve the benefits of both approaches.
[ { "created": "Sat, 10 Jun 2023 15:11:24 GMT", "version": "v1" } ]
2023-06-13
[ [ "Addepalli", "Sravanti", "" ], [ "Jain", "Samyak", "" ], [ "Sriramanan", "Gaurang", "" ], [ "Babu", "R. Venkatesh", "" ] ]
Advances in adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks. However, the robust accuracy of present state-of-the-art defenses is far from the requirements in critical applications such as robotics and autonomous navigation systems. Further, in practical use cases, network prediction alone might not suffice, and assignment of a confidence value for the prediction can prove crucial. In this work, we propose a generic method for introducing stochasticity in the network predictions, and utilize this for smoothing decision boundaries and rejecting low confidence predictions, thereby boosting the robustness on accepted samples. The proposed Feature Level Stochastic Smoothing based classification also results in a boost in robustness without rejection over existing adversarial training methods. Finally, we combine the proposed method with adversarial detection methods, to achieve the benefits of both approaches.
2210.14257
Wenchuan Mu
Wenchuan Mu and Kwan Hui Lim
Revision for Concision: A Constrained Paraphrase Generation Task
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Academic writing should be concise as concise sentences better keep the readers' attention and convey meaning clearly. Writing concisely is challenging, for writers often struggle to revise their drafts. We introduce and formulate revising for concision as a natural language processing task at the sentence level. Revising for concision requires algorithms to use only necessary words to rewrite a sentence while preserving its meaning. The revised sentence should be evaluated according to its word choice, sentence structure, and organization. The revised sentence also needs to fulfil semantic retention and syntactic soundness. To aid these efforts, we curate and make available a benchmark parallel dataset that can depict revising for concision. The dataset contains 536 pairs of sentences before and after revising, and all pairs are collected from college writing centres. We also present and evaluate the approaches to this problem, which may assist researchers in this area.
[ { "created": "Tue, 25 Oct 2022 18:20:54 GMT", "version": "v1" } ]
2022-10-27
[ [ "Mu", "Wenchuan", "" ], [ "Lim", "Kwan Hui", "" ] ]
Academic writing should be concise as concise sentences better keep the readers' attention and convey meaning clearly. Writing concisely is challenging, for writers often struggle to revise their drafts. We introduce and formulate revising for concision as a natural language processing task at the sentence level. Revising for concision requires algorithms to use only necessary words to rewrite a sentence while preserving its meaning. The revised sentence should be evaluated according to its word choice, sentence structure, and organization. The revised sentence also needs to fulfil semantic retention and syntactic soundness. To aid these efforts, we curate and make available a benchmark parallel dataset that can depict revising for concision. The dataset contains 536 pairs of sentences before and after revising, and all pairs are collected from college writing centres. We also present and evaluate the approaches to this problem, which may assist researchers in this area.
2304.14507
Bala Murugan MS
Vrinda Agarwal, Aaron George Pichappa, Manideep Ramisetty, Bala Murugan MS, Manoj kumar Rajagopal
Suspicious Vehicle Detection Using Licence Plate Detection And Facial Feature Recognition
eight pages and three figures
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing need to strengthen vehicle safety and detection, the availability of pre-existing methods of catching criminals and identifying vehicles manually through the various traffic surveillance cameras is not only time-consuming but also inefficient. With the advancement of technology in every field, the use of real-time traffic surveillance models will help facilitate an easy approach. Keeping this in mind, the main focus of our paper is to develop a combined face recognition and number plate recognition model to ensure vehicle safety and real-time tracking of runaway criminals and stolen vehicles.
[ { "created": "Tue, 18 Apr 2023 06:44:08 GMT", "version": "v1" } ]
2023-05-01
[ [ "Agarwal", "Vrinda", "" ], [ "Pichappa", "Aaron George", "" ], [ "Ramisetty", "Manideep", "" ], [ "MS", "Bala Murugan", "" ], [ "Rajagopal", "Manoj kumar", "" ] ]
With the increasing need to strengthen vehicle safety and detection, the availability of pre-existing methods of catching criminals and identifying vehicles manually through the various traffic surveillance cameras is not only time-consuming but also inefficient. With the advancement of technology in every field, the use of real-time traffic surveillance models will help facilitate an easy approach. Keeping this in mind, the main focus of our paper is to develop a combined face recognition and number plate recognition model to ensure vehicle safety and real-time tracking of runaway criminals and stolen vehicles.
1002.4172
Aditya Mahajan
Ashutosh Nayyar, Aditya Mahajan and Demosthenis Teneketzis
Optimal Control Strategies in Delayed Sharing Information Structures
Submitted to IEEE Transactions on Automatic Control
null
10.1109/TAC.2010.2089381
null
cs.OH
http://creativecommons.org/licenses/by-nc-sa/3.0/
The $n$-step delayed sharing information structure is investigated. This information structure comprises $K$ controllers that share their information with a delay of $n$ time steps. This information structure is a link between the classical information structure, where information is shared perfectly between the controllers, and a non-classical information structure, where there is no "lateral" sharing of information among the controllers. Structural results for optimal control strategies for systems with such information structures are presented. A sequential methodology for finding the optimal strategies is also derived. The solution approach provides an insight for identifying structural results and sequential decomposition for general decentralized stochastic control problems.
[ { "created": "Mon, 22 Feb 2010 19:25:54 GMT", "version": "v1" } ]
2011-12-30
[ [ "Nayyar", "Ashutosh", "" ], [ "Mahajan", "Aditya", "" ], [ "Teneketzis", "Demosthenis", "" ] ]
The $n$-step delayed sharing information structure is investigated. This information structure comprises $K$ controllers that share their information with a delay of $n$ time steps. This information structure is a link between the classical information structure, where information is shared perfectly between the controllers, and a non-classical information structure, where there is no "lateral" sharing of information among the controllers. Structural results for optimal control strategies for systems with such information structures are presented. A sequential methodology for finding the optimal strategies is also derived. The solution approach provides an insight for identifying structural results and sequential decomposition for general decentralized stochastic control problems.
1504.07662
Dragomir Yankov
Dragomir Yankov, Pavel Berkhin, Lihong Li
Evaluation of Explore-Exploit Policies in Multi-result Ranking Systems
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the problem of using Explore-Exploit techniques to improve precision in multi-result ranking systems such as web search, query autocompletion and news recommendation. Adopting an exploration policy directly online, without understanding its impact on the production system, may have unwanted consequences - the system may sustain large losses, create user dissatisfaction, or collect exploration data which does not help improve ranking quality. An offline framework is thus necessary to let us decide which policy to apply in a production environment, and how, to ensure a positive outcome. Here, we describe such an offline framework. Using the framework, we study a popular exploration policy - Thompson sampling. We show that there are different ways of implementing it in multi-result ranking systems, each having different semantic interpretation and leading to different results in terms of sustained click-through-rate (CTR) loss and expected model improvement. In particular, we demonstrate that Thompson sampling can act as an online learner optimizing CTR, which in some cases can lead to an interesting outcome: lift in CTR during exploration. The observation is important for production systems as it suggests that one can get both valuable exploration data to improve ranking performance in the long run, and at the same time increase CTR while exploration lasts.
[ { "created": "Tue, 28 Apr 2015 21:16:07 GMT", "version": "v1" } ]
2015-04-30
[ [ "Yankov", "Dragomir", "" ], [ "Berkhin", "Pavel", "" ], [ "Li", "Lihong", "" ] ]
We analyze the problem of using Explore-Exploit techniques to improve precision in multi-result ranking systems such as web search, query autocompletion and news recommendation. Adopting an exploration policy directly online, without understanding its impact on the production system, may have unwanted consequences - the system may sustain large losses, create user dissatisfaction, or collect exploration data which does not help improve ranking quality. An offline framework is thus necessary to let us decide which policy to apply in a production environment, and how, to ensure a positive outcome. Here, we describe such an offline framework. Using the framework, we study a popular exploration policy - Thompson sampling. We show that there are different ways of implementing it in multi-result ranking systems, each having different semantic interpretation and leading to different results in terms of sustained click-through-rate (CTR) loss and expected model improvement. In particular, we demonstrate that Thompson sampling can act as an online learner optimizing CTR, which in some cases can lead to an interesting outcome: lift in CTR during exploration. The observation is important for production systems as it suggests that one can get both valuable exploration data to improve ranking performance in the long run, and at the same time increase CTR while exploration lasts.
2401.02173
Weihao Li
Weihao Li, Lei Tan, Pingyang Dai, Yan Zhang
Prompt Decoupling for Text-to-Image Person Re-identification
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-image person re-identification (TIReID) aims to retrieve the target person from an image gallery via a textual description query. Recently, pre-trained vision-language models like CLIP have attracted significant attention and have been widely utilized for this task due to their robust capacity for semantic concept learning and rich multi-modal knowledge. However, recent CLIP-based TIReID methods commonly rely on direct fine-tuning of the entire network to adapt the CLIP model for the TIReID task. Although these methods show competitive performance on this topic, they are suboptimal as they necessitate simultaneous domain adaptation and task adaptation. To address this issue, we attempt to decouple these two processes during the training stage. Specifically, we introduce the prompt tuning strategy to enable domain adaptation and propose a two-stage training approach to disentangle domain adaptation from task adaptation. In the first stage, we freeze the two encoders from CLIP and solely focus on optimizing the prompts to alleviate domain gap between the original training data of CLIP and downstream tasks. In the second stage, we maintain the fixed prompts and fine-tune the CLIP model to prioritize capturing fine-grained information, which is more suitable for TIReID task. Finally, we evaluate the effectiveness of our method on three widely used datasets. Compared to the directly fine-tuned approach, our method achieves significant improvements.
[ { "created": "Thu, 4 Jan 2024 09:55:15 GMT", "version": "v1" } ]
2024-01-05
[ [ "Li", "Weihao", "" ], [ "Tan", "Lei", "" ], [ "Dai", "Pingyang", "" ], [ "Zhang", "Yan", "" ] ]
Text-to-image person re-identification (TIReID) aims to retrieve the target person from an image gallery via a textual description query. Recently, pre-trained vision-language models like CLIP have attracted significant attention and have been widely utilized for this task due to their robust capacity for semantic concept learning and rich multi-modal knowledge. However, recent CLIP-based TIReID methods commonly rely on direct fine-tuning of the entire network to adapt the CLIP model for the TIReID task. Although these methods show competitive performance on this topic, they are suboptimal as they necessitate simultaneous domain adaptation and task adaptation. To address this issue, we attempt to decouple these two processes during the training stage. Specifically, we introduce the prompt tuning strategy to enable domain adaptation and propose a two-stage training approach to disentangle domain adaptation from task adaptation. In the first stage, we freeze the two encoders from CLIP and solely focus on optimizing the prompts to alleviate domain gap between the original training data of CLIP and downstream tasks. In the second stage, we maintain the fixed prompts and fine-tune the CLIP model to prioritize capturing fine-grained information, which is more suitable for TIReID task. Finally, we evaluate the effectiveness of our method on three widely used datasets. Compared to the directly fine-tuned approach, our method achieves significant improvements.
2104.07820
Muhammad Aurangzeb Ahmad
Aloysius Lim, Ashish Singh, Jody Chiam, Carly Eckert, Vikas Kumar, Muhammad Aurangzeb Ahmad, Ankur Teredesai
Machine Learning Approaches for Type 2 Diabetes Prediction and Care Management
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Prediction of diabetes and its various complications has been studied in a number of settings, but a comprehensive overview of the problem setting for diabetes prediction and care management has not been addressed in the literature. In this document we seek to remedy this omission in the literature with an encompassing overview of diabetes complication prediction as well as situating this problem in the context of real-world healthcare management. We illustrate various problems encountered in real-world clinical scenarios via our own experience with building and deploying such models. In this manuscript we illustrate a Machine Learning (ML) framework for addressing the problem of predicting Type 2 Diabetes Mellitus (T2DM) together with a solution for risk stratification, intervention and management. These ML models align with how physicians think about disease management and mitigation, which comprises these four steps: Identify, Stratify, Engage, Measure.
[ { "created": "Thu, 15 Apr 2021 23:38:39 GMT", "version": "v1" }, { "created": "Thu, 29 Apr 2021 00:11:58 GMT", "version": "v2" } ]
2021-04-30
[ [ "Lim", "Aloysius", "" ], [ "Singh", "Ashish", "" ], [ "Chiam", "Jody", "" ], [ "Eckert", "Carly", "" ], [ "Kumar", "Vikas", "" ], [ "Ahmad", "Muhammad Aurangzeb", "" ], [ "Teredesai", "Ankur", "" ] ]
Prediction of diabetes and its various complications has been studied in a number of settings, but a comprehensive overview of the problem setting for diabetes prediction and care management has not been addressed in the literature. In this document we seek to remedy this omission in the literature with an encompassing overview of diabetes complication prediction as well as situating this problem in the context of real-world healthcare management. We illustrate various problems encountered in real-world clinical scenarios via our own experience with building and deploying such models. In this manuscript we illustrate a Machine Learning (ML) framework for addressing the problem of predicting Type 2 Diabetes Mellitus (T2DM) together with a solution for risk stratification, intervention and management. These ML models align with how physicians think about disease management and mitigation, which comprises these four steps: Identify, Stratify, Engage, Measure.
2303.11846
Hongbin Fang
Qinyan Zhou, Hongbin Fang, Zhihai Bi, Jian Xu
Dynamic models for Planar Peristaltic Locomotion of a Metameric Earthworm-like Robot
12 pages, 4 figures
null
null
null
cs.RO physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of versatile robots capable of traversing challenging and irregular environments is of increasing interest in the field of robotics, and metameric robots have been identified as a promising solution due to their slender, deformable bodies. Inspired by the effective locomotion of earthworms, earthworm-like robots capable of both rectilinear and planar locomotion have been designed and prototyped. While much research has focused on developing kinematic models to describe the planar locomotion of earthworm-like robots, the authors argue that the development of dynamic models is critical to improving the accuracy and efficiency of these robots. A comprehensive analysis of the dynamics of a metameric earthworm-like robot capable of planar motion is presented in this work. The model takes into account the complex interactions between the robot's deformable body and the forces acting on it and draws on the methods previously used to develop mathematical models of snake-like robots. The proposed model represents a significant advancement in the field of metameric robotics and has the potential to enhance the performance of earthworm-like robots in a variety of challenging environments, such as underground pipes and tunnels, and serves as a foundation for future research into the dynamics of soft-bodied robots.
[ { "created": "Tue, 21 Mar 2023 13:43:37 GMT", "version": "v1" } ]
2023-03-22
[ [ "Zhou", "Qinyan", "" ], [ "Fang", "Hongbin", "" ], [ "Bi", "Zhihai", "" ], [ "Xu", "Jian", "" ] ]
The development of versatile robots capable of traversing challenging and irregular environments is of increasing interest in the field of robotics, and metameric robots have been identified as a promising solution due to their slender, deformable bodies. Inspired by the effective locomotion of earthworms, earthworm-like robots capable of both rectilinear and planar locomotion have been designed and prototyped. While much research has focused on developing kinematic models to describe the planar locomotion of earthworm-like robots, the authors argue that the development of dynamic models is critical to improving the accuracy and efficiency of these robots. A comprehensive analysis of the dynamics of a metameric earthworm-like robot capable of planar motion is presented in this work. The model takes into account the complex interactions between the robot's deformable body and the forces acting on it and draws on the methods previously used to develop mathematical models of snake-like robots. The proposed model represents a significant advancement in the field of metameric robotics and has the potential to enhance the performance of earthworm-like robots in a variety of challenging environments, such as underground pipes and tunnels, and serves as a foundation for future research into the dynamics of soft-bodied robots.
1905.10335
Xiyang Liu
Xiyang Liu, Sewoong Oh
Minimax Rates of Estimating Approximate Differential Privacy
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy has become a widely accepted notion of privacy, leading to the introduction and deployment of numerous privatization mechanisms. However, ensuring the privacy guarantee is an error-prone process, both in designing mechanisms and in implementing them. Both types of errors would be greatly reduced if we had a data-driven approach to verify privacy guarantees from black-box access to a mechanism. We pose this as a property estimation problem and study the fundamental trade-offs between the accuracy of the estimated privacy guarantees and the number of samples required. We introduce a novel estimator that uses polynomial approximation of a carefully chosen degree to optimally trade off bias and variance. With $n$ samples, we show that this estimator achieves the performance of a straightforward plug-in estimator with $n \ln n$ samples, a phenomenon referred to as effective sample size amplification. The minimax optimality of the proposed estimator is proved by comparing it to a matching fundamental lower bound.
[ { "created": "Fri, 24 May 2019 17:00:59 GMT", "version": "v1" } ]
2019-05-27
[ [ "Liu", "Xiyang", "" ], [ "Oh", "Sewoong", "" ] ]
Differential privacy has become a widely accepted notion of privacy, leading to the introduction and deployment of numerous privatization mechanisms. However, ensuring the privacy guarantee is an error-prone process, both in designing mechanisms and in implementing them. Both types of errors would be greatly reduced if we had a data-driven approach to verify privacy guarantees from black-box access to a mechanism. We pose this as a property estimation problem and study the fundamental trade-offs between the accuracy of the estimated privacy guarantees and the number of samples required. We introduce a novel estimator that uses polynomial approximation of a carefully chosen degree to optimally trade off bias and variance. With $n$ samples, we show that this estimator achieves the performance of a straightforward plug-in estimator with $n \ln n$ samples, a phenomenon referred to as effective sample size amplification. The minimax optimality of the proposed estimator is proved by comparing it to a matching fundamental lower bound.
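As a point of reference for the sample-size amplification claim, a hedged sketch of the straightforward plug-in baseline (an illustrative estimator, not the paper's polynomial-approximation method): estimate the pure-DP parameter from black-box samples by taking the worst observed log-likelihood ratio between outputs on neighboring datasets.

```python
from collections import Counter
from math import log

def plugin_epsilon(samples_d, samples_dprime):
    # Naive plug-in estimate of epsilon: max over outcomes observed under
    # both neighboring inputs of |log(p_hat / q_hat)|. The paper's point is
    # that estimators of this kind need ~n log n samples to match a
    # polynomial-approximation estimator that uses only n.
    n, m = len(samples_d), len(samples_dprime)
    p, q = Counter(samples_d), Counter(samples_dprime)
    eps = 0.0
    for o in set(p) & set(q):
        eps = max(eps, abs(log((p[o] / n) / (q[o] / m))))
    return eps

# Toy mechanism outputs on two neighboring databases:
eps_hat = plugin_epsilon(["a"] * 9 + ["b"], ["a"] * 5 + ["b"] * 5)
```

The worst ratio here comes from outcome "b" (0.1 vs 0.5), giving an estimate of log 5; outcomes seen under only one input are skipped, one of the sources of bias the paper addresses.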
2104.05978
Djamila Aouada
Mohamed Adel Musallam, Kassem Al Ismaeil, Oyebade Oyedotun, Marcos Damian Perez, Michel Poucet, Djamila Aouada
SPARK: SPAcecraft Recognition leveraging Knowledge of Space Environment
5 pages, 7 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper proposes the SPARK dataset, a new and unique multi-modal image dataset of space objects. Image-based object recognition is an important component of Space Situational Awareness, especially for applications such as on-orbit servicing, active debris removal, and satellite formation. However, the lack of sufficient annotated space data has limited research efforts in developing data-driven spacecraft recognition approaches. The SPARK dataset has been generated under a realistic space simulation environment, with a large diversity of sensing conditions for different orbital scenarios. It provides about 150k images per modality, RGB and depth, and 11 classes of spacecraft and debris. This dataset offers an opportunity to benchmark and further develop object recognition, classification, and detection algorithms, as well as multi-modal RGB-Depth approaches, under space sensing conditions. A preliminary experimental evaluation validates the relevance of the data and highlights challenging scenarios specific to the space environment.
[ { "created": "Tue, 13 Apr 2021 07:16:55 GMT", "version": "v1" }, { "created": "Wed, 14 Apr 2021 01:58:18 GMT", "version": "v2" } ]
2021-04-15
[ [ "Musallam", "Mohamed Adel", "" ], [ "Ismaeil", "Kassem Al", "" ], [ "Oyedotun", "Oyebade", "" ], [ "Perez", "Marcos Damian", "" ], [ "Poucet", "Michel", "" ], [ "Aouada", "Djamila", "" ] ]
This paper proposes the SPARK dataset, a new and unique multi-modal image dataset of space objects. Image-based object recognition is an important component of Space Situational Awareness, especially for applications such as on-orbit servicing, active debris removal, and satellite formation. However, the lack of sufficient annotated space data has limited research efforts in developing data-driven spacecraft recognition approaches. The SPARK dataset has been generated under a realistic space simulation environment, with a large diversity of sensing conditions for different orbital scenarios. It provides about 150k images per modality, RGB and depth, and 11 classes of spacecraft and debris. This dataset offers an opportunity to benchmark and further develop object recognition, classification, and detection algorithms, as well as multi-modal RGB-Depth approaches, under space sensing conditions. A preliminary experimental evaluation validates the relevance of the data and highlights challenging scenarios specific to the space environment.
1904.06253
Nicolas Couellan
Nicolas Couellan
The coupling effect of Lipschitz regularization in deep neural networks
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the robustness of deep feed-forward neural networks when input data are subject to random uncertainties. More specifically, we consider regularization of the network by its Lipschitz constant and emphasize its role. We highlight the fact that this regularization is not only a way to control the magnitude of the weights but also has a coupling effect on the network weights across the layers. We claim and show evidence on a dataset that this coupling effect brings a tradeoff between robustness and expressiveness of the network. This suggests that Lipschitz regularization should be carefully implemented so as to maintain coupling across layers.
[ { "created": "Fri, 12 Apr 2019 14:36:21 GMT", "version": "v1" } ]
2019-04-15
[ [ "Couellan", "Nicolas", "" ] ]
We investigate the robustness of deep feed-forward neural networks when input data are subject to random uncertainties. More specifically, we consider regularization of the network by its Lipschitz constant and emphasize its role. We highlight the fact that this regularization is not only a way to control the magnitude of the weights but also has a coupling effect on the network weights across the layers. We claim and show evidence on a dataset that this coupling effect brings a tradeoff between robustness and expressiveness of the network. This suggests that Lipschitz regularization should be carefully implemented so as to maintain coupling across layers.
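The coupling effect the abstract describes can be illustrated with a generic sketch (not code from the paper): the standard Lipschitz upper bound of a feed-forward network with 1-Lipschitz activations is the product of the layers' spectral norms, so rescaling one layer can be absorbed by another without changing the bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def lipschitz_upper_bound(weights):
    # Product of spectral norms: an upper bound on the Lipschitz constant
    # of a feed-forward net with 1-Lipschitz activations (e.g. ReLU).
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

base = lipschitz_upper_bound([W1, W2])
# Scaling one layer up and the next down leaves the bound unchanged:
# the layers are coupled through the product, not constrained individually.
rescaled = lipschitz_upper_bound([2.0 * W1, 0.5 * W2])
```

Regularizing this product therefore constrains the layers jointly, which is the coupling the paper argues should be maintained.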
1402.1241
Shafi'i Muhammad Abdulhamid Mr
Shafii Muhammad Abdulhamid, Ismaila Idris
Design Evaluation of Some Nigerian University Portals: A Programmer's Point of View
8 pages. Computer Science and Telecommunications 2010
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, Nigerian universities feel pressured to get a portal up and running, as dynamic, individualized web systems have become essential for institutions of higher learning. As a result, most Nigerian university portals do not measure up to standard. In this paper, ten Nigerian university portals were selected and their designs evaluated against international best practices. The results were revealing.
[ { "created": "Thu, 6 Feb 2014 04:25:53 GMT", "version": "v1" } ]
2014-02-07
[ [ "Abdulhamid", "Shafii Muhammad", "" ], [ "Idris", "Ismaila", "" ] ]
Today, Nigerian universities feel pressured to get a portal up and running, as dynamic, individualized web systems have become essential for institutions of higher learning. As a result, most Nigerian university portals do not measure up to standard. In this paper, ten Nigerian university portals were selected and their designs evaluated against international best practices. The results were revealing.
1506.08506
Jeremy Kepner
Andrew Prout, Jeremy Kepner, Peter Michaleas, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Lauren Edwards, Vijay Gadepally, Matthew Hubbell, Julie Mullen, Antonio Rosa, Charles Yee, Albert Reuther
Enabling On-Demand Database Computing with MIT SuperCloud Database Management System
6 pages; accepted to IEEE High Performance Extreme Computing (HPEC) conference 2015. arXiv admin note: text overlap with arXiv:1406.4923
null
10.1109/HPEC.2015.7322482
null
cs.DB cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The MIT SuperCloud database management system allows for rapid creation and flexible execution of a variety of the latest scientific databases, including Apache Accumulo and SciDB. It is designed to permit these databases to run on a High Performance Computing Cluster (HPCC) platform as seamlessly as any other HPCC job. It ensures the seamless migration of the databases to the resources assigned by the HPCC scheduler and centralized storage of the database files when not running. It also permits snapshotting of databases to allow researchers to experiment and push the limits of the technology without concerns for data or productivity loss if the database becomes unstable.
[ { "created": "Mon, 29 Jun 2015 04:47:20 GMT", "version": "v1" } ]
2016-06-21
[ [ "Prout", "Andrew", "" ], [ "Kepner", "Jeremy", "" ], [ "Michaleas", "Peter", "" ], [ "Arcand", "William", "" ], [ "Bestor", "David", "" ], [ "Bergeron", "Bill", "" ], [ "Byun", "Chansup", "" ], [ "Edwards", "Lauren", "" ], [ "Gadepally", "Vijay", "" ], [ "Hubbell", "Matthew", "" ], [ "Mullen", "Julie", "" ], [ "Rosa", "Antonio", "" ], [ "Yee", "Charles", "" ], [ "Reuther", "Albert", "" ] ]
The MIT SuperCloud database management system allows for rapid creation and flexible execution of a variety of the latest scientific databases, including Apache Accumulo and SciDB. It is designed to permit these databases to run on a High Performance Computing Cluster (HPCC) platform as seamlessly as any other HPCC job. It ensures the seamless migration of the databases to the resources assigned by the HPCC scheduler and centralized storage of the database files when not running. It also permits snapshotting of databases to allow researchers to experiment and push the limits of the technology without concerns for data or productivity loss if the database becomes unstable.
2203.02077
Shahbaz Rezaei
Guoyao Li, Shahbaz Rezaei, and Xin Liu
User-Level Membership Inference Attack against Metric Embedding Learning
null
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Membership inference (MI) determines if a sample was part of a victim model's training set. Recent developments in MI attacks focus on record-level membership inference, which limits their application in many real-world scenarios. For example, in the person re-identification task, the attacker (or investigator) is interested in determining whether a user's images have been used during training. However, the exact training images might not be accessible to the attacker. In this paper, we develop a user-level MI attack whose goal is to determine whether any sample from the target user has been used during training, even when no exact training sample is available to the attacker. We focus on metric embedding learning due to its dominance in person re-identification, where a user-level MI attack is more sensible. We conduct an extensive evaluation on several datasets and show that our approach achieves high accuracy on the user-level MI task.
[ { "created": "Fri, 4 Mar 2022 00:49:42 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2022 23:59:03 GMT", "version": "v2" } ]
2022-04-27
[ [ "Li", "Guoyao", "" ], [ "Rezaei", "Shahbaz", "" ], [ "Liu", "Xin", "" ] ]
Membership inference (MI) determines if a sample was part of a victim model's training set. Recent developments in MI attacks focus on record-level membership inference, which limits their application in many real-world scenarios. For example, in the person re-identification task, the attacker (or investigator) is interested in determining whether a user's images have been used during training. However, the exact training images might not be accessible to the attacker. In this paper, we develop a user-level MI attack whose goal is to determine whether any sample from the target user has been used during training, even when no exact training sample is available to the attacker. We focus on metric embedding learning due to its dominance in person re-identification, where a user-level MI attack is more sensible. We conduct an extensive evaluation on several datasets and show that our approach achieves high accuracy on the user-level MI task.
1110.3017
Olaf Hartig
Juan Sequeda and Olaf Hartig
Towards a Query Language for the Web of Data (A Vision Paper)
2 pages
null
null
null
cs.DB cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on querying the Web of Data is still in its infancy. In this paper, we provide an initial set of general features that we envision should be considered in order to define a query language for the Web of Data. Furthermore, for each of these features, we pose questions that have not been addressed before in the context of querying the Web of Data. We believe that addressing these questions and studying these features may guide the next 10 years of research on the Web of Data.
[ { "created": "Thu, 13 Oct 2011 18:01:39 GMT", "version": "v1" } ]
2011-10-14
[ [ "Sequeda", "Juan", "" ], [ "Hartig", "Olaf", "" ] ]
Research on querying the Web of Data is still in its infancy. In this paper, we provide an initial set of general features that we envision should be considered in order to define a query language for the Web of Data. Furthermore, for each of these features, we pose questions that have not been addressed before in the context of querying the Web of Data. We believe that addressing these questions and studying these features may guide the next 10 years of research on the Web of Data.
2011.13511
Sina Amini Niaki
Sina Amini Niaki, Ehsan Haghighat, Trevor Campbell, Anoush Poursartip, Reza Vaziri
Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture
null
CMAME 384 (2021) 113959
10.1016/j.cma.2021.113959
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a Physics-Informed Neural Network (PINN) to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave. In particular, we solve the governing coupled system of differential equations -- including conductive heat transfer and resin cure kinetics -- by optimizing the parameters of a deep neural network (DNN) using a physics-based loss function. To account for the vastly different behaviour of thermal conduction and resin cure, we design a PINN consisting of two disconnected subnetworks, and develop a sequential training algorithm that mitigates instability present in traditional training methods. Further, we incorporate explicit discontinuities into the DNN at the composite-tool interface and enforce known physical behaviour directly in the loss function to improve the solution near the interface. We train the PINN with a technique that automatically adapts the weights on the loss terms corresponding to PDE, boundary, interface, and initial conditions. Finally, we demonstrate that one can include problem parameters as an input to the model -- resulting in a surrogate that provides real-time simulation for a range of problem settings -- and that one can use transfer learning to significantly reduce the training time for problem settings similar to that of an initial trained model. The performance of the proposed PINN is demonstrated in multiple scenarios with different material thicknesses and thermal boundary conditions.
[ { "created": "Fri, 27 Nov 2020 00:56:15 GMT", "version": "v1" }, { "created": "Mon, 14 Jun 2021 22:11:45 GMT", "version": "v2" } ]
2021-06-16
[ [ "Niaki", "Sina Amini", "" ], [ "Haghighat", "Ehsan", "" ], [ "Campbell", "Trevor", "" ], [ "Poursartip", "Anoush", "" ], [ "Vaziri", "Reza", "" ] ]
We present a Physics-Informed Neural Network (PINN) to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave. In particular, we solve the governing coupled system of differential equations -- including conductive heat transfer and resin cure kinetics -- by optimizing the parameters of a deep neural network (DNN) using a physics-based loss function. To account for the vastly different behaviour of thermal conduction and resin cure, we design a PINN consisting of two disconnected subnetworks, and develop a sequential training algorithm that mitigates instability present in traditional training methods. Further, we incorporate explicit discontinuities into the DNN at the composite-tool interface and enforce known physical behaviour directly in the loss function to improve the solution near the interface. We train the PINN with a technique that automatically adapts the weights on the loss terms corresponding to PDE, boundary, interface, and initial conditions. Finally, we demonstrate that one can include problem parameters as an input to the model -- resulting in a surrogate that provides real-time simulation for a range of problem settings -- and that one can use transfer learning to significantly reduce the training time for problem settings similar to that of an initial trained model. The performance of the proposed PINN is demonstrated in multiple scenarios with different material thicknesses and thermal boundary conditions.
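For orientation, the governing coupled system mentioned here typically takes a form like the following one-dimensional composite cure model (a generic textbook form, not necessarily the paper's exact equations), where $T$ is temperature and $\alpha$ the degree of cure:

```latex
\rho c_p \frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left(k \frac{\partial T}{\partial x}\right)
  + \rho_r v_r H_r \frac{\mathrm{d}\alpha}{\mathrm{d}t},
\qquad
\frac{\mathrm{d}\alpha}{\mathrm{d}t}
  = A \exp\!\left(-\frac{\Delta E}{R T}\right) f(\alpha)
```

A PINN for such a system penalizes the residuals of both equations, together with boundary, interface, and initial-condition terms, in its loss function.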
cs/0611157
Mira Gonen
Reuven Cohen, Mira Gonen, Avishai Wool
Bounding the Bias of Tree-Like Sampling in IP Topologies
12 pages, 1 figure
null
null
null
cs.NI
null
It is widely believed that the Internet's AS-graph degree distribution obeys a power-law form. Most of the evidence showing the power-law distribution is based on BGP data. However, it was recently argued that since BGP collects data in a tree-like fashion, it only produces a sample of the degree distribution, and this sample may be biased. This argument was backed by simulation data and mathematical analysis, which demonstrated that under certain conditions a tree sampling procedure can produce an artificial power-law in the degree distribution. Thus, although the observed degree distribution of the AS-graph follows a power-law, this phenomenon may be an artifact of the sampling process. In this work we provide some evidence to the contrary. We show, by analysis and simulation, that when the underlying graph degree distribution obeys a power-law with an exponent larger than 2, a tree-like sampling process produces a negligible bias in the sampled degree distribution. Furthermore, recent data collected from the DIMES project, which is not based on BGP sampling, indicates that the underlying AS-graph indeed obeys a power-law degree distribution with an exponent larger than 2. By combining this empirical data with our analysis, we conclude that the bias in the degree distribution calculated from BGP data is negligible.
[ { "created": "Thu, 30 Nov 2006 08:59:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cohen", "Reuven", "" ], [ "Gonen", "Mira", "" ], [ "Wool", "Avishai", "" ] ]
It is widely believed that the Internet's AS-graph degree distribution obeys a power-law form. Most of the evidence showing the power-law distribution is based on BGP data. However, it was recently argued that since BGP collects data in a tree-like fashion, it only produces a sample of the degree distribution, and this sample may be biased. This argument was backed by simulation data and mathematical analysis, which demonstrated that under certain conditions a tree sampling procedure can produce an artificial power-law in the degree distribution. Thus, although the observed degree distribution of the AS-graph follows a power-law, this phenomenon may be an artifact of the sampling process. In this work we provide some evidence to the contrary. We show, by analysis and simulation, that when the underlying graph degree distribution obeys a power-law with an exponent larger than 2, a tree-like sampling process produces a negligible bias in the sampled degree distribution. Furthermore, recent data collected from the DIMES project, which is not based on BGP sampling, indicates that the underlying AS-graph indeed obeys a power-law degree distribution with an exponent larger than 2. By combining this empirical data with our analysis, we conclude that the bias in the degree distribution calculated from BGP data is negligible.
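A toy illustration (not from the paper) of why tree-like probing can bias observed degrees: a BFS tree over n nodes keeps only n-1 of the graph's edges, so densely connected nodes appear sparser than they are.

```python
from collections import deque

def bfs_tree_degrees(adj, root):
    # Keep only the edges of a BFS tree rooted at `root`, then read off
    # each discovered node's degree in that tree.
    parent = {root: None}
    tree_deg = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree_deg[u] += 1   # edge u -> v enters the tree
                tree_deg[v] = 1    # v's edge back to its parent
                q.append(v)
    return tree_deg

# Complete graph K4: every true degree is 3, but the BFS tree keeps 3 of 6 edges.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
sampled = bfs_tree_degrees(adj, 0)
```

The paper's claim is that for power-law graphs with exponent above 2, this kind of undercounting leaves the fitted exponent essentially unchanged.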
1704.04761
He Jiang
Jifeng Xuan, He Jiang, Yan Hu, Zhilei Ren, Weiqin Zou, Zhongxuan Luo, Xindong Wu
Towards Effective Bug Triage with Software Data Reduction Techniques
17 pages, 7 figures
IEEE Transactions on Knowledge and Data Engineering, 2015
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software companies spend over 45 percent of their cost on dealing with software bugs. An inevitable step in fixing bugs is bug triage, which aims to correctly assign a developer to a new bug. To decrease the time spent on manual work, text classification techniques are applied to conduct automatic bug triage. In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce the scale and improve the quality of bug data. We combine instance selection with feature selection to simultaneously reduce the data scale on the bug dimension and the word dimension. To determine the order of applying instance selection and feature selection, we extract attributes from historical bug data sets and build a predictive model for a new bug data set. We empirically investigate the performance of data reduction on a total of 600,000 bug reports from two large open source projects, namely Eclipse and Mozilla. The results show that our data reduction can effectively reduce the data scale and improve the accuracy of bug triage. Our work provides an approach to leveraging data-processing techniques to form reduced, high-quality bug data in software development and maintenance.
[ { "created": "Sun, 16 Apr 2017 12:25:34 GMT", "version": "v1" } ]
2017-04-18
[ [ "Xuan", "Jifeng", "" ], [ "Jiang", "He", "" ], [ "Hu", "Yan", "" ], [ "Ren", "Zhilei", "" ], [ "Zou", "Weiqin", "" ], [ "Luo", "Zhongxuan", "" ], [ "Wu", "Xindong", "" ] ]
Software companies spend over 45 percent of their cost on dealing with software bugs. An inevitable step in fixing bugs is bug triage, which aims to correctly assign a developer to a new bug. To decrease the time spent on manual work, text classification techniques are applied to conduct automatic bug triage. In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce the scale and improve the quality of bug data. We combine instance selection with feature selection to simultaneously reduce the data scale on the bug dimension and the word dimension. To determine the order of applying instance selection and feature selection, we extract attributes from historical bug data sets and build a predictive model for a new bug data set. We empirically investigate the performance of data reduction on a total of 600,000 bug reports from two large open source projects, namely Eclipse and Mozilla. The results show that our data reduction can effectively reduce the data scale and improve the accuracy of bug triage. Our work provides an approach to leveraging data-processing techniques to form reduced, high-quality bug data in software development and maintenance.
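A minimal toy sketch of combining the two reductions on bag-of-words bug reports (illustrative only: the function names, the data, and the fixed feature-selection-then-instance-selection order are assumptions here, whereas the paper learns the order from historical data).

```python
from collections import Counter

def feature_selection(reports, k):
    # Word dimension: keep only the k most frequent words across all reports.
    counts = Counter(w for words, _ in reports for w in words)
    keep = {w for w, _ in counts.most_common(k)}
    return [(sorted(set(words) & keep), dev) for words, dev in reports]

def instance_selection(reports):
    # Bug dimension: drop reports whose reduced word set duplicates an earlier one.
    seen, kept = set(), []
    for words, dev in reports:
        key = tuple(words)
        if key not in seen:
            seen.add(key)
            kept.append((words, dev))
    return kept

# Hypothetical (word list, assigned developer) pairs:
bugs = [(["crash", "null", "ui"], "alice"),
        (["crash", "null"], "alice"),
        (["slow", "query", "db"], "bob")]

reduced = instance_selection(feature_selection(bugs, k=2))
```

With k=2 the vocabulary shrinks to {"crash", "null"}, after which the first two reports collapse into one, reducing both dimensions at once.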
2202.09930
Justin Kottinger
Justin Kottinger, Shaull Almagor, Morteza Lahijanian
Conflict-Based Search for Explainable Multi-Agent Path Finding
To appear in International Conference on Automated Planning and Scheduling (ICAPS 2022), June 2022
2022 Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS)
10.1609/icaps.v32i1.19859
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Multi-Agent Path Finding (MAPF) problem, the goal is to find non-colliding paths for agents in an environment, such that each agent reaches its goal from its initial location. In safety-critical applications, a human supervisor may want to verify that the plan is indeed collision-free. To this end, a recent work introduces a notion of explainability for MAPF based on a visualization of the plan as a short sequence of images representing time segments, where in each time segment the trajectories of the agents are disjoint. Then, the explainable MAPF problem asks for a set of non-colliding paths that admits a short-enough explanation. Explainable MAPF adds a new difficulty to MAPF, in that it is NP-hard with respect to the size of the environment, and not just the number of agents. Thus, traditional MAPF algorithms are not equipped to directly handle explainable-MAPF. In this work, we adapt Conflict Based Search (CBS), a well-studied algorithm for MAPF, to handle explainable MAPF. We show how to add explainability constraints on top of the standard CBS tree and its underlying A* search. We examine the usefulness of this approach and, in particular, the tradeoff between planning time and explainability.
[ { "created": "Sun, 20 Feb 2022 23:13:14 GMT", "version": "v1" }, { "created": "Mon, 4 Apr 2022 17:18:30 GMT", "version": "v2" } ]
2023-03-15
[ [ "Kottinger", "Justin", "" ], [ "Almagor", "Shaull", "" ], [ "Lahijanian", "Morteza", "" ] ]
In the Multi-Agent Path Finding (MAPF) problem, the goal is to find non-colliding paths for agents in an environment, such that each agent reaches its goal from its initial location. In safety-critical applications, a human supervisor may want to verify that the plan is indeed collision-free. To this end, a recent work introduces a notion of explainability for MAPF based on a visualization of the plan as a short sequence of images representing time segments, where in each time segment the trajectories of the agents are disjoint. Then, the explainable MAPF problem asks for a set of non-colliding paths that admits a short-enough explanation. Explainable MAPF adds a new difficulty to MAPF, in that it is NP-hard with respect to the size of the environment, and not just the number of agents. Thus, traditional MAPF algorithms are not equipped to directly handle explainable-MAPF. In this work, we adapt Conflict Based Search (CBS), a well-studied algorithm for MAPF, to handle explainable MAPF. We show how to add explainability constraints on top of the standard CBS tree and its underlying A* search. We examine the usefulness of this approach and, in particular, the tradeoff between planning time and explainability.
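The explanation length of a given plan can be sketched greedily (a simplified reading of the segment notion from the abstract, not the paper's CBS adaptation): cut the time horizon whenever an agent enters a vertex that another agent has already used within the current segment.

```python
def explanation_segments(paths):
    # paths[i][t] is the vertex of agent i at time t; the plan is assumed
    # collision-free. Returns (start, end) time segments in which the
    # agents' trajectories are pairwise vertex-disjoint.
    n, T = len(paths), len(paths[0])
    cuts, start = [], 0
    visited = [set() for _ in range(n)]
    for t in range(T):
        conflict = any(paths[i][t] in visited[j]
                       for i in range(n) for j in range(n) if i != j)
        if conflict:
            cuts.append((start, t - 1))   # close the segment before time t
            start = t
            visited = [set() for _ in range(n)]
        for i in range(n):
            visited[i].add(paths[i][t])
    cuts.append((start, T - 1))
    return cuts

# Two agents passing through a shared corridor need two images to explain:
segments = explanation_segments([[0, 1, 2, 3], [3, 2, 1, 0]])
```

Fewer segments mean a shorter visual explanation; the paper's contribution is searching for plans that keep this number small, not just measuring it after the fact.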
1202.6517
Michal Ciesla
Michal Ciesla, Przemyslaw Koziol
Eye Pupil Location Using Webcam
11 pages, 11 figures
null
null
null
cs.HC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three different algorithms used for eye pupil location were described and tested. The algorithm efficiency comparison was based on human face images taken from the BioID database. Moreover, all the eye localisation methods were implemented in a dedicated application supporting eye-movement-based computer control. In this case, human face images were acquired by a webcam and processed in real time.
[ { "created": "Wed, 29 Feb 2012 11:17:10 GMT", "version": "v1" } ]
2012-03-01
[ [ "Ciesla", "Michal", "" ], [ "Koziol", "Przemyslaw", "" ] ]
Three different algorithms used for eye pupil location were described and tested. The algorithm efficiency comparison was based on human face images taken from the BioID database. Moreover, all the eye localisation methods were implemented in a dedicated application supporting eye-movement-based computer control. In this case, human face images were acquired by a webcam and processed in real time.
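The three algorithms are not specified in the abstract; as a hedged stand-in, the simplest intensity-based pupil locator thresholds the darkest pixels of the eye region and takes their centroid.

```python
import numpy as np

def pupil_center(gray):
    # Naive pupil locator: the pupil is assumed to be the darkest region,
    # so threshold the darkest ~2% of pixels and return their centroid.
    thresh = np.percentile(gray, 2)
    ys, xs = np.nonzero(gray <= thresh)
    return ys.mean(), xs.mean()

# Synthetic eye region: bright background with a dark disc centred at (12, 20).
img = np.full((24, 40), 200.0)
yy, xx = np.mgrid[0:24, 0:40]
img[(yy - 12) ** 2 + (xx - 20) ** 2 <= 9] = 10.0
cy, cx = pupil_center(img)
```

Real webcam pipelines need eye-region detection and robustness to glints and eyelids first; this only illustrates the core localisation idea.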
2008.11007
Julien Siebert
Julien Siebert, Lisa Joeckel, Jens Heidrich, Koji Nakamichi, Kyoko Ohashi, Isao Namba, Rieko Yamamoto, Mikio Aoyama
Towards Guidelines for Assessing Qualities of Machine Learning Systems
Has been accepted at the 13th International Conference on the Quality of Information and Communications Technology QUATIC2020 (https://2020.quatic.org/). QUATIC 2020 proceedings will be included in a volume of Springer CCIS Series (Communications in Computer and Information Science)
Proceedings of the 13th International Conference on the Quality of Information and Communications Technology QUATIC2020 (https://2020.quatic.org/). Springer CCIS Series (Communications in Computer and Information Science)
10.1007/978-3-030-58793-2_2
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, systems containing components based on machine learning (ML) methods are becoming more widespread. In order to ensure the intended behavior of a software system, there are standards that define necessary quality aspects of the system and its components (such as ISO/IEC 25010). Due to the different nature of ML, we have to adjust quality aspects or add additional ones (such as trustworthiness) and be very precise about which aspect is really relevant for which object of interest (such as completeness of training data), and how to objectively assess adherence to quality requirements. In this article, we present the construction of a quality model (i.e., evaluation objects, quality aspects, and metrics) for an ML system based on an industrial use case. This quality model enables practitioners to specify and assess quality requirements for such kinds of ML systems objectively. In the future, we want to learn how the term quality differs between different types of ML systems and come up with general guidelines for specifying and assessing qualities of ML systems.
[ { "created": "Tue, 25 Aug 2020 13:45:54 GMT", "version": "v1" } ]
2020-08-26
[ [ "Siebert", "Julien", "" ], [ "Joeckel", "Lisa", "" ], [ "Heidrich", "Jens", "" ], [ "Nakamichi", "Koji", "" ], [ "Ohashi", "Kyoko", "" ], [ "Namba", "Isao", "" ], [ "Yamamoto", "Rieko", "" ], [ "Aoyama", "Mikio", "" ] ]
Nowadays, systems containing components based on machine learning (ML) methods are becoming more widespread. In order to ensure the intended behavior of a software system, there are standards that define necessary quality aspects of the system and its components (such as ISO/IEC 25010). Due to the different nature of ML, we have to adjust quality aspects or add additional ones (such as trustworthiness) and be very precise about which aspect is really relevant for which object of interest (such as completeness of training data), and how to objectively assess adherence to quality requirements. In this article, we present the construction of a quality model (i.e., evaluation objects, quality aspects, and metrics) for an ML system based on an industrial use case. This quality model enables practitioners to specify and assess quality requirements for such kinds of ML systems objectively. In the future, we want to learn how the term quality differs between different types of ML systems and come up with general guidelines for specifying and assessing qualities of ML systems.
2309.11446
Nikita Morozov
Valeriy Berezovskiy, Nikita Morozov
Weight Averaging Improves Knowledge Distillation under Domain Shift
ICCV 2023 Workshop on Out-of-Distribution Generalization in Computer Vision (OOD-CV)
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge distillation (KD) is a powerful model compression technique broadly used in practical deep learning applications. It is focused on training a small student network to mimic a larger teacher network. While it is widely known that KD can improve student generalization in the i.i.d. setting, its performance under domain shift, i.e., the performance of student networks on data from domains unseen during training, has received little attention in the literature. In this paper we make a step towards bridging the research fields of knowledge distillation and domain generalization. We show that weight averaging techniques proposed in the domain generalization literature, such as SWAD and SMA, also improve the performance of knowledge distillation under domain shift. In addition, we propose a simple weight averaging strategy that does not require evaluation on validation data during training and show that it performs on par with SWAD and SMA when applied to KD. We name our final distillation approach Weight-Averaged Knowledge Distillation (WAKD).
[ { "created": "Wed, 20 Sep 2023 16:23:30 GMT", "version": "v1" } ]
2023-09-21
[ [ "Berezovskiy", "Valeriy", "" ], [ "Morozov", "Nikita", "" ] ]
Knowledge distillation (KD) is a powerful model compression technique broadly used in practical deep learning applications. It is focused on training a small student network to mimic a larger teacher network. While it is widely known that KD can improve student generalization in the i.i.d. setting, its performance under domain shift, i.e., the performance of student networks on data from domains unseen during training, has received little attention in the literature. In this paper we take a step towards bridging the research fields of knowledge distillation and domain generalization. We show that weight averaging techniques proposed in the domain generalization literature, such as SWAD and SMA, also improve the performance of knowledge distillation under domain shift. In addition, we propose a simple weight averaging strategy that does not require evaluation on validation data during training and show that it performs on par with SWAD and SMA when applied to KD. We name our final distillation approach Weight-Averaged Knowledge Distillation (WAKD).
1703.04664
Anshumali Shrivastava
Anshumali Shrivastava
Optimal Densification for Fast and Accurate Minwise Hashing
Fast Minwise Hashing
null
null
null
cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Minwise hashing is fundamental and one of the most successful hashing algorithms in the literature. Recent advances based on the idea of densification~\cite{Proc:OneHashLSH_ICML14,Proc:Shrivastava_UAI14} have shown that it is possible to compute $k$ minwise hashes of a vector with $d$ nonzeros in mere $(d + k)$ computations, a significant improvement over the classical $O(dk)$. These advances have led to an algorithmic improvement in the query complexity of traditional indexing algorithms based on minwise hashing. Unfortunately, the variance of the current densification techniques is unnecessarily high, which leads to significantly poorer accuracy compared to vanilla minwise hashing, especially when the data is sparse. In this paper, we provide a novel densification scheme which relies on carefully tailored 2-universal hashes. We show that the proposed scheme is variance-optimal and, without losing runtime efficiency, significantly more accurate than existing densification techniques. As a result, we obtain a highly efficient hashing scheme which has the same variance and collision probability as minwise hashing. Experimental evaluations on real sparse and high-dimensional datasets validate our claims. We believe that, given these significant advantages, our method will replace minwise hashing implementations in practice.
[ { "created": "Tue, 14 Mar 2017 18:49:57 GMT", "version": "v1" } ]
2017-03-16
[ [ "Shrivastava", "Anshumali", "" ] ]
Minwise hashing is fundamental and one of the most successful hashing algorithms in the literature. Recent advances based on the idea of densification~\cite{Proc:OneHashLSH_ICML14,Proc:Shrivastava_UAI14} have shown that it is possible to compute $k$ minwise hashes of a vector with $d$ nonzeros in mere $(d + k)$ computations, a significant improvement over the classical $O(dk)$. These advances have led to an algorithmic improvement in the query complexity of traditional indexing algorithms based on minwise hashing. Unfortunately, the variance of the current densification techniques is unnecessarily high, which leads to significantly poorer accuracy compared to vanilla minwise hashing, especially when the data is sparse. In this paper, we provide a novel densification scheme which relies on carefully tailored 2-universal hashes. We show that the proposed scheme is variance-optimal and, without losing runtime efficiency, significantly more accurate than existing densification techniques. As a result, we obtain a highly efficient hashing scheme which has the same variance and collision probability as minwise hashing. Experimental evaluations on real sparse and high-dimensional datasets validate our claims. We believe that, given these significant advantages, our method will replace minwise hashing implementations in practice.
2402.14857
Xiaotian Zou
Xiaotian Zou, Yongkang Chen, Ke Li
Is the System Message Really Important to Jailbreaks in Large Language Models?
13 pages,3 figures
null
null
null
cs.CL cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid evolution of Large Language Models (LLMs) has rendered them indispensable in modern society. While security measures are typically in place to align LLMs with human values prior to release, recent studies have unveiled a concerning phenomenon named "Jailbreak". This term refers to the unexpected and potentially harmful responses generated by LLMs when prompted with malicious questions. Most existing research focuses on generating jailbreak prompts, but system message configurations vary significantly across experiments. In this paper, we aim to answer the question: Is the system message really important for jailbreaks in LLMs? We conduct experiments with mainstream LLMs to generate jailbreak prompts under varying system messages: short, long, and none. We discover that different system messages have distinct resistance to jailbreaks. Therefore, we explore the transferability of jailbreaks across LLMs with different system messages. Furthermore, we propose the System Messages Evolutionary Algorithm (SMEA) to generate system messages that are more resistant to jailbreak prompts, even with minor changes. Through SMEA, we obtain a robust population of system messages with little change in their length. Our research not only bolsters LLM security but also raises the bar for jailbreaks, fostering advancements in this field of study.
[ { "created": "Tue, 20 Feb 2024 17:39:40 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2024 19:22:19 GMT", "version": "v2" } ]
2024-06-21
[ [ "Zou", "Xiaotian", "" ], [ "Chen", "Yongkang", "" ], [ "Li", "Ke", "" ] ]
The rapid evolution of Large Language Models (LLMs) has rendered them indispensable in modern society. While security measures are typically in place to align LLMs with human values prior to release, recent studies have unveiled a concerning phenomenon named "Jailbreak". This term refers to the unexpected and potentially harmful responses generated by LLMs when prompted with malicious questions. Most existing research focuses on generating jailbreak prompts, but system message configurations vary significantly across experiments. In this paper, we aim to answer the question: Is the system message really important for jailbreaks in LLMs? We conduct experiments with mainstream LLMs to generate jailbreak prompts under varying system messages: short, long, and none. We discover that different system messages have distinct resistance to jailbreaks. Therefore, we explore the transferability of jailbreaks across LLMs with different system messages. Furthermore, we propose the System Messages Evolutionary Algorithm (SMEA) to generate system messages that are more resistant to jailbreak prompts, even with minor changes. Through SMEA, we obtain a robust population of system messages with little change in their length. Our research not only bolsters LLM security but also raises the bar for jailbreaks, fostering advancements in this field of study.
2210.17444
Sijie Mai
Sijie Mai, Ying Zeng, Haifeng Hu
Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations
This paper is accepted by IEEE Transactions on Multimedia. This version addresses some mistakes and typos in the original paper. The appendix is available at https://github.com/TmacMai/Multimodal-Information-Bottleneck/blob/main/appendix.pdf
null
10.1109/TMM.2022.3171679
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning effective joint embedding for cross-modal data has always been a focus in the field of multimodal machine learning. We argue that during multimodal fusion, the generated multimodal embedding may be redundant, and the discriminative unimodal information may be ignored, which often interferes with accurate prediction and leads to a higher risk of overfitting. Moreover, unimodal representations also contain noisy information that negatively influences the learning of cross-modal dynamics. To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations. Specifically, inheriting from the general information bottleneck (IB), MIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target and simultaneously constraining the mutual information between the representation and the input data. Different from general IB, our MIB regularizes both the multimodal and unimodal representations, which is a comprehensive and flexible framework that is compatible with any fusion method. We develop three MIB variants, namely, early-fusion MIB, late-fusion MIB, and complete MIB, to focus on different perspectives of information constraints. Experimental results suggest that the proposed method reaches state-of-the-art performance on the tasks of multimodal sentiment analysis and multimodal emotion recognition across three widely used datasets. The codes are available at \url{https://github.com/TmacMai/Multimodal-Information-Bottleneck}.
[ { "created": "Mon, 31 Oct 2022 16:14:18 GMT", "version": "v1" }, { "created": "Sat, 12 Nov 2022 14:27:03 GMT", "version": "v2" }, { "created": "Mon, 5 Dec 2022 12:41:04 GMT", "version": "v3" } ]
2022-12-06
[ [ "Mai", "Sijie", "" ], [ "Zeng", "Ying", "" ], [ "Hu", "Haifeng", "" ] ]
Learning effective joint embedding for cross-modal data has always been a focus in the field of multimodal machine learning. We argue that during multimodal fusion, the generated multimodal embedding may be redundant, and the discriminative unimodal information may be ignored, which often interferes with accurate prediction and leads to a higher risk of overfitting. Moreover, unimodal representations also contain noisy information that negatively influences the learning of cross-modal dynamics. To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations. Specifically, inheriting from the general information bottleneck (IB), MIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target and simultaneously constraining the mutual information between the representation and the input data. Different from general IB, our MIB regularizes both the multimodal and unimodal representations, which is a comprehensive and flexible framework that is compatible with any fusion method. We develop three MIB variants, namely, early-fusion MIB, late-fusion MIB, and complete MIB, to focus on different perspectives of information constraints. Experimental results suggest that the proposed method reaches state-of-the-art performance on the tasks of multimodal sentiment analysis and multimodal emotion recognition across three widely used datasets. The codes are available at \url{https://github.com/TmacMai/Multimodal-Information-Bottleneck}.
1310.8278
Sicun Gao
Sicun Gao, Soonho Kong, Edmund Clarke
Satisfiability Modulo ODEs
Published in FMCAD 2013
null
null
null
cs.LO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study SMT problems over the reals containing ordinary differential equations. They are important for formal verification of realistic hybrid systems and embedded software. We develop delta-complete algorithms for SMT formulas that are purely existentially quantified, as well as exists-forall formulas whose universal quantification is restricted to the time variables. We demonstrate scalability of the algorithms, as implemented in our open-source solver dReal, on SMT benchmarks with several hundred nonlinear ODEs and variables.
[ { "created": "Wed, 30 Oct 2013 19:24:34 GMT", "version": "v1" } ]
2013-10-31
[ [ "Gao", "Sicun", "" ], [ "Kong", "Soonho", "" ], [ "Clarke", "Edmund", "" ] ]
We study SMT problems over the reals containing ordinary differential equations. They are important for formal verification of realistic hybrid systems and embedded software. We develop delta-complete algorithms for SMT formulas that are purely existentially quantified, as well as exists-forall formulas whose universal quantification is restricted to the time variables. We demonstrate scalability of the algorithms, as implemented in our open-source solver dReal, on SMT benchmarks with several hundred nonlinear ODEs and variables.
1811.02666
Randi Wang
Randi Wang, Vadim Shapiro
Topological Semantics for Lumped Parameter Systems Modeling
null
Advanced Engineering Informatics, 42, 100958 (2019)
10.1016/j.aei.2019.100958
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behaviors of many engineering systems are described by lumped parameter models that encapsulate the spatially distributed nature of the system into networks of lumped elements; the dynamics of such a network is governed by a system of ordinary differential and algebraic equations. Languages and simulation tools for modeling such systems differ in syntax, informal semantics, and in the methods by which such systems of equations are generated and simulated, leading to numerous interoperability challenges. Logical extensions of SysML aim especially at unifying a subset of the underlying concepts in such languages. We propose to unify semantics of all such systems using standard notions from algebraic topology. In particular, Tonti diagrams classify all physical theories in terms of physical laws (topological and constitutive) defined over a pair of dual cochain complexes and may be used to describe different types of lumped parameter systems. We show that all possible methods for generating the corresponding state equations within each physical domain correspond to paths over Tonti diagrams. We further propose a generalization of the Tonti diagram that captures the behavior and supports canonical generation of state equations for multi-domain lumped parameter systems. The unified semantics provides a basis for greater interoperability in systems modeling, supporting automated translation, integration, reuse, and numerical simulation of models created in different authoring systems and applications. Notably, the proposed algebraic topological semantics is also compatible with spatially and temporally distributed models that are at the core of modern CAD and CAE systems.
[ { "created": "Sun, 4 Nov 2018 22:59:17 GMT", "version": "v1" } ]
2019-12-04
[ [ "Wang", "Randi", "" ], [ "Shapiro", "Vadim", "" ] ]
Behaviors of many engineering systems are described by lumped parameter models that encapsulate the spatially distributed nature of the system into networks of lumped elements; the dynamics of such a network is governed by a system of ordinary differential and algebraic equations. Languages and simulation tools for modeling such systems differ in syntax, informal semantics, and in the methods by which such systems of equations are generated and simulated, leading to numerous interoperability challenges. Logical extensions of SysML aim especially at unifying a subset of the underlying concepts in such languages. We propose to unify semantics of all such systems using standard notions from algebraic topology. In particular, Tonti diagrams classify all physical theories in terms of physical laws (topological and constitutive) defined over a pair of dual cochain complexes and may be used to describe different types of lumped parameter systems. We show that all possible methods for generating the corresponding state equations within each physical domain correspond to paths over Tonti diagrams. We further propose a generalization of the Tonti diagram that captures the behavior and supports canonical generation of state equations for multi-domain lumped parameter systems. The unified semantics provides a basis for greater interoperability in systems modeling, supporting automated translation, integration, reuse, and numerical simulation of models created in different authoring systems and applications. Notably, the proposed algebraic topological semantics is also compatible with spatially and temporally distributed models that are at the core of modern CAD and CAE systems.
2306.13028
Sidney Tio
Sidney Tio, Pradeep Varakantham
Transferable Curricula through Difficulty Conditioned Generators
IJCAI'23
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as Starcraft, Go, Chess, etc. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet such methods rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named Parameterized Environment Response Model (PERM) that shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM seeks to model the difficulty of environments and the ability of RL agents directly. Given that RL agents and humans are trained more efficiently in the "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not employ non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method is transferable between students without any sacrifice in training quality.
[ { "created": "Thu, 22 Jun 2023 16:45:45 GMT", "version": "v1" } ]
2023-06-23
[ [ "Tio", "Sidney", "" ], [ "Varakantham", "Pradeep", "" ] ]
Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as Starcraft, Go, Chess, etc. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet such methods rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named Parameterized Environment Response Model (PERM) that shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM seeks to model the difficulty of environments and the ability of RL agents directly. Given that RL agents and humans are trained more efficiently in the "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not employ non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method is transferable between students without any sacrifice in training quality.
1409.0081
Birgit Vogtenhuber
Oswin Aichholzer, Ruy Fabila-Monroy, Hern\'an Gonz\'alez-Aguilar, Thomas Hackl, Marco A. Heredia, Clemens Huemer, Jorge Urrutia, Pavel Valtr, and Birgit Vogtenhuber
On $k$-Gons and $k$-Holes in Point Sets
null
null
null
null
cs.DM cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a variation of the classical Erd\H{o}s-Szekeres problems on the existence and number of convex $k$-gons and $k$-holes (empty $k$-gons) in a set of $n$ points in the plane. Allowing the $k$-gons to be non-convex, we show bounds and structural results on maximizing and minimizing their numbers. Most noteworthy, for any $k$ and sufficiently large $n$, we give a quadratic lower bound for the number of $k$-holes, and show that this number is maximized by sets in convex position.
[ { "created": "Sat, 30 Aug 2014 04:21:07 GMT", "version": "v1" } ]
2014-09-02
[ [ "Aichholzer", "Oswin", "" ], [ "Fabila-Monroy", "Ruy", "" ], [ "González-Aguilar", "Hernán", "" ], [ "Hackl", "Thomas", "" ], [ "Heredia", "Marco A.", "" ], [ "Huemer", "Clemens", "" ], [ "Urrutia", "Jorge", "" ], [ "Valtr", "Pavel", "" ], [ "Vogtenhuber", "Birgit", "" ] ]
We consider a variation of the classical Erd\H{o}s-Szekeres problems on the existence and number of convex $k$-gons and $k$-holes (empty $k$-gons) in a set of $n$ points in the plane. Allowing the $k$-gons to be non-convex, we show bounds and structural results on maximizing and minimizing their numbers. Most noteworthy, for any $k$ and sufficiently large $n$, we give a quadratic lower bound for the number of $k$-holes, and show that this number is maximized by sets in convex position.
cs/0410033
Florentin Smarandache
Florentin Smarandache
An In-Depth Look at Information Fusion Rules & the Unification of Fusion Theories
27 pages. To be presented at NASA Langley Research Center (Hampton, Virginia), on November 5th, 2004
Partially published in Review of the Air Force Academy (The Scientific Informative Review), Brasov, No. 2, pp. 31-40, 2006.
null
null
cs.AI
null
This paper may look like a glossary of fusion rules, and we also introduce new ones, presenting their formulas and examples: Conjunctive, Disjunctive, Exclusive Disjunctive, Mixed Conjunctive-Disjunctive rules, Conditional rule, Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, Dezert-Smarandache classical and hybrid rules, Murphy's average rule, Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as particular cases: Inagaki's parameterized rule, Weighting Average Operator, minC (M. Daniel), and the newly proposed Proportional Conflict Redistribution rules (Smarandache-Dezert), among which PCR5 is the most exact way of redistributing the conflicting mass to non-empty sets following the path of the conjunctive rule], Zhang's Center Combination rule, Convolutive x-Averaging, Consensus Operator (Josang), Cautious Rule (Smets), $\alpha$-junction rules (Smets), etc., and three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to information fusion (Tchamova-Smarandache). Introducing the degree of union and the degree of inclusion with respect to the cardinal of sets, not from the fuzzy set point of view, besides that of intersection, many fusion rules can be improved. There are corner cases where each rule might have difficulties working or may not get an expected result.
[ { "created": "Thu, 14 Oct 2004 22:53:46 GMT", "version": "v1" }, { "created": "Wed, 27 Oct 2004 17:13:04 GMT", "version": "v2" } ]
2009-01-29
[ [ "Smarandache", "Florentin", "" ] ]
This paper may look like a glossary of fusion rules, and we also introduce new ones, presenting their formulas and examples: Conjunctive, Disjunctive, Exclusive Disjunctive, Mixed Conjunctive-Disjunctive rules, Conditional rule, Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, Dezert-Smarandache classical and hybrid rules, Murphy's average rule, Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as particular cases: Inagaki's parameterized rule, Weighting Average Operator, minC (M. Daniel), and the newly proposed Proportional Conflict Redistribution rules (Smarandache-Dezert), among which PCR5 is the most exact way of redistributing the conflicting mass to non-empty sets following the path of the conjunctive rule], Zhang's Center Combination rule, Convolutive x-Averaging, Consensus Operator (Josang), Cautious Rule (Smets), $\alpha$-junction rules (Smets), etc., and three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to information fusion (Tchamova-Smarandache). Introducing the degree of union and the degree of inclusion with respect to the cardinal of sets, not from the fuzzy set point of view, besides that of intersection, many fusion rules can be improved. There are corner cases where each rule might have difficulties working or may not get an expected result.
0711.4475
{\L}ukasz D{\ke}bowski
{\L}ukasz D\k{e}bowski
Valence extraction using EM selection and co-occurrence matrices
24 pages, 3 tables
Language Resources and Evaluation 43:301-327, 2009
10.1007/s10579-009-9100-5
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses two new procedures for extracting verb valences from raw texts, with an application to the Polish language. The first novel technique, the EM selection algorithm, performs unsupervised disambiguation of valence frame forests, obtained by applying a non-probabilistic deep grammar parser and some post-processing to the text. The second new idea concerns filtering of incorrect frames detected in the parsed text and is motivated by an observation that verbs which take similar arguments tend to have similar frames. This phenomenon is described in terms of newly introduced co-occurrence matrices. Using co-occurrence matrices, we split filtering into two steps. The list of valid arguments is first determined for each verb, whereas the pattern according to which the arguments are combined into frames is computed in the following stage. Our best extracted dictionary reaches an $F$-score of 45%, compared to an $F$-score of 39% for the standard frame-based BHT filtering.
[ { "created": "Wed, 28 Nov 2007 12:16:08 GMT", "version": "v1" }, { "created": "Wed, 5 Dec 2007 12:53:25 GMT", "version": "v2" }, { "created": "Fri, 11 Jul 2008 13:15:45 GMT", "version": "v3" }, { "created": "Wed, 10 Dec 2008 19:14:24 GMT", "version": "v4" }, { "created": "Wed, 29 Jul 2009 12:12:37 GMT", "version": "v5" }, { "created": "Fri, 27 Nov 2009 17:53:24 GMT", "version": "v6" } ]
2020-03-11
[ [ "Dębowski", "Łukasz", "" ] ]
This paper discusses two new procedures for extracting verb valences from raw texts, with an application to the Polish language. The first novel technique, the EM selection algorithm, performs unsupervised disambiguation of valence frame forests, obtained by applying a non-probabilistic deep grammar parser and some post-processing to the text. The second new idea concerns filtering of incorrect frames detected in the parsed text and is motivated by an observation that verbs which take similar arguments tend to have similar frames. This phenomenon is described in terms of newly introduced co-occurrence matrices. Using co-occurrence matrices, we split filtering into two steps. The list of valid arguments is first determined for each verb, whereas the pattern according to which the arguments are combined into frames is computed in the following stage. Our best extracted dictionary reaches an $F$-score of 45%, compared to an $F$-score of 39% for the standard frame-based BHT filtering.
1603.01336
Sabir Ribas
Sabir Ribas, Alberto Ueda, Rodrygo L. T. Santos, Berthier Ribeiro-Neto, Nivio Ziviani
Simplified Relative Citation Ratio for Static Paper Ranking: UFMG/LATIN at WSDM Cup 2016
WSDM Cup. The 9th ACM International Conference on Web Search and Data Mining San Francisco, California, USA. February 22-25, 2016
null
null
null
cs.IR cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Static rankings of papers play a key role in the academic search setting. Many features are commonly used in the literature to produce such rankings; examples include citation-based metrics and distinct applications of PageRank, among others. More recently, learning-to-rank techniques have been successfully applied to combine sets of features, producing effective results. In this work, we propose the metric S-RCR, which is a simplified version of a metric called Relative Citation Ratio --- both based on the idea of a co-citation network. When compared to the classical version, our simplification S-RCR leads to improved efficiency with reasonable effectiveness. We use S-RCR to rank over 120 million papers in the Microsoft Academic Graph dataset. By using this single feature, which has no parameters and does not need to be tuned, our team was able to reach the 3rd position in the first phase of the WSDM Cup 2016.
[ { "created": "Fri, 4 Mar 2016 03:00:46 GMT", "version": "v1" } ]
2016-03-07
[ [ "Ribas", "Sabir", "" ], [ "Ueda", "Alberto", "" ], [ "Santos", "Rodrygo L. T.", "" ], [ "Ribeiro-Neto", "Berthier", "" ], [ "Ziviani", "Nivio", "" ] ]
Static rankings of papers play a key role in the academic search setting. Many features are commonly used in the literature to produce such rankings; examples include citation-based metrics and distinct applications of PageRank, among others. More recently, learning-to-rank techniques have been successfully applied to combine sets of features, producing effective results. In this work, we propose the metric S-RCR, which is a simplified version of a metric called Relative Citation Ratio --- both based on the idea of a co-citation network. When compared to the classical version, our simplification S-RCR leads to improved efficiency with reasonable effectiveness. We use S-RCR to rank over 120 million papers in the Microsoft Academic Graph dataset. By using this single feature, which has no parameters and does not need to be tuned, our team was able to reach the 3rd position in the first phase of the WSDM Cup 2016.
2008.05510
Qiqi Ren
Qiqi Ren, Jian Chen, Omid Abbasi, Gunes Karabulut Kurt, Halim Yanikomeroglu, F. Richard Yu
An Application-Driven Non-Orthogonal Multiple Access Enabled Computation Offloading Scheme
13 pages,7 figures
null
10.1109/JIOT.2020.3015339
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To cope with the unprecedented surge in demand for data computing from applications, the promising concept of multi-access edge computing (MEC) has been proposed to enable the network edges to provide closer data processing for mobile devices (MDs). Since enormous workloads need to be migrated and MDs always remain resource-constrained, data offloading from devices to the MEC server will inevitably require more efficient transmission designs. The integration of the non-orthogonal multiple access (NOMA) technique with MEC has been shown to provide applications with lower latency and higher energy efficiency. However, existing designs of this type have mainly focused on the transmission technique, which is still insufficient. To further advance offloading performance, in this work we propose an application-driven NOMA-enabled computation offloading scheme by exploring the characteristics of applications, where the common data of the application is offloaded through multi-device cooperation. Under the premise of successfully offloading the common data, we formulate the problem as the maximization of individual offloading throughput, where the time allocation and power control are jointly optimized. By using the successive convex approximation (SCA) method, the formulated problem can be iteratively solved. Simulation results demonstrate the convergence of our method and the effectiveness of the proposed scheme.
[ { "created": "Wed, 12 Aug 2020 18:16:03 GMT", "version": "v1" } ]
2020-08-14
[ [ "Ren", "Qiqi", "" ], [ "Chen", "Jian", "" ], [ "Abbasi", "Omid", "" ], [ "Kurt", "Gunes Karabulut", "" ], [ "Yanikomeroglu", "Halim", "" ], [ "Yu", "F. Richard", "" ] ]
To cope with the unprecedented surge in demand for data computing from applications, the promising concept of multi-access edge computing (MEC) has been proposed to enable network edges to provide closer data processing for mobile devices (MDs). Since enormous workloads need to be migrated and MDs remain resource-constrained, data offloading from devices to the MEC server inevitably requires more efficient transmission designs. The integration of the non-orthogonal multiple access (NOMA) technique with MEC has been shown to provide applications with lower latency and higher energy efficiency. However, existing designs of this type have mainly focused on the transmission technique, which is still insufficient. To further advance offloading performance, in this work we propose an application-driven NOMA-enabled computation offloading scheme that exploits the characteristics of applications, where the common data of the application is offloaded through multi-device cooperation. Under the premise of successfully offloading the common data, we formulate the problem as the maximization of individual offloading throughput, where time allocation and power control are jointly optimized. Using the successive convex approximation (SCA) method, the formulated problem can be solved iteratively. Simulation results demonstrate the convergence of our method and the effectiveness of the proposed scheme.
2011.06840
Sidney Golstein
Sidney Jonathan Golstein, Fran\c{c}ois Rottenberg, Fran\c{c}ois Horlin, Philippe De Doncker, Julien Sarrazin
Physical Layer Security in a SISO Communication using Frequency-Domain Time-Reversal OFDM Precoding and Artificial Noise Injection
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
A frequency-domain (FD) time-reversal (TR) precoder is proposed to perform physical layer security (PLS) in single-input single-output (SISO) systems using orthogonal frequency-division multiplexing (OFDM) and artificial noise (AN) signal injection. The AN signal does not corrupt the data transmission to the legitimate receiver but degrades the decoding performance of the eavesdropper. This scheme guarantees the secrecy of a communication towards a legitimate user when the transmitter knows the instantaneous channel state information (CSI) of the legitimate link, thanks to channel reciprocity in time division duplex (TDD) systems, but does not know the instantaneous CSI of a potential eavesdropper. Three optimal decoding structures at the eavesdropper are considered in a fast fading (FF) environment, depending on the handshake procedure between Alice and Bob. Closed-form approximations of the AN energy to inject in order to maximize the secrecy rate (SR) of the communication are derived. In addition, the required conditions at the legitimate receiver's end to guarantee a given SR are determined when Eve's signal-to-noise ratio (SNR) is infinite. Furthermore, a waterfilling power allocation strategy is presented to further enhance the secrecy of the scheme. Simulation results are presented to demonstrate the security performance of the proposed secure system.
[ { "created": "Fri, 13 Nov 2020 09:57:52 GMT", "version": "v1" } ]
2020-11-16
[ [ "Golstein", "Sidney Jonathan", "" ], [ "Rottenberg", "François", "" ], [ "Horlin", "François", "" ], [ "De Doncker", "Philippe", "" ], [ "Sarrazin", "Julien", "" ] ]
A frequency-domain (FD) time-reversal (TR) precoder is proposed to perform physical layer security (PLS) in single-input single-output (SISO) systems using orthogonal frequency-division multiplexing (OFDM) and artificial noise (AN) signal injection. The AN signal does not corrupt the data transmission to the legitimate receiver but degrades the decoding performance of the eavesdropper. This scheme guarantees the secrecy of a communication towards a legitimate user when the transmitter knows the instantaneous channel state information (CSI) of the legitimate link, thanks to channel reciprocity in time division duplex (TDD) systems, but does not know the instantaneous CSI of a potential eavesdropper. Three optimal decoding structures at the eavesdropper are considered in a fast fading (FF) environment, depending on the handshake procedure between Alice and Bob. Closed-form approximations of the AN energy to inject in order to maximize the secrecy rate (SR) of the communication are derived. In addition, the required conditions at the legitimate receiver's end to guarantee a given SR are determined when Eve's signal-to-noise ratio (SNR) is infinite. Furthermore, a waterfilling power allocation strategy is presented to further enhance the secrecy of the scheme. Simulation results are presented to demonstrate the security performance of the proposed secure system.
2310.08650
Alexander Most
Alexander Most, Maksim Eren, Nigel Lawrence, Boian Alexandrov
Electrical Grid Anomaly Detection via Tensor Decomposition
8 pages, 2 figures. In IEEE Military Communications Conference, Artificial Intelligence for Cyber Workshop (MILCOM), 2023
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervisory Control and Data Acquisition (SCADA) systems often serve as the nervous system for substations within power grids. These systems facilitate real-time monitoring, data acquisition, and control of equipment, and ensure smooth and efficient operation of the substation and its connected devices. Previous work has shown that dimensionality-reduction-based approaches, such as Principal Component Analysis (PCA), can be used for accurate identification of anomalies in SCADA systems. While not specifically applied to SCADA, non-negative matrix factorization (NMF) has shown strong results at detecting anomalies in wireless sensor networks. These unsupervised approaches model the normal or expected behavior and detect unseen types of attacks or anomalies by identifying the events that deviate from the expected behavior. These approaches, however, do not model the complex and multi-dimensional interactions that are naturally present in SCADA systems. In contrast, non-negative tensor decomposition is a powerful unsupervised machine learning (ML) method that can model the complex and multi-faceted activity details of SCADA events. In this work, we present a novel application of the tensor decomposition method Canonical Polyadic Alternating Poisson Regression (CP-APR) with a probabilistic framework, which has previously shown state-of-the-art anomaly detection results on cyber network data, to identify anomalies in SCADA systems. We showcase that the use of statistical behavior analysis of SCADA communication with tensor decomposition improves the specificity and accuracy of identifying anomalies in electrical grid systems. In our experiments, we model real-world SCADA system data collected from the electrical grid operated by Los Alamos National Laboratory (LANL), which provides transmission and distribution service through a partnership with Los Alamos County, and detect synthetically generated anomalies.
[ { "created": "Thu, 12 Oct 2023 18:23:06 GMT", "version": "v1" } ]
2023-10-16
[ [ "Most", "Alexander", "" ], [ "Eren", "Maksim", "" ], [ "Lawrence", "Nigel", "" ], [ "Alexandrov", "Boian", "" ] ]
Supervisory Control and Data Acquisition (SCADA) systems often serve as the nervous system for substations within power grids. These systems facilitate real-time monitoring, data acquisition, and control of equipment, and ensure smooth and efficient operation of the substation and its connected devices. Previous work has shown that dimensionality-reduction-based approaches, such as Principal Component Analysis (PCA), can be used for accurate identification of anomalies in SCADA systems. While not specifically applied to SCADA, non-negative matrix factorization (NMF) has shown strong results at detecting anomalies in wireless sensor networks. These unsupervised approaches model the normal or expected behavior and detect unseen types of attacks or anomalies by identifying the events that deviate from the expected behavior. These approaches, however, do not model the complex and multi-dimensional interactions that are naturally present in SCADA systems. In contrast, non-negative tensor decomposition is a powerful unsupervised machine learning (ML) method that can model the complex and multi-faceted activity details of SCADA events. In this work, we present a novel application of the tensor decomposition method Canonical Polyadic Alternating Poisson Regression (CP-APR) with a probabilistic framework, which has previously shown state-of-the-art anomaly detection results on cyber network data, to identify anomalies in SCADA systems. We showcase that the use of statistical behavior analysis of SCADA communication with tensor decomposition improves the specificity and accuracy of identifying anomalies in electrical grid systems. In our experiments, we model real-world SCADA system data collected from the electrical grid operated by Los Alamos National Laboratory (LANL), which provides transmission and distribution service through a partnership with Los Alamos County, and detect synthetically generated anomalies.
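The anomaly-scoring step that this abstract describes can be sketched as follows: once CP factors of a SCADA event count tensor are available, each entry is scored by its Poisson negative log-likelihood under the rate tensor reconstructed from the factors. This is a minimal illustration under our own assumptions (random stand-in factors; the CP-APR fitting step itself is omitted), not the authors' implementation.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def poisson_anomaly_scores(T, factors):
    A, B, C = factors
    # Rate tensor reconstructed from the CP factors.
    rate = np.einsum("ir,jr,kr->ijk", A, B, C) + 1e-9
    # Per-entry Poisson negative log-likelihood; higher = more anomalous.
    return rate - T * np.log(rate) + np.vectorize(lgamma)(T + 1.0)

# Stand-in non-negative factors for a small device x event x time tensor.
A, B, C = rng.random((4, 2)), rng.random((5, 2)), rng.random((3, 2))
T = rng.poisson(np.einsum("ir,jr,kr->ijk", A, B, C)).astype(float)
scores = poisson_anomaly_scores(T, (A, B, C))
print(scores.shape)  # (4, 5, 3)
```

In practice, entries (or slices) whose scores exceed a threshold calibrated on normal traffic would be flagged as candidate anomalies.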
2004.00280
Hengtong Hu
Hengtong Hu, Lingxi Xie, Richang Hong, Qi Tian
Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing
This paper has been accepted for CVPR2020
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, cross-modal hashing (CMH) has attracted increasing attention, mainly because of its potential to map contents from different modalities, especially vision and language, into the same space, making cross-modal data retrieval efficient. There are two main frameworks for CMH, differing in whether semantic supervision is required. Compared to unsupervised methods, supervised methods often enjoy more accurate results, but require much heavier labor in data annotation. In this paper, we propose a novel approach that guides a supervised method using outputs produced by an unsupervised method. Specifically, we make use of teacher-student optimization for propagating knowledge. Experiments are performed on two popular CMH benchmarks, i.e., the MIRFlickr and NUS-WIDE datasets. Our approach outperforms all existing unsupervised methods by a large margin.
[ { "created": "Wed, 1 Apr 2020 08:32:15 GMT", "version": "v1" } ]
2020-04-02
[ [ "Hu", "Hengtong", "" ], [ "Xie", "Lingxi", "" ], [ "Hong", "Richang", "" ], [ "Tian", "Qi", "" ] ]
In recent years, cross-modal hashing (CMH) has attracted increasing attention, mainly because of its potential to map contents from different modalities, especially vision and language, into the same space, making cross-modal data retrieval efficient. There are two main frameworks for CMH, differing in whether semantic supervision is required. Compared to unsupervised methods, supervised methods often enjoy more accurate results, but require much heavier labor in data annotation. In this paper, we propose a novel approach that guides a supervised method using outputs produced by an unsupervised method. Specifically, we make use of teacher-student optimization for propagating knowledge. Experiments are performed on two popular CMH benchmarks, i.e., the MIRFlickr and NUS-WIDE datasets. Our approach outperforms all existing unsupervised methods by a large margin.
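One way the teacher-student idea in this abstract can be sketched: the unsupervised teacher's continuous hash codes are turned into binary cross-modal similarity pseudo-labels, which then supervise the student. The cosine criterion and the threshold below are our illustrative choices, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_pseudo_labels(img_codes, txt_codes, thresh=0.0):
    # Cross-modal cosine similarity between the teacher's continuous
    # image and text hash codes.
    a = img_codes / np.linalg.norm(img_codes, axis=1, keepdims=True)
    b = txt_codes / np.linalg.norm(txt_codes, axis=1, keepdims=True)
    sim = a @ b.T
    # Binarize into similar (1) / dissimilar (0) pseudo-labels that a
    # supervised student network can be trained against.
    return (sim > thresh).astype(np.int8)

S = teacher_pseudo_labels(rng.normal(size=(4, 16)), rng.normal(size=(4, 16)))
print(S.shape)  # (4, 4)
```

The student would then be optimized with an ordinary supervised CMH loss, treating `S` as if it were ground-truth semantic similarity.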
2009.13987
Lukas Ruff
Michael Joswig, Marek Kaluba, Lukas Ruff
Geometric Disentanglement by Random Convex Polytopes
23 pages, preprint; extended experiments and theoretical analysis of RPD in v2
null
null
null
cs.LG math.MG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new geometric method for measuring the quality of representations obtained from deep learning. Our approach, called Random Polytope Descriptor, provides an efficient description of data points based on the construction of random convex polytopes. We demonstrate the use of our technique by qualitatively comparing the behavior of classic and regularized autoencoders. This reveals that applying regularization to autoencoder networks may decrease the out-of-distribution detection performance in latent space. While our technique is similar in spirit to $k$-means clustering, we achieve significantly better false positive/negative balance in clustering tasks on autoencoded datasets.
[ { "created": "Tue, 29 Sep 2020 13:16:26 GMT", "version": "v1" }, { "created": "Sat, 13 Feb 2021 07:39:43 GMT", "version": "v2" } ]
2021-02-16
[ [ "Joswig", "Michael", "" ], [ "Kaluba", "Marek", "" ], [ "Ruff", "Lukas", "" ] ]
We propose a new geometric method for measuring the quality of representations obtained from deep learning. Our approach, called Random Polytope Descriptor, provides an efficient description of data points based on the construction of random convex polytopes. We demonstrate the use of our technique by qualitatively comparing the behavior of classic and regularized autoencoders. This reveals that applying regularization to autoencoder networks may decrease the out-of-distribution detection performance in latent space. While our technique is similar in spirit to $k$-means clustering, we achieve significantly better false positive/negative balance in clustering tasks on autoencoded datasets.
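A hedged sketch of describing data via random convex polytopes, in the spirit of the abstract above: each point receives a binary code whose bits record membership in random polytopes built as intersections of random half-spaces anchored at data points. This is one plausible reading of the idea, not the paper's exact Random Polytope Descriptor construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_polytope_code(X, n_polytopes=32, n_faces=8):
    """Bit p of a point's code is 1 iff the point lies inside random
    convex polytope p, an intersection of n_faces half-spaces."""
    n, d = X.shape
    codes = np.zeros((n, n_polytopes), dtype=bool)
    for p in range(n_polytopes):
        normals = rng.normal(size=(n_faces, d))
        # Anchor each half-space at a random data point so the
        # polytopes intersect the data region.
        anchors = X[rng.integers(0, n, size=n_faces)]
        offsets = np.einsum("fd,fd->f", normals, anchors)
        inside = (X @ normals.T) <= offsets  # all faces satisfied?
        codes[:, p] = inside.all(axis=1)
    return codes

X = rng.normal(size=(100, 5))
C = random_polytope_code(X)
print(C.shape)  # (100, 32)
```

Codes of this kind could then be compared between in-distribution and out-of-distribution latent points, e.g. via Hamming distance.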
2211.01713
Jiabin Chen
Fei Xu, Jianian Xu, Jiabin Chen, Li Chen, Ruitao Shang, Zhi Zhou, Fangming Liu
iGniter: Interference-Aware GPU Resource Provisioning for Predictable DNN Inference in the Cloud
16 pages, 21 figures, submitted to IEEE Transactions on Parallel and Distributed Systems
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GPUs are essential to accelerating the latency-sensitive deep neural network (DNN) inference workloads in cloud datacenters. To fully utilize GPU resources, spatial sharing of GPUs among co-located DNN inference workloads becomes increasingly compelling. However, GPU sharing inevitably brings severe performance interference among co-located inference workloads, as motivated by an empirical measurement study of DNN inference on EC2 GPU instances. While existing works on guaranteeing inference performance service level objectives (SLOs) focus on either temporal sharing of GPUs or reactive GPU resource scaling and inference migration techniques, how to proactively mitigate such severe performance interference has received comparatively little attention. In this paper, we propose iGniter, an interference-aware GPU resource provisioning framework for cost-efficiently achieving predictable DNN inference in the cloud. iGniter comprises two key components: (1) a lightweight DNN inference performance model, which leverages practically accessible system and workload metrics to capture the performance interference; and (2) a cost-efficient GPU resource provisioning strategy that jointly optimizes GPU resource allocation and adaptive batching based on our inference performance model, with the aim of achieving predictable performance of DNN inference workloads. We implement a prototype of iGniter based on the NVIDIA Triton inference server hosted on EC2 GPU instances. Extensive prototype experiments on four representative DNN models and datasets demonstrate that iGniter can guarantee the performance SLOs of DNN inference workloads with practically acceptable runtime overhead, while saving the monetary cost by up to 25% in comparison to the state-of-the-art GPU resource provisioning strategies.
[ { "created": "Thu, 3 Nov 2022 11:07:09 GMT", "version": "v1" } ]
2022-11-04
[ [ "Xu", "Fei", "" ], [ "Xu", "Jianian", "" ], [ "Chen", "Jiabin", "" ], [ "Chen", "Li", "" ], [ "Shang", "Ruitao", "" ], [ "Zhou", "Zhi", "" ], [ "Liu", "Fangming", "" ] ]
GPUs are essential to accelerating the latency-sensitive deep neural network (DNN) inference workloads in cloud datacenters. To fully utilize GPU resources, spatial sharing of GPUs among co-located DNN inference workloads becomes increasingly compelling. However, GPU sharing inevitably brings severe performance interference among co-located inference workloads, as motivated by an empirical measurement study of DNN inference on EC2 GPU instances. While existing works on guaranteeing inference performance service level objectives (SLOs) focus on either temporal sharing of GPUs or reactive GPU resource scaling and inference migration techniques, how to proactively mitigate such severe performance interference has received comparatively little attention. In this paper, we propose iGniter, an interference-aware GPU resource provisioning framework for cost-efficiently achieving predictable DNN inference in the cloud. iGniter comprises two key components: (1) a lightweight DNN inference performance model, which leverages practically accessible system and workload metrics to capture the performance interference; and (2) a cost-efficient GPU resource provisioning strategy that jointly optimizes GPU resource allocation and adaptive batching based on our inference performance model, with the aim of achieving predictable performance of DNN inference workloads. We implement a prototype of iGniter based on the NVIDIA Triton inference server hosted on EC2 GPU instances. Extensive prototype experiments on four representative DNN models and datasets demonstrate that iGniter can guarantee the performance SLOs of DNN inference workloads with practically acceptable runtime overhead, while saving the monetary cost by up to 25% in comparison to the state-of-the-art GPU resource provisioning strategies.
1706.02867
Milad Niknejad
Milad Niknejad, Jose M. Bioucas-Dias, Mario A. T. Figueiredo
Class-specific Poisson denoising by patch-based importance sampling
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of recovering images degraded by Poisson noise, where the image is known to belong to a specific class. In the proposed method, a dataset of clean patches from images of the class of interest is clustered using multivariate Gaussian distributions. In order to recover the noisy image, each noisy patch is assigned to one of these distributions, and the corresponding minimum mean squared error (MMSE) estimate is obtained. We propose to use a self-normalized importance sampling approach, a method from the Monte Carlo family, both for determining the most likely distribution and for approximating the MMSE estimate of the clean patch. Experimental results show that our proposed method outperforms other methods for Poisson denoising in the low-SNR regime.
[ { "created": "Fri, 9 Jun 2017 08:47:26 GMT", "version": "v1" } ]
2017-06-12
[ [ "Niknejad", "Milad", "" ], [ "Bioucas-Dias", "Jose M.", "" ], [ "Figueiredo", "Mario A. T.", "" ] ]
In this paper, we address the problem of recovering images degraded by Poisson noise, where the image is known to belong to a specific class. In the proposed method, a dataset of clean patches from images of the class of interest is clustered using multivariate Gaussian distributions. In order to recover the noisy image, each noisy patch is assigned to one of these distributions, and the corresponding minimum mean squared error (MMSE) estimate is obtained. We propose to use a self-normalized importance sampling approach, a method from the Monte Carlo family, both for determining the most likely distribution and for approximating the MMSE estimate of the clean patch. Experimental results show that our proposed method outperforms other methods for Poisson denoising in the low-SNR regime.
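The self-normalized importance sampling estimate described in this abstract can be sketched as follows, using a Gaussian cluster prior as the proposal and the Poisson likelihood as the unnormalized weight. The toy patch size and cluster parameters are stand-ins, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def snis_mmse(y, mu, cov, n_samples=2000):
    # Draw candidate clean patches from the Gaussian cluster prior,
    # which doubles as the importance-sampling proposal.
    x = rng.multivariate_normal(mu, cov, size=n_samples)
    x = np.clip(x, 1e-6, None)  # Poisson rates must be positive
    # Unnormalized log-weights: Poisson likelihood p(y | x), dropping
    # the x-independent log(y!) term.
    logw = np.sum(y * np.log(x) - x, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()  # self-normalization
    return w @ x  # weighted average approximates E[x | y]

mu = np.full(4, 10.0)  # hypothetical 4-pixel "patch", single cluster
y = rng.poisson(mu)    # Poisson-degraded observation
x_hat = snis_mmse(y, mu, np.eye(4))
print(x_hat.shape)  # (4,)
```

With several clusters, the same weights (before normalization) also score how likely each cluster is for the patch, which covers the assignment step mentioned in the abstract.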
2311.07607
Joohwan Ko
Joohwan Ko, Andrew A. Li
Modeling Choice via Self-Attention
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Models of choice are a fundamental input to many now-canonical optimization problems in the field of Operations Management, including assortment, inventory, and price optimization. Naturally, accurate estimation of these models from data is a critical step in the application of these optimization problems in practice. Concurrently, recent advancements in deep learning have sparked interest in integrating these techniques into choice modeling. However, there is a noticeable research gap at the intersection of deep learning and choice modeling, particularly with both theoretical and empirical foundations. Thus motivated, we first propose a choice model that is the first to successfully (both theoretically and practically) leverage a modern neural network architectural concept (self-attention). Theoretically, we show that our attention-based choice model is a low-rank generalization of the Halo Multinomial Logit (Halo-MNL) model. We prove that whereas the Halo-MNL requires $\Omega(m^2)$ data samples to estimate, where $m$ is the number of products, our model supports a natural nonconvex estimator (in particular, that which a standard neural network implementation would apply) which admits a near-optimal stationary point with $O(m)$ samples. Additionally, we establish the first realistic-scale benchmark for choice model estimation on real data, conducting the most extensive evaluation of existing models to date, thereby highlighting our model's superior performance.
[ { "created": "Sat, 11 Nov 2023 11:13:07 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 09:32:44 GMT", "version": "v2" } ]
2024-02-09
[ [ "Ko", "Joohwan", "" ], [ "Li", "Andrew A.", "" ] ]
Models of choice are a fundamental input to many now-canonical optimization problems in the field of Operations Management, including assortment, inventory, and price optimization. Naturally, accurate estimation of these models from data is a critical step in the application of these optimization problems in practice. Concurrently, recent advancements in deep learning have sparked interest in integrating these techniques into choice modeling. However, there is a noticeable research gap at the intersection of deep learning and choice modeling, particularly with both theoretical and empirical foundations. Thus motivated, we first propose a choice model that is the first to successfully (both theoretically and practically) leverage a modern neural network architectural concept (self-attention). Theoretically, we show that our attention-based choice model is a low-rank generalization of the Halo Multinomial Logit (Halo-MNL) model. We prove that whereas the Halo-MNL requires $\Omega(m^2)$ data samples to estimate, where $m$ is the number of products, our model supports a natural nonconvex estimator (in particular, that which a standard neural network implementation would apply) which admits a near-optimal stationary point with $O(m)$ samples. Additionally, we establish the first realistic-scale benchmark for choice model estimation on real data, conducting the most extensive evaluation of existing models to date, thereby highlighting our model's superior performance.
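A toy illustration of a low-rank halo-style choice model in the spirit of the abstract above: pairwise interaction effects among offered products enter the utilities through a rank-r factorization, so only O(mr) parameters are needed instead of the Halo-MNL's O(m^2). The parameter values and exact functional form here are our assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 6, 2  # products and rank, with r << m

base = rng.normal(size=m)  # base utilities
U, V = rng.normal(size=(m, r)), rng.normal(size=(m, r))
A = U @ V.T  # low-rank halo matrix: A[i, j] = effect of offering j on i

def choice_probs(assortment):
    S = np.array(sorted(assortment))
    # Utility of each offered product: base term plus the halo effects
    # of the other offered products (self-interaction removed).
    u = base[S] + A[np.ix_(S, S)].sum(axis=1) - np.diag(A)[S]
    w = np.exp(u - u.max())  # numerically stable softmax
    return dict(zip(S.tolist(), (w / w.sum()).tolist()))

p = choice_probs({0, 2, 3})
print(round(sum(p.values()), 6))  # 1.0
```

Setting r = m recovers an unrestricted interaction matrix, which matches the abstract's framing of the attention-based model as a low-rank generalization of the Halo-MNL.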
1706.04315
Mikhail Prokopenko
Mikhail Prokopenko, Peter Wang, Sebastian Marian, Aijun Bai, Xiao Li, Xiaoping Chen
RoboCup 2D Soccer Simulation League: Evaluation Challenges
12 pages, RoboCup-2017, Nagoya, Japan, July 2017
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We summarise the results of the RoboCup 2D Soccer Simulation League in 2016 (Leipzig), including the main competition and the evaluation round. The evaluation round held in Leipzig confirmed the strength of the RoboCup-2015 champion (WrightEagle, i.e. WE2015) in the League, with only the eventual finalists of the 2016 competition capable of defeating WE2015. An extended, post-Leipzig, round-robin tournament, which included the top 8 teams of 2016 as well as WE2015, with over 1000 games played for each pair, placed WE2015 third behind the champion team (Gliders2016) and the runner-up (HELIOS2016). This establishes WE2015 as a stable benchmark for the 2D Simulation League. We then contrast two ranking methods and suggest two options for future evaluation challenges. The first, "The Champions Simulation League", is proposed to include 6 previous champions competing directly against each other in a round-robin tournament, with the view to systematically tracing the advancements in the League. The second proposal, "The Global Challenge", aims to increase the realism of the environmental conditions during the simulated games by simulating specific features of different participating countries.
[ { "created": "Wed, 14 Jun 2017 04:53:42 GMT", "version": "v1" } ]
2017-06-15
[ [ "Prokopenko", "Mikhail", "" ], [ "Wang", "Peter", "" ], [ "Marian", "Sebastian", "" ], [ "Bai", "Aijun", "" ], [ "Li", "Xiao", "" ], [ "Chen", "Xiaoping", "" ] ]
We summarise the results of the RoboCup 2D Soccer Simulation League in 2016 (Leipzig), including the main competition and the evaluation round. The evaluation round held in Leipzig confirmed the strength of the RoboCup-2015 champion (WrightEagle, i.e. WE2015) in the League, with only the eventual finalists of the 2016 competition capable of defeating WE2015. An extended, post-Leipzig, round-robin tournament, which included the top 8 teams of 2016 as well as WE2015, with over 1000 games played for each pair, placed WE2015 third behind the champion team (Gliders2016) and the runner-up (HELIOS2016). This establishes WE2015 as a stable benchmark for the 2D Simulation League. We then contrast two ranking methods and suggest two options for future evaluation challenges. The first, "The Champions Simulation League", is proposed to include 6 previous champions competing directly against each other in a round-robin tournament, with the view to systematically tracing the advancements in the League. The second proposal, "The Global Challenge", aims to increase the realism of the environmental conditions during the simulated games by simulating specific features of different participating countries.
1609.08042
Xavier Corbillon
Xavier Corbillon and Gwendal Simon and Alisa Devlic and Jacob Chakareski
Viewport-Adaptive Navigable 360-Degree Video Delivery
7 pages + 6 figures
In proceeding of 2017 IEEE International Conference on Communications (ICC), pages 1-7
10.1109/ICC.2017.7996611
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The delivery and display of 360-degree videos on Head-Mounted Displays (HMDs) presents many technical challenges. 360-degree videos are ultra-high-resolution spherical videos that contain an omnidirectional view of the scene. However, only a portion of this scene is displayed on the HMD. Moreover, HMDs need to respond to head movements within 10 ms, which prevents the server from sending only the displayed video part based on client feedback. To reduce bandwidth waste while still providing an immersive experience, a viewport-adaptive 360-degree video streaming system is proposed. The server prepares multiple video representations, which differ not only in their bit-rate, but also in the qualities of different scene regions. The client chooses a representation for the next segment such that its bit-rate fits the available throughput and a full-quality region matches its viewing. We investigate the impact of various spherical-to-plane projections and quality arrangements on the video quality displayed to the user, showing that the cube map layout offers the best quality for the given bit-rate budget. An evaluation with a dataset of users navigating 360-degree videos demonstrates that segments need to be short enough to enable frequent view switches.
[ { "created": "Mon, 26 Sep 2016 16:10:48 GMT", "version": "v1" }, { "created": "Wed, 10 May 2017 08:49:35 GMT", "version": "v2" } ]
2017-08-02
[ [ "Corbillon", "Xavier", "" ], [ "Simon", "Gwendal", "" ], [ "Devlic", "Alisa", "" ], [ "Chakareski", "Jacob", "" ] ]
The delivery and display of 360-degree videos on Head-Mounted Displays (HMDs) presents many technical challenges. 360-degree videos are ultra-high-resolution spherical videos that contain an omnidirectional view of the scene. However, only a portion of this scene is displayed on the HMD. Moreover, HMDs need to respond to head movements within 10 ms, which prevents the server from sending only the displayed video part based on client feedback. To reduce bandwidth waste while still providing an immersive experience, a viewport-adaptive 360-degree video streaming system is proposed. The server prepares multiple video representations, which differ not only in their bit-rate, but also in the qualities of different scene regions. The client chooses a representation for the next segment such that its bit-rate fits the available throughput and a full-quality region matches its viewing. We investigate the impact of various spherical-to-plane projections and quality arrangements on the video quality displayed to the user, showing that the cube map layout offers the best quality for the given bit-rate budget. An evaluation with a dataset of users navigating 360-degree videos demonstrates that segments need to be short enough to enable frequent view switches.
2009.05254
Saroj Sahoo
Saroj Sahoo and Matthew Berger
Visually Analyzing and Steering Zero Shot Learning
null
null
null
null
cs.HC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a visual analytics system to help a user analyze and steer zero-shot learning models. Zero-shot learning has emerged as a viable scenario for categorizing data that consists of no labeled examples, and thus a promising approach to minimize data annotation from humans. However, it is challenging to understand where zero-shot learning fails, the cause of such failures, and how a user can modify the model to prevent such failures. Our visualization system is designed to help users diagnose and understand mispredictions in such models, so that they may gain insight on the behavior of a model when applied to data associated with categories not seen during training. Through usage scenarios, we highlight how our system can help a user improve performance in zero-shot learning.
[ { "created": "Fri, 11 Sep 2020 06:58:13 GMT", "version": "v1" } ]
2020-09-14
[ [ "Sahoo", "Saroj", "" ], [ "Berger", "Matthew", "" ] ]
We propose a visual analytics system to help a user analyze and steer zero-shot learning models. Zero-shot learning has emerged as a viable scenario for categorizing data that consists of no labeled examples, and thus a promising approach to minimize data annotation from humans. However, it is challenging to understand where zero-shot learning fails, the cause of such failures, and how a user can modify the model to prevent such failures. Our visualization system is designed to help users diagnose and understand mispredictions in such models, so that they may gain insight on the behavior of a model when applied to data associated with categories not seen during training. Through usage scenarios, we highlight how our system can help a user improve performance in zero-shot learning.
2304.01225
Aqsa Ashraf Makhdomi
Aqsa Ashraf Makhdomi and Iqra Altaf Gillani
A greedy approach for increased vehicle utilization in ridesharing networks
null
null
null
null
cs.DS cs.CY cs.IR math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, ridesharing platforms have become a prominent mode of transportation for the residents of urban areas. As a fundamental problem, route recommendation for these platforms is vital for their sustenance. Prior work in this direction has recommended routes with higher passenger demand. Despite the existing works, statistics suggest that these services cause increased greenhouse emissions compared to private vehicles, as they roam around in search of riders. This analysis provides finer details regarding the functionality of ridesharing systems and reveals that, despite their boom, they have not utilized vehicle capacity efficiently. We propose to overcome the above limitations and recommend routes that fetch multiple passengers simultaneously, which results in increased vehicle utilization and thereby decreases the effect of these systems on the environment. As route recommendation is NP-hard, we propose a k-hop-based sliding-window approximation algorithm that reduces the search space from the entire road network to a window. We further demonstrate that maximizing expected demand is submodular, so greedy algorithms can be used to optimize our objective function within a window. We evaluate our proposed model on real-world datasets, and experimental results demonstrate its superior performance.
[ { "created": "Sun, 2 Apr 2023 07:25:01 GMT", "version": "v1" }, { "created": "Mon, 22 Jan 2024 06:31:50 GMT", "version": "v2" } ]
2024-01-23
[ [ "Makhdomi", "Aqsa Ashraf", "" ], [ "Gillani", "Iqra Altaf", "" ] ]
In recent years, ridesharing platforms have become a prominent mode of transportation for the residents of urban areas. As a fundamental problem, route recommendation for these platforms is vital for their sustenance. The works done in this direction have recommended routes with higher passenger demand. Despite the existing works, statistics have suggested that these services cause increased greenhouse emissions compared to private vehicles as they roam around in search of riders. This analysis provides finer details regarding the functionality of ridesharing systems and it reveals that in the face of their boom, they have not utilized the vehicle capacity efficiently. We propose to overcome the above limitations and recommend routes that will fetch multiple passengers simultaneously which will result in increased vehicle utilization and thereby decrease the effect of these systems on the environment. As route recommendation is NP-hard, we propose a k-hop-based sliding window approximation algorithm that reduces the search space from the entire road network to a window. We further demonstrate that maximizing expected demand is submodular and greedy algorithms can be used to optimize our objective function within a window. We evaluate our proposed model on real-world datasets and experimental results demonstrate superior performance by our proposed model.
2106.08015
Leonard Bauersfeld
Leonard Bauersfeld, Elia Kaufmann, Philipp Foehn, Sihao Sun, Davide Scaramuzza
NeuroBEM: Hybrid Aerodynamic Quadrotor Model
9 pages + 1 pages references
Robotics: Science and Systems (RSS), 2021
10.15607/RSS.2021.XVII.042
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quadrotors are extremely agile, so much, in fact, that classic first-principle models reach their limits. Aerodynamic effects, while insignificant at low speeds, become the dominant model defect during high speeds or agile maneuvers. Accurate modeling is needed to design robust high-performance control systems and enable flying close to the platform's physical limits. We propose a hybrid approach fusing first principles and learning to model quadrotors and their aerodynamic effects with unprecedented accuracy. First principles fail to capture such aerodynamic effects, rendering traditional approaches inaccurate when used for simulation or controller tuning. Data-driven approaches try to capture aerodynamic effects with blackbox modeling, such as neural networks; however, they struggle to robustly generalize to arbitrary flight conditions. Our hybrid approach unifies and outperforms both first-principles blade-element theory and learned residual dynamics. It is evaluated in one of the world's largest motion-capture systems, using autonomous-quadrotor-flight data at speeds up to 65km/h. The resulting model captures the aerodynamic thrust, torques, and parasitic effects with astonishing accuracy, outperforming existing models with 50% reduced prediction errors, and shows strong generalization capabilities beyond the training set.
[ { "created": "Tue, 15 Jun 2021 09:59:52 GMT", "version": "v1" } ]
2022-01-19
[ [ "Bauersfeld", "Leonard", "" ], [ "Kaufmann", "Elia", "" ], [ "Foehn", "Philipp", "" ], [ "Sun", "Sihao", "" ], [ "Scaramuzza", "Davide", "" ] ]
Quadrotors are extremely agile, so much, in fact, that classic first-principle models reach their limits. Aerodynamic effects, while insignificant at low speeds, become the dominant model defect during high speeds or agile maneuvers. Accurate modeling is needed to design robust high-performance control systems and enable flying close to the platform's physical limits. We propose a hybrid approach fusing first principles and learning to model quadrotors and their aerodynamic effects with unprecedented accuracy. First principles fail to capture such aerodynamic effects, rendering traditional approaches inaccurate when used for simulation or controller tuning. Data-driven approaches try to capture aerodynamic effects with blackbox modeling, such as neural networks; however, they struggle to robustly generalize to arbitrary flight conditions. Our hybrid approach unifies and outperforms both first-principles blade-element theory and learned residual dynamics. It is evaluated in one of the world's largest motion-capture systems, using autonomous-quadrotor-flight data at speeds up to 65km/h. The resulting model captures the aerodynamic thrust, torques, and parasitic effects with astonishing accuracy, outperforming existing models with 50% reduced prediction errors, and shows strong generalization capabilities beyond the training set.
1704.03521
Igor Podlubny
Matej Mikulszky, Jana Pocsova, Andrea Mojzisova, Igor Podlubny
Responsive Graphical User Interface (ReGUI) and its Implementation in MATLAB
8 pages, 3 figures
null
null
null
cs.HC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce the responsive graphical user interface (ReGUI) approach to creating applications, and demonstrate how this approach can be implemented in MATLAB. The same general technique can be used in other programming languages.
[ { "created": "Sun, 9 Apr 2017 20:18:25 GMT", "version": "v1" } ]
2017-04-13
[ [ "Mikulszky", "Matej", "" ], [ "Pocsova", "Jana", "" ], [ "Mojzisova", "Andrea", "" ], [ "Podlubny", "Igor", "" ] ]
In this paper we introduce the responsive graphical user interface (ReGUI) approach to creating applications, and demonstrate how this approach can be implemented in MATLAB. The same general technique can be used in other programming languages.
2307.14956
Bal\'azs Hidasi
Bal\'azs Hidasi, \'Ad\'am Tibor Czapp
The Effect of Third Party Implementations on Reproducibility
Appearing in the Proceedings of the 17th ACM Conference on Recommender Systems (RecSys'23)
null
10.1145/3604915.3609487
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reproducibility of recommender systems research has come under scrutiny during recent years. Along with works focusing on repeating experiments with certain algorithms, the research community has also started discussing various aspects of evaluation and how these affect reproducibility. We add a novel angle to this discussion by examining how unofficial third-party implementations could benefit or hinder reproducibility. Besides giving a general overview, we thoroughly examine six third-party implementations of a popular recommender algorithm and compare them to the official version on five public datasets. In the light of our alarming findings we aim to draw the attention of the research community to this neglected aspect of reproducibility.
[ { "created": "Thu, 27 Jul 2023 15:48:13 GMT", "version": "v1" } ]
2023-07-28
[ [ "Hidasi", "Balázs", "" ], [ "Czapp", "Ádám Tibor", "" ] ]
Reproducibility of recommender systems research has come under scrutiny during recent years. Along with works focusing on repeating experiments with certain algorithms, the research community has also started discussing various aspects of evaluation and how these affect reproducibility. We add a novel angle to this discussion by examining how unofficial third-party implementations could benefit or hinder reproducibility. Besides giving a general overview, we thoroughly examine six third-party implementations of a popular recommender algorithm and compare them to the official version on five public datasets. In the light of our alarming findings we aim to draw the attention of the research community to this neglected aspect of reproducibility.
2105.03502
Fitzroy Nembhard
Fitzroy D. Nembhard and Marco M. Carvalho
Conversational Code Analysis: The Future of Secure Coding
Accepted on May 12, 2021 for publication in Coding Theory - Recent Advances, New Perspectives and Applications, IntechOpen, London
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The area of software development and secure coding can benefit significantly from advancements in virtual assistants. Research has shown that many coders neglect security in favor of meeting deadlines. This shortcoming leaves systems vulnerable to attackers. While a plethora of tools are available for programmers to scan their code for vulnerabilities, finding the right tool can be challenging. It is therefore imperative to adopt measures to get programmers to utilize code analysis tools that will help them produce more secure code. This chapter looks at the limitations of existing approaches to secure coding and proposes a methodology that allows programmers to scan and fix vulnerabilities in program code by communicating with virtual assistants on their smart devices. With the ubiquitous move towards virtual assistants, it is important to design systems that are more reliant on voice than on standard point-and-click and keyboard-driven approaches. Consequently, we propose MyCodeAnalyzer, a Google Assistant app and code analysis framework, which was designed to interactively scan program code for vulnerabilities and flaws using voice commands during development. We describe the proposed methodology, implement a prototype, test it on a vulnerable project and present our results.
[ { "created": "Fri, 7 May 2021 20:53:44 GMT", "version": "v1" }, { "created": "Wed, 12 May 2021 20:28:30 GMT", "version": "v2" } ]
2021-05-14
[ [ "Nembhard", "Fitzroy D.", "" ], [ "Carvalho", "Marco M.", "" ] ]
The area of software development and secure coding can benefit significantly from advancements in virtual assistants. Research has shown that many coders neglect security in favor of meeting deadlines. This shortcoming leaves systems vulnerable to attackers. While a plethora of tools are available for programmers to scan their code for vulnerabilities, finding the right tool can be challenging. It is therefore imperative to adopt measures to get programmers to utilize code analysis tools that will help them produce more secure code. This chapter looks at the limitations of existing approaches to secure coding and proposes a methodology that allows programmers to scan and fix vulnerabilities in program code by communicating with virtual assistants on their smart devices. With the ubiquitous move towards virtual assistants, it is important to design systems that are more reliant on voice than on standard point-and-click and keyboard-driven approaches. Consequently, we propose MyCodeAnalyzer, a Google Assistant app and code analysis framework, which was designed to interactively scan program code for vulnerabilities and flaws using voice commands during development. We describe the proposed methodology, implement a prototype, test it on a vulnerable project and present our results.
1111.0554
Abbas Mehrabian
Shayan Ehsani and Saber Shokat Fadaee and MohammadAmin Fazli and Abbas Mehrabian and Sina Sadeghian Sadeghabad and MohammadAli Safari and Morteza Saghafian
On a Bounded Budget Network Creation Game
28 pages, 3 figures, preliminary version appeared in SPAA'11
ACM Transactions on Algorithms (2015), 11(4), article 34
10.1145/2701615
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a network creation game in which each player (vertex) has a fixed budget to establish links to other players. In our model, each link has unit price and each agent tries to minimize its cost, which is either its local diameter or its total distance to other players in the (undirected) underlying graph of the created network. Two versions of the game are studied: in the MAX version, the cost incurred to a vertex is the maximum distance between the vertex and other vertices, and in the SUM version, the cost incurred to a vertex is the sum of distances between the vertex and other vertices. We prove that in both versions pure Nash equilibria exist, but the problem of finding the best response of a vertex is NP-hard. We take the social cost of the created network to be its diameter, and next we study the maximum possible diameter of an equilibrium graph with n vertices in various cases. When the sum of players' budgets is n-1, the equilibrium graphs are always trees, and we prove that their maximum diameter is Theta(n) and Theta(log n) in MAX and SUM versions, respectively. When each vertex has unit budget (i.e. can establish link to just one vertex), the diameter of any equilibrium graph in either version is Theta(1). We give examples of equilibrium graphs in the MAX version, such that all vertices have positive budgets and yet the diameter is Omega(sqrt(log n)). This interesting (and perhaps counter-intuitive) result shows that increasing the budgets may increase the diameter of equilibrium graphs and hence deteriorate the network structure. Then we prove that every equilibrium graph in the SUM version has diameter 2^O(sqrt(log n)). Finally, we show that if the budget of each player is at least k, then every equilibrium graph in the SUM version is k-connected or has diameter smaller than 4.
[ { "created": "Wed, 2 Nov 2011 16:11:07 GMT", "version": "v1" }, { "created": "Sun, 10 Jun 2012 22:20:25 GMT", "version": "v2" } ]
2015-04-21
[ [ "Ehsani", "Shayan", "" ], [ "Fadaee", "Saber Shokat", "" ], [ "Fazli", "MohammadAmin", "" ], [ "Mehrabian", "Abbas", "" ], [ "Sadeghabad", "Sina Sadeghian", "" ], [ "Safari", "MohammadAli", "" ], [ "Saghafian", "Morteza", "" ] ]
We consider a network creation game in which each player (vertex) has a fixed budget to establish links to other players. In our model, each link has unit price and each agent tries to minimize its cost, which is either its local diameter or its total distance to other players in the (undirected) underlying graph of the created network. Two versions of the game are studied: in the MAX version, the cost incurred to a vertex is the maximum distance between the vertex and other vertices, and in the SUM version, the cost incurred to a vertex is the sum of distances between the vertex and other vertices. We prove that in both versions pure Nash equilibria exist, but the problem of finding the best response of a vertex is NP-hard. We take the social cost of the created network to be its diameter, and next we study the maximum possible diameter of an equilibrium graph with n vertices in various cases. When the sum of players' budgets is n-1, the equilibrium graphs are always trees, and we prove that their maximum diameter is Theta(n) and Theta(log n) in MAX and SUM versions, respectively. When each vertex has unit budget (i.e. can establish link to just one vertex), the diameter of any equilibrium graph in either version is Theta(1). We give examples of equilibrium graphs in the MAX version, such that all vertices have positive budgets and yet the diameter is Omega(sqrt(log n)). This interesting (and perhaps counter-intuitive) result shows that increasing the budgets may increase the diameter of equilibrium graphs and hence deteriorate the network structure. Then we prove that every equilibrium graph in the SUM version has diameter 2^O(sqrt(log n)). Finally, we show that if the budget of each player is at least k, then every equilibrium graph in the SUM version is k-connected or has diameter smaller than 4.
2206.04958
Guangyi Zhao
Guangyi Zhao and Simin Kou and Xuesong Yin
Self-Supervised Deep Subspace Clustering with Entropy-norm
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Auto-Encoder based deep subspace clustering (DSC) is widely used in computer vision, motion segmentation and image processing. However, it suffers from the following three issues in the self-expressive matrix learning process: the first one is less useful information for learning self-expressive weights due to the simple reconstruction loss; the second one is that the construction of the self-expression layer associated with the sample size requires high-computational cost; and the last one is the limited connectivity of the existing regularization terms. In order to address these issues, in this paper we propose a novel model named Self-Supervised deep Subspace Clustering with Entropy-norm (S$^{3}$CE). Specifically, S$^{3}$CE exploits a self-supervised contrastive network to gain a more effective feature vector. The local structure and dense connectivity of the original data benefit from the self-expressive layer and additional entropy-norm constraint. Moreover, a new module with data enhancement is designed to help S$^{3}$CE focus on the key information of data, and improve the clustering performance of positive and negative instances through spectral clustering. Extensive experimental results demonstrate the superior performance of S$^{3}$CE in comparison to the state-of-the-art approaches.
[ { "created": "Fri, 10 Jun 2022 09:15:33 GMT", "version": "v1" } ]
2022-06-13
[ [ "Zhao", "Guangyi", "" ], [ "Kou", "Simin", "" ], [ "Yin", "Xuesong", "" ] ]
Auto-Encoder based deep subspace clustering (DSC) is widely used in computer vision, motion segmentation and image processing. However, it suffers from the following three issues in the self-expressive matrix learning process: the first one is less useful information for learning self-expressive weights due to the simple reconstruction loss; the second one is that the construction of the self-expression layer associated with the sample size requires high-computational cost; and the last one is the limited connectivity of the existing regularization terms. In order to address these issues, in this paper we propose a novel model named Self-Supervised deep Subspace Clustering with Entropy-norm (S$^{3}$CE). Specifically, S$^{3}$CE exploits a self-supervised contrastive network to gain a more effective feature vector. The local structure and dense connectivity of the original data benefit from the self-expressive layer and additional entropy-norm constraint. Moreover, a new module with data enhancement is designed to help S$^{3}$CE focus on the key information of data, and improve the clustering performance of positive and negative instances through spectral clustering. Extensive experimental results demonstrate the superior performance of S$^{3}$CE in comparison to the state-of-the-art approaches.
cs/0502076
Sebastian Roch
Elchanan Mossel, S\'ebastien Roch
Learning nonsingular phylogenies and hidden Markov models
Published at http://dx.doi.org/10.1214/105051606000000024 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Annals of Applied Probability 2006, Vol. 16, No. 2, 583-614
10.1214/105051606000000024
IMS-AAP-AAP0161
cs.LG cs.CE math.PR math.ST q-bio.PE stat.TH
null
In this paper we study the problem of learning phylogenies and hidden Markov models. We call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1). We highlight the role of the nonsingularity condition for the learning problem. Learning hidden Markov models without the nonsingularity condition is at least as hard as learning parity with noise, a well-known learning problem conjectured to be computationally hard. On the other hand, we give a polynomial-time algorithm for learning nonsingular phylogenies and hidden Markov models.
[ { "created": "Fri, 18 Feb 2005 01:31:53 GMT", "version": "v1" }, { "created": "Wed, 5 Jul 2006 05:29:36 GMT", "version": "v2" } ]
2016-08-16
[ [ "Mossel", "Elchanan", "" ], [ "Roch", "Sébastien", "" ] ]
In this paper we study the problem of learning phylogenies and hidden Markov models. We call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1). We highlight the role of the nonsingularity condition for the learning problem. Learning hidden Markov models without the nonsingularity condition is at least as hard as learning parity with noise, a well-known learning problem conjectured to be computationally hard. On the other hand, we give a polynomial-time algorithm for learning nonsingular phylogenies and hidden Markov models.
1706.08947
Behnam Neyshabur
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan Srebro
Exploring Generalization in Deep Learning
19 pages, 8 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.
[ { "created": "Tue, 27 Jun 2017 17:20:06 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2017 17:10:40 GMT", "version": "v2" } ]
2017-07-07
[ [ "Neyshabur", "Behnam", "" ], [ "Bhojanapalli", "Srinadh", "" ], [ "McAllester", "David", "" ], [ "Srebro", "Nathan", "" ] ]
With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.
1101.3684
Sandor Soos
Sandor Soos and George Kampis
Bio-inspired Methods for Dynamic Network Analysis in Science Mapping
null
null
null
null
cs.DL cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply bio-inspired methods for the analysis of different dynamic bibliometric networks (linking papers by citation, authors, and keywords, respectively). Biological species are clusters of individuals defined by widely different criteria and in the biological perspective it is natural to (1) use different categorizations on the same entities (2) to compare the different categorizations and to analyze the dissimilarities, especially as they change over time. We employ the same methodology to comparisons of bibliometric classifications. We constructed them as analogs of three species concepts: cladistic or lineage based, similarity based, and "biological species" (based on co-reproductive ability). We use the Rand and Jaccard indexes to compare classifications in different time intervals. The experiment aims to address the classic problem of science mapping, as to what extent the various techniques based on different bibliometric indicators, such as citations, keywords or authors are able to detect convergent structures in the literature, that is, to identify coherent specialities or research directions and their dynamics.
[ { "created": "Wed, 19 Jan 2011 13:40:18 GMT", "version": "v1" } ]
2011-01-20
[ [ "Soos", "Sandor", "" ], [ "Kampis", "George", "" ] ]
We apply bio-inspired methods for the analysis of different dynamic bibliometric networks (linking papers by citation, authors, and keywords, respectively). Biological species are clusters of individuals defined by widely different criteria and in the biological perspective it is natural to (1) use different categorizations on the same entities (2) to compare the different categorizations and to analyze the dissimilarities, especially as they change over time. We employ the same methodology to comparisons of bibliometric classifications. We constructed them as analogs of three species concepts: cladistic or lineage based, similarity based, and "biological species" (based on co-reproductive ability). We use the Rand and Jaccard indexes to compare classifications in different time intervals. The experiment aims to address the classic problem of science mapping, as to what extent the various techniques based on different bibliometric indicators, such as citations, keywords or authors are able to detect convergent structures in the literature, that is, to identify coherent specialities or research directions and their dynamics.
2005.11158
Christian G\"ottel
Christian G\"ottel, Lars Nielsen, Niloofar Yazdani, Pascal Felber, Daniel E. Lucani and Valerio Schiavoni
Hermes: Enabling Energy-efficient IoT Networks with Generalized Deduplication
This work was partially financed by the SCALE-IoT Project (Grant No. 7026-00042B) granted by the Independent Research Fund Denmark, by the Aarhus Universitets Forskningsfond (AUFF) Starting Grant Project AUFF- 2017-FLS-7-1, and Aarhus University's DIGIT Centre. European Commission Project: LEGaTO - Low Energy Toolset for Heterogeneous Computing (EC-H2020-780681)
DEBS'20: Proceedings of the 14th ACM International Conference on Distributed and Event-Based Systems (2020) 133-136
10.1145/3401025.3404098
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
With the advent of the Internet of Things (IoT), the ever growing number of connected devices observed in recent years and foreseen for the next decade suggests that more and more data will have to be transmitted over a network, before being processed and stored in data centers. Generalized deduplication (GD) is a novel technique to effectively reduce the data storage cost by identifying similar data chunks, and able to gradually reduce the pressure from the network infrastructure by limiting the data that needs to be transmitted. This paper presents Hermes, an application-level protocol for the data-plane that can operate over generalized deduplication, as well as over classic deduplication. Hermes significantly reduces the data transmission traffic while effectively decreasing the energy footprint, a relevant matter to consider in the context of IoT deployments. We fully implemented Hermes and evaluated its performance using consumer-grade IoT devices (e.g., Raspberry Pi 4B models). Our results highlight several trade-offs that must be taken into account when considering real-world workloads.
[ { "created": "Fri, 22 May 2020 12:59:38 GMT", "version": "v1" }, { "created": "Mon, 20 Jul 2020 12:02:40 GMT", "version": "v2" } ]
2020-07-21
[ [ "Göttel", "Christian", "" ], [ "Nielsen", "Lars", "" ], [ "Yazdani", "Niloofar", "" ], [ "Felber", "Pascal", "" ], [ "Lucani", "Daniel E.", "" ], [ "Schiavoni", "Valerio", "" ] ]
With the advent of the Internet of Things (IoT), the ever growing number of connected devices observed in recent years and foreseen for the next decade suggests that more and more data will have to be transmitted over a network, before being processed and stored in data centers. Generalized deduplication (GD) is a novel technique to effectively reduce the data storage cost by identifying similar data chunks, and able to gradually reduce the pressure from the network infrastructure by limiting the data that needs to be transmitted. This paper presents Hermes, an application-level protocol for the data-plane that can operate over generalized deduplication, as well as over classic deduplication. Hermes significantly reduces the data transmission traffic while effectively decreasing the energy footprint, a relevant matter to consider in the context of IoT deployments. We fully implemented Hermes and evaluated its performance using consumer-grade IoT devices (e.g., Raspberry Pi 4B models). Our results highlight several trade-offs that must be taken into account when considering real-world workloads.
1711.07011
Ilke Cugu
\.Ilke \c{C}u\u{g}u, Eren \c{S}ener, Emre Akba\c{s}
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Face Images
International Conference on Image Processing Theory, Tools and Applications (IPTA) 2019 camera ready version. Codes are available at: https://github.com/cuguilke/microexpnet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is aimed at creating extremely small and fast convolutional neural networks (CNN) for the problem of facial expression recognition (FER) from frontal face images. To this end, we employed the popular knowledge distillation (KD) method and identified two major shortcomings with its use: 1) a fine-grained grid search is needed for tuning the temperature hyperparameter and 2) to find the optimal size-accuracy balance, one needs to search for the final network size (or the compression rate). On the other hand, KD is proved to be useful for model compression for the FER problem, and we discovered that its effects get more and more significant with the decreasing model size. In addition, we hypothesized that translation invariance achieved using max-pooling layers would not be useful for the FER problem as the expressions are sensitive to small, pixel-wise changes around the eye and the mouth. However, we have found an intriguing improvement on generalization when max-pooling is used. We conducted experiments on two widely-used FER datasets, CK+ and Oulu-CASIA. Our smallest model (MicroExpNet), obtained using knowledge distillation, is less than 1MB in size and works at 1851 frames per second on an Intel i7 CPU. Despite being less accurate than the state-of-the-art, MicroExpNet still provides significant insights for designing a microarchitecture for the FER problem.
[ { "created": "Sun, 19 Nov 2017 12:31:09 GMT", "version": "v1" }, { "created": "Mon, 13 Aug 2018 08:40:17 GMT", "version": "v2" }, { "created": "Thu, 17 Jan 2019 07:38:19 GMT", "version": "v3" }, { "created": "Tue, 24 Dec 2019 10:44:15 GMT", "version": "v4" } ]
2019-12-25
[ [ "Çuğu", "İlke", "" ], [ "Şener", "Eren", "" ], [ "Akbaş", "Emre", "" ] ]
This paper is aimed at creating extremely small and fast convolutional neural networks (CNN) for the problem of facial expression recognition (FER) from frontal face images. To this end, we employed the popular knowledge distillation (KD) method and identified two major shortcomings with its use: 1) a fine-grained grid search is needed for tuning the temperature hyperparameter and 2) to find the optimal size-accuracy balance, one needs to search for the final network size (or the compression rate). On the other hand, KD is proved to be useful for model compression for the FER problem, and we discovered that its effects get more and more significant with the decreasing model size. In addition, we hypothesized that translation invariance achieved using max-pooling layers would not be useful for the FER problem as the expressions are sensitive to small, pixel-wise changes around the eye and the mouth. However, we have found an intriguing improvement on generalization when max-pooling is used. We conducted experiments on two widely-used FER datasets, CK+ and Oulu-CASIA. Our smallest model (MicroExpNet), obtained using knowledge distillation, is less than 1MB in size and works at 1851 frames per second on an Intel i7 CPU. Despite being less accurate than the state-of-the-art, MicroExpNet still provides significant insights for designing a microarchitecture for the FER problem.
2303.14227
Rafael Pina
Rafael Pina, Varuna De Silva and Corentin Artaud
Causality Detection for Efficient Multi-Agent Reinforcement Learning
null
null
null
null
cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When learning a task as a team, some agents in Multi-Agent Reinforcement Learning (MARL) may fail to understand their true impact on the performance of the team. Such agents end up learning sub-optimal policies, demonstrating undesired lazy behaviours. To investigate this problem, we start by formalising the use of temporal causality applied to MARL problems. We then show how causality can be used to penalise such lazy agents and improve their behaviours. By understanding how their local observations are causally related to the team reward, each agent in the team can adjust their individual credit based on whether they helped to cause the reward or not. We show empirically that using causality estimations in MARL improves not only the holistic performance of the team, but also the individual capabilities of each agent. We observe that the improvements are consistent in a set of different environments.
[ { "created": "Fri, 24 Mar 2023 18:47:44 GMT", "version": "v1" } ]
2023-03-28
[ [ "Pina", "Rafael", "" ], [ "De Silva", "Varuna", "" ], [ "Artaud", "Corentin", "" ] ]
When learning a task as a team, some agents in Multi-Agent Reinforcement Learning (MARL) may fail to understand their true impact on the performance of the team. Such agents end up learning sub-optimal policies, demonstrating undesired lazy behaviours. To investigate this problem, we start by formalising the use of temporal causality applied to MARL problems. We then show how causality can be used to penalise such lazy agents and improve their behaviours. By understanding how their local observations are causally related to the team reward, each agent in the team can adjust their individual credit based on whether they helped to cause the reward or not. We show empirically that using causality estimations in MARL improves not only the holistic performance of the team, but also the individual capabilities of each agent. We observe that the improvements are consistent across a set of different environments.
2009.11722
Bogdan Trasnea
Sorin Grigorescu, Tiberiu Cocias, Bogdan Trasnea, Andrea Margheri, Federico Lombardi, Leonardo Aniello
Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI Inference Engines in Autonomous Vehicles
21 pages; published in Sensors: https://www.mdpi.com/1424-8220/20/19/5450
null
10.3390/s20195450
null
cs.SE cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-driving cars and autonomous vehicles are revolutionizing the automotive sector, shaping the future of mobility altogether. Although the integration of novel technologies such as Artificial Intelligence (AI) and Cloud/Edge computing provides golden opportunities to improve autonomous driving applications, there is a need to modernize the whole prototyping and deployment cycle of AI components accordingly. This paper proposes a novel framework for developing so-called AI Inference Engines for autonomous driving applications based on deep learning modules, where training tasks are deployed elastically over both Cloud and Edge resources, with the purpose of reducing the required network bandwidth, as well as mitigating privacy issues. Based on our proposed data-driven V-Model, we introduce a simple yet elegant solution for the AI components development cycle, where prototyping takes place in the cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment and evaluation on the target ECUs (Electronic Control Units) is performed as Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework is demonstrated using two real-world use cases of AI inference engines for autonomous vehicles, namely environment perception and most probable path prediction.
[ { "created": "Wed, 23 Sep 2020 09:23:29 GMT", "version": "v1" } ]
2020-09-25
[ [ "Grigorescu", "Sorin", "" ], [ "Cocias", "Tiberiu", "" ], [ "Trasnea", "Bogdan", "" ], [ "Margheri", "Andrea", "" ], [ "Lombardi", "Federico", "" ], [ "Aniello", "Leonardo", "" ] ]
Self-driving cars and autonomous vehicles are revolutionizing the automotive sector, shaping the future of mobility altogether. Although the integration of novel technologies such as Artificial Intelligence (AI) and Cloud/Edge computing provides golden opportunities to improve autonomous driving applications, there is a need to modernize the whole prototyping and deployment cycle of AI components accordingly. This paper proposes a novel framework for developing so-called AI Inference Engines for autonomous driving applications based on deep learning modules, where training tasks are deployed elastically over both Cloud and Edge resources, with the purpose of reducing the required network bandwidth, as well as mitigating privacy issues. Based on our proposed data-driven V-Model, we introduce a simple yet elegant solution for the AI components development cycle, where prototyping takes place in the cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment and evaluation on the target ECUs (Electronic Control Units) is performed as Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework is demonstrated using two real-world use cases of AI inference engines for autonomous vehicles, namely environment perception and most probable path prediction.
2010.11895
Ruosong Wang
Ruosong Wang, Dean P. Foster, Sham M. Kakade
What are the Statistical Limits of Offline RL with Linear Function Approximation?
null
null
null
null
cs.LG cs.AI math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of (causal) sequential decision making strategies. The hope is that offline reinforcement learning coupled with function approximation methods (to deal with the curse of dimensionality) can provide a means to help alleviate the excessive sample complexity burden in modern sequential decision making problems. However, the extent to which this broader approach can be effective is not well understood, where the literature largely consists of sufficient conditions. This work focuses on the basic question of what are necessary representational and distributional conditions that permit provable sample-efficient offline reinforcement learning. Perhaps surprisingly, our main result shows that even if: i) we have realizability in that the true value function of \emph{every} policy is linear in a given set of features and ii) our off-policy data has good coverage over all features (under a strong spectral condition), then any algorithm still (information-theoretically) requires a number of offline samples that is exponential in the problem horizon in order to non-trivially estimate the value of \emph{any} given policy. Our results highlight that sample-efficient offline policy evaluation is simply not possible unless significantly stronger conditions hold; such conditions include either having low distribution shift (where the offline data distribution is close to the distribution of the policy to be evaluated) or significantly stronger representational conditions (beyond realizability).
[ { "created": "Thu, 22 Oct 2020 17:32:13 GMT", "version": "v1" } ]
2020-10-23
[ [ "Wang", "Ruosong", "" ], [ "Foster", "Dean P.", "" ], [ "Kakade", "Sham M.", "" ] ]
Offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of (causal) sequential decision making strategies. The hope is that offline reinforcement learning coupled with function approximation methods (to deal with the curse of dimensionality) can provide a means to help alleviate the excessive sample complexity burden in modern sequential decision making problems. However, the extent to which this broader approach can be effective is not well understood, where the literature largely consists of sufficient conditions. This work focuses on the basic question of what are necessary representational and distributional conditions that permit provable sample-efficient offline reinforcement learning. Perhaps surprisingly, our main result shows that even if: i) we have realizability in that the true value function of \emph{every} policy is linear in a given set of features and ii) our off-policy data has good coverage over all features (under a strong spectral condition), then any algorithm still (information-theoretically) requires a number of offline samples that is exponential in the problem horizon in order to non-trivially estimate the value of \emph{any} given policy. Our results highlight that sample-efficient offline policy evaluation is simply not possible unless significantly stronger conditions hold; such conditions include either having low distribution shift (where the offline data distribution is close to the distribution of the policy to be evaluated) or significantly stronger representational conditions (beyond realizability).
2302.10899
Zhijian Li
Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, and Jack Xin
Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data
null
null
null
null
cs.LG cs.AI cs.IT cs.NA math.IT math.NA
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNN). The FA loss on intermediate feature maps of DNNs plays the role of teaching middle steps of a solution to a student instead of only giving final answers in the conventional KD where the loss acts on the network logits at the output level. Combining logit loss and FA loss, we found that the quantized student network receives stronger supervision than from the labeled ground-truth data. The resulting FAQD is capable of compressing models on label-free data, which brings immediate practical benefits as pre-trained teacher models are readily available and unlabeled data are abundant. In contrast, data labeling is often laborious and expensive. Finally, we propose a fast feature affinity (FFA) loss that accurately approximates FA loss with a lower order of computational complexity, which helps speed up training for high-resolution image inputs.
[ { "created": "Fri, 10 Feb 2023 01:00:49 GMT", "version": "v1" }, { "created": "Thu, 9 Mar 2023 05:54:14 GMT", "version": "v2" }, { "created": "Fri, 18 Aug 2023 18:29:21 GMT", "version": "v3" } ]
2023-08-22
[ [ "Li", "Zhijian", "" ], [ "Yang", "Biao", "" ], [ "Yin", "Penghang", "" ], [ "Qi", "Yingyong", "" ], [ "Xin", "Jack", "" ] ]
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNN). The FA loss on intermediate feature maps of DNNs plays the role of teaching middle steps of a solution to a student instead of only giving final answers in the conventional KD where the loss acts on the network logits at the output level. Combining logit loss and FA loss, we found that the quantized student network receives stronger supervision than from the labeled ground-truth data. The resulting FAQD is capable of compressing models on label-free data, which brings immediate practical benefits as pre-trained teacher models are readily available and unlabeled data are abundant. In contrast, data labeling is often laborious and expensive. Finally, we propose a fast feature affinity (FFA) loss that accurately approximates FA loss with a lower order of computational complexity, which helps speed up training for high-resolution image inputs.
1806.10478
Tommaso Soru
Tommaso Soru, Edgard Marx, Andr\'e Valdestilhas, Diego Esteves, Diego Moussallem, Gustavo Publio
Neural Machine Translation for Query Construction and Composition
ICML workshop on Neural Abstract Machines & Program Induction v2 (NAMPI), extended abstract
null
null
null
cs.CL cs.AI cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on question answering with knowledge bases has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm for question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we opt for a semi-supervised approach, where alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using recent notable works in natural language generation.
[ { "created": "Wed, 27 Jun 2018 13:40:49 GMT", "version": "v1" }, { "created": "Mon, 9 Jul 2018 14:25:46 GMT", "version": "v2" } ]
2018-07-10
[ [ "Soru", "Tommaso", "" ], [ "Marx", "Edgard", "" ], [ "Valdestilhas", "André", "" ], [ "Esteves", "Diego", "" ], [ "Moussallem", "Diego", "" ], [ "Publio", "Gustavo", "" ] ]
Research on question answering with knowledge bases has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm for question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we opt for a semi-supervised approach, where alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using recent notable works in natural language generation.
1811.02328
Kaipeng Zhang
Kaipeng Zhang, Zhanpeng Zhang, Chia-Wen Cheng, Winston H. Hsu, Yu Qiao, Wei Liu, Tong Zhang
Super-Identity Convolutional Neural Network for Face Hallucination
Published in ECCV 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face hallucination is a generative task to super-resolve the facial image with low resolution while human perception of face heavily relies on identity information. However, previous face hallucination approaches largely ignore facial identity recovery. This paper proposes Super-Identity Convolutional Neural Network (SICNN) to recover identity information for generating faces close to the real identity. Specifically, we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space. However, directly using this loss will lead to a Dynamic Domain Divergence problem, which is caused by the large margin between the high-resolution domain and the hallucination domain. To overcome this challenge, we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains. Extensive experimental evaluations demonstrate that the proposed SICNN achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12$\times$14 faces with an 8$\times$ upscaling factor. In addition, SICNN significantly improves the recognizability of ultra-low-resolution faces.
[ { "created": "Tue, 6 Nov 2018 12:50:08 GMT", "version": "v1" } ]
2018-11-07
[ [ "Zhang", "Kaipeng", "" ], [ "Zhang", "Zhanpeng", "" ], [ "Cheng", "Chia-Wen", "" ], [ "Hsu", "Winston H.", "" ], [ "Qiao", "Yu", "" ], [ "Liu", "Wei", "" ], [ "Zhang", "Tong", "" ] ]
Face hallucination is a generative task to super-resolve the facial image with low resolution while human perception of face heavily relies on identity information. However, previous face hallucination approaches largely ignore facial identity recovery. This paper proposes Super-Identity Convolutional Neural Network (SICNN) to recover identity information for generating faces close to the real identity. Specifically, we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space. However, directly using this loss will lead to a Dynamic Domain Divergence problem, which is caused by the large margin between the high-resolution domain and the hallucination domain. To overcome this challenge, we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains. Extensive experimental evaluations demonstrate that the proposed SICNN achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12$\times$14 faces with an 8$\times$ upscaling factor. In addition, SICNN significantly improves the recognizability of ultra-low-resolution faces.
2012.05270
Alessio Colucci
Alessio Colucci, D\'avid Juh\'asz, Martin Mosbeck, Alberto Marchisio, Semeen Rehman, Manfred Kreutzer, Guenther Nadbath, Axel Jantsch and Muhammad Shafique
MLComp: A Methodology for Machine Learning-based Performance Estimation and Adaptive Selection of Pareto-Optimal Compiler Optimization Sequences
Accepted for publication at the 24th IEEE/ACM Design, Automation and Test in Europe (DATE'21) Conference, February, 2021
null
10.23919/DATE51398.2021.9474158
null
cs.LG cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedded systems have proliferated in various consumer and industrial applications with the evolution of Cyber-Physical Systems and the Internet of Things. These systems are subjected to stringent constraints so that embedded software must be optimized for multiple objectives simultaneously, namely reduced energy consumption, execution time, and code size. Compilers offer optimization phases to improve these metrics. However, their proper selection and ordering depend on multiple factors and typically require expert knowledge. State-of-the-art optimizers handle different platforms and applications case by case, and they are limited to optimizing one metric at a time, as well as requiring a time-consuming adaptation for different targets through dynamic profiling. To address these problems, we propose the novel MLComp methodology, in which optimization phases are sequenced by a Reinforcement Learning-based policy. Training of the policy is supported by Machine Learning-based analytical models for quick performance estimation, thereby drastically reducing the time spent for dynamic profiling. In our framework, different Machine Learning models are automatically tested to choose the best-fitting one. The trained Performance Estimator model is leveraged to efficiently devise Reinforcement Learning-based multi-objective policies for creating quasi-optimal phase sequences. Compared to state-of-the-art estimation models, our Performance Estimator model achieves lower relative error (<2%) with up to 50x faster training time over multiple platforms and application domains. Our Phase Selection Policy improves execution time and energy consumption of a given code by up to 12% and 6%, respectively. The Performance Estimator and the Phase Selection Policy can be trained efficiently for any target platform and application domain.
[ { "created": "Wed, 9 Dec 2020 19:13:39 GMT", "version": "v1" }, { "created": "Fri, 11 Dec 2020 11:53:33 GMT", "version": "v2" } ]
2021-10-12
[ [ "Colucci", "Alessio", "" ], [ "Juhász", "Dávid", "" ], [ "Mosbeck", "Martin", "" ], [ "Marchisio", "Alberto", "" ], [ "Rehman", "Semeen", "" ], [ "Kreutzer", "Manfred", "" ], [ "Nadbath", "Guenther", "" ], [ "Jantsch", "Axel", "" ], [ "Shafique", "Muhammad", "" ] ]
Embedded systems have proliferated in various consumer and industrial applications with the evolution of Cyber-Physical Systems and the Internet of Things. These systems are subjected to stringent constraints so that embedded software must be optimized for multiple objectives simultaneously, namely reduced energy consumption, execution time, and code size. Compilers offer optimization phases to improve these metrics. However, their proper selection and ordering depend on multiple factors and typically require expert knowledge. State-of-the-art optimizers handle different platforms and applications case by case, and they are limited to optimizing one metric at a time, as well as requiring a time-consuming adaptation for different targets through dynamic profiling. To address these problems, we propose the novel MLComp methodology, in which optimization phases are sequenced by a Reinforcement Learning-based policy. Training of the policy is supported by Machine Learning-based analytical models for quick performance estimation, thereby drastically reducing the time spent for dynamic profiling. In our framework, different Machine Learning models are automatically tested to choose the best-fitting one. The trained Performance Estimator model is leveraged to efficiently devise Reinforcement Learning-based multi-objective policies for creating quasi-optimal phase sequences. Compared to state-of-the-art estimation models, our Performance Estimator model achieves lower relative error (<2%) with up to 50x faster training time over multiple platforms and application domains. Our Phase Selection Policy improves execution time and energy consumption of a given code by up to 12% and 6%, respectively. The Performance Estimator and the Phase Selection Policy can be trained efficiently for any target platform and application domain.
1104.0862
Photios Stavrou
Photios A. Stavrou and Charalambos D. Charalambous
Causal Rate Distortion Function and Relations to Filtering Theory
8 pages; 3 figures; Presented in 20th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2012)
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/3.0/
A causal rate distortion function is defined, its solution is described, and its relation to filtering theory is discussed. The relation to filtering is obtained via a causal constraint imposed on the reconstruction kernel to be realizable.
[ { "created": "Tue, 5 Apr 2011 14:45:08 GMT", "version": "v1" }, { "created": "Sat, 18 Feb 2012 01:12:11 GMT", "version": "v2" }, { "created": "Wed, 6 Jun 2012 10:59:57 GMT", "version": "v3" } ]
2012-06-07
[ [ "Stavrou", "Photios A.", "" ], [ "Charalambous", "Charalambos D.", "" ] ]
A causal rate distortion function is defined, its solution is described, and its relation to filtering theory is discussed. The relation to filtering is obtained via a causal constraint imposed on the reconstruction kernel to be realizable.